Research and analysis

Building great teachers? Initial teacher education curriculum research: phase 2

Updated 14 January 2020

Applies to England


Executive summary

In December 2018, we published the findings of our 2-year research investigation into the curriculum in schools. This was an important study because it provided a secure evidence base on which we were able to re-focus school, further education and skills (FES) and early years inspection onto the substance of education: the curriculum. This applies both to what inspectors should assess under ‘quality of education’ and to the inspection methodology they should use to make valid judgements. We published the findings of this research and subsequently incorporated them into the development of our education inspection framework (EIF).

Given the success of this research design, we wanted to go down a similar path when considering the valid components of our initial teacher education (ITE) framework. The current framework has been in place since 2012 and recently reached the end of its inspection cycle. In addition, the Department for Education (DfE) has published its ‘Initial teacher training core content framework’, and its ‘Early career framework’, which both have significant implications for the inspection of ITE. These updates, along with ensuring alignment with the EIF, mean that now seems the ideal time to refresh our ITE framework.

The purpose of this research study was, therefore, to develop and test a research model that assessed the quality of an ITE programme’s curriculum. We expected the research to identify valid components of ITE curriculum quality, which could then be refined and used to inform the development of a new inspection framework. The model itself, however, is intended to inform development of the inspection framework rather than to be used directly on inspection.

We developed the model using:

  • the findings of our phase 1 literature review
  • evidence from our phase 1 questionnaire for course leaders, trainees and newly qualified teachers (NQTs)
  • the knowledge and expertise of Her Majesty’s Inspectors (HMI)

From this, we created 22 indicators that the evidence suggested might be associated with ITE curriculum quality. This provided coverage across elements of curriculum planning and partnership working. The curriculum planning indicators were similar in design to our previous curriculum research. The partnership working indicators were specific to the ITE context, especially around the implementation of the programme across multiple sites. These indicators were framed against a detailed rubric on a 5-point scale that helped the 17 inspectors participating in the study to make consistent assessments of quality. A methodology for the research visits was also prepared so that the evidence collection activities were closely aligned with the indicator and rubric design. This included in-depth discussions with trainees and school-based mentors.

In total, 46 ITE partnerships were involved in research visits to trial the curriculum quality indicators and visit methodology. This included visits to:

  • 24 school-centred initial teacher training (SCITT) partnerships
  • 20 higher education institutions (HEI)
  • 2 Teach First partnerships

The visits were carried out over 2 days so that evidence from partner schools and settings could be collected to triangulate the full experience of the programme received by trainees.

The fieldwork has provided some encouraging evidence on the effectiveness of the research model. Inspectors found that the indicators and rubric were important in providing them with a clear structure for the research visits. They also found that the model gave them a different, more rounded lens through which to assess an ITE programme compared with the current ITE inspection framework. Importantly, their views and the data collected suggest that the indicators are a better fit with ITE curriculum quality than some outcome measures we currently use.

The research model also gave us greater certainty on the factors that underpin an effective ITE curriculum. For example, communication and building relationships within the partnership were vital. When this was done well, staff across sites bought into the ITE programme as a whole. High-quality and focused mentor training and well-designed quality assurance procedures were also important.

Strong curriculum planning across partnership thresholds was also a necessity. However, some partnerships we visited were concerned that there was not enough time to cover every aspect of teaching in sufficient depth. This had consequences for programme design, particularly when providers tried to fit too much in rather than consider their curriculum priorities. Curriculum imbalance, when a major aspect of teaching became the focus of the programme over other equally important areas, was also a concern in some providers. Offering only surface-level coverage of important areas of teaching is unlikely to meet trainees’ needs. This was particularly an issue with subject knowledge in the foundation subjects of primary teacher education.

In a minority of partnerships, inspectors found poor practice that had an impact on trainees’ knowledge, understanding and practical application of teaching skills. In a few secondary partnerships, for instance, teacher education was focused more on how to maximise Progress 8 scores than on subject knowledge pedagogy. Misconceptions about how to teach were also common in weaker partnerships. For example, ITE programme planning often included the concept of ‘sequencing’, but this was frequently misinterpreted as being about chunking lessons with different activities to engage pupils, rather than sequencing learning over time through the curriculum.

Inspectors provided their views on the viability of the visit process. In general, they were encouraged by the process because it enabled greater scrutiny of curriculum quality. However, they also pointed out some areas where we could improve the design of ITE inspections. For example, they thought that increasing the number of visits to trainee placements within a partnership would provide greater validity to the evidence base. Inspectors were also very keen to point out that speaking with trainees and their mentors was an essential part of the method for assessing the quality of the ITE programme.

Overall, this study gives us confidence that we can assess important factors aligned with curriculum quality in ITE. The findings and evidence are being used to inform the development of the new inspection methodology and to shape the inspection handbook. This work has been enhanced by piloting in the autumn term 2019. Further piloting and development is planned for the spring term 2020.

Main findings

As well as overall strengths in ITE curriculum across the sector, our research model identified some weaknesses in quality across all partnership types. Data suggests that the research model did not favour one type of partnership over others.

The empirical evidence collected did not always reflect the judgement profile for the current inspection framework. This suggests that focusing on curriculum quality has identified areas that require development in a minority of partnerships previously judged to be strong.

There appears to be no relationship between ITE curriculum quality and employment and completion rates. This suggests that these outcome measures might not be the best indicators of programme quality. Given current recruitment challenges, trainees are highly likely to be employed as teachers irrespective of the quality of their ITE programme (though the vast majority of provision is judged at least good in any case).

Partnership working

It was important that sufficiently senior course leaders communicated the partnership’s expectations for the programme and assessment clearly. This was vital for enabling school-, setting- and centre-based teacher education to blend together into a coherent experience for trainees.

In the higher-scoring programmes, course leaders worked with their partnership to plan and deliver a well-sequenced ITE curriculum. This ensured that the programme between centre-based provision and trainee placements was joined-up and allowed trainees to practise what they had learned in central provision. By contrast, leaders in weaker partnerships tended to arrange their programmes to meet the practical needs of partner schools and settings, rather than considering how best trainees learn and develop.

Consequently, some trainees were unable to develop their school curriculum knowledge and understanding beyond centre-based learning. This was because their practice was constrained by their placement provider’s curriculum design.

The extent to which trainers and mentors in schools and settings were trained for their role varied. Higher-scoring partnerships were often determined to improve their school- or setting-based mentors’ own teaching skills. This provided trainees with greater opportunities to be engaged in high-quality discussions about teaching and learning with their mentors.

In weaker partnerships, training for school-based mentors often lacked clarity of purpose or was simply unavailable. This meant that some mentors were unaware of how best to train and support trainees throughout their placements. The highest-scoring partnerships had quality assurance systems that enabled effective 2-way communication. This gave course leaders the means to keep track of trainees’ progress and to intervene when they found gaps in their teaching knowledge and understanding.

Curriculum planning

In the highest scoring partnerships, the sequence of the curriculum was often built with an understanding and consideration of the learning process. Trainees were recognised as novice teachers. Content was grounded in providing them with sufficiently cumulative knowledge and understanding.

In weaker partnerships, sequencing of content was generally ignored in favour of attempting to capture everything in bite-size chunks to ensure coverage of the ‘Teachers’ standards’. This often led to trainees having only a surface-level understanding of teaching concepts and being less well prepared at the end of their programme.

Coverage of subject knowledge and subject-specific pedagogy was often a real strength of partnerships led by HEIs. However, in the weaker HEI programmes, the emphasis on subjects at the centre often left other aspects of teaching to the trainee placements. Poor communication between the centre and placements often resulted in a lack of coherence in the overall programme. This left gaps in trainees’ knowledge and practical application of other aspects of teaching.

Weaker SCITT provision was characterised by weaknesses in the delivery of subject-specific pedagogy because the expertise was not always available. In secondary programmes, this led to trainees not developing an in-depth understanding of their subject. In the weakest examples, it led to an ITE curriculum geared towards educating trainees on how to maximise Progress 8 scores.

SCITT provision tended to feature greater and more coherent coverage of other aspects of teaching, such as behaviour management and teaching pupils with special educational needs and/or disabilities (SEND), than HEI partnerships. It also provided trainees with more hands-on practical experience.

Behaviour management was prioritised across partnerships because controlling the classroom was something that trainees were often most concerned about. In the best partnerships, the focus on behaviour was initiated at the start of the programme and reinforced regularly, particularly by mentors.

Weaknesses in ITE programmes were more acute for primary school trainees. This was often related to the limited coverage of the foundation subjects and science on a 1-year ITE programme. In a few cases, it was linked to the priorities of leaders in partner providers, who gave little focus to non-core subjects within their own curriculum.

The strongest partnerships mitigated this by offering additional subject sessions, making sure that placements gave trainees opportunities to teach foundation subjects, and linking trainees’ progression with professional development courses in their NQT year.

The extent to which providers were up to date with educational research varied considerably. In some HEI partnerships, trainees benefited from trainers who were experts in their field. However, issues arose when the means for transferring theory into useful practice in placements was underdeveloped.

In some partnerships, inspectors identified outdated or irrelevant theories being taught in centre-based provision. This meant that some trainees lacked knowledge on both how pupils learn and curriculum design.

Introduction

Our research investigation into the curriculum of schools identified valid components of curriculum quality that inspectors could assess. We also found differences between our curriculum quality model and published performance outcomes for pupils. This suggests that the latter is not always a good proxy for measuring curriculum quality. We incorporated these findings into the design of our new EIF, which went live in September 2019.

Naturally, changing the inspection framework for schools and other education providers is also likely to have implications for the current framework for how we inspect ITE. Both are intrinsically linked because ITE partnerships provide the lifeblood of early years settings, schools and providers of further education and training: teachers. ITE partnerships are expected to prepare the next generation of teachers to a high professional standard so that they can have that all-important impact on outcomes for children and learners. Failure in this duty is likely to have negative effects on the quality of teaching overall.

Inspection practice should also keep pace with other sector developments. The DfE’s early career framework (ECF) and new framework for ITE core content are welcome developments. However, we need to ensure that our inspection of ITE partnerships provides a consistent approach with these developments. Ultimately, we want to avoid confusion for course leaders, teacher trainers and trainees themselves. Therefore, now is the right time to review the current ITE inspection framework.

The quality of ITE stems from the curriculum offer available to trainees. It should equip trainee teachers with the knowledge and skills they need to teach all learners well, whatever learners’ background or barriers to learning. This also aligns with the rationale and expectations placed on teachers in the EIF. However, like our research on schools’ curriculum, we needed to assure ourselves of the valid components of an ITE curriculum and whether inspectors can assess them effectively. This is particularly important considering the range of different ITE routes and potential variations in curriculums across the sector. For instance, the ITE landscape includes the following main types of partnerships that Ofsted inspects:

  • SCITT – a consortium of schools, usually in a local area or region, providing graduate programmes for teachers
  • HEI – a university or university college that provides undergraduate or postgraduate teacher education programmes. An HEI usually offers an academic qualification that includes qualified teacher status (QTS)
  • Teach First – a charity set up to recruit graduates and provide them with programmes to help them teach in deprived areas
  • further education training – programmes for those entering the FES sector

Each of these types of ITE partnership can offer programmes for up to 4 different age ranges. These are called age-phase partnerships and cover early years, primary, secondary and further education.

Therefore, in our ITE curriculum study, we wanted to investigate validity, to secure evidence that our new inspection framework for ITE can give sufficient and fair coverage across each of these partnership types.

Phase 1 overview

Our starting point in the research was to establish the main concepts of curriculum quality in an ITE context. We carried out this first phase of the study in the spring term 2019. We then published our findings in a commentary by Her Majesty’s Chief Inspector (HMCI). The literature review and questionnaire for course leaders, trainees and NQTs provided an initial evidence base for us to work with.

The evidence from phase 1 revealed several areas of interest for us to pursue that might be related to curriculum quality. Some of the concepts identified were seen in our previous research around curriculum planning, such as the sequencing of content and managing the balance of what is planned to be taught. This was especially the case on shorter ITE routes, in which partnership leaders were concerned with making important decisions on the essentials to include in their programmes. Interestingly, this hinted that some of the curriculum indicators from our previous curriculum work may not be context-dependent and could potentially work across different sectors.

However, we also established several additional criteria relating to the specifics of partnership working across the ITE sector. The evidence collected suggests the following factors are potentially important in ensuring that a well-planned curriculum can be successfully implemented:

  • mentor support and guidance
  • teacher educators’ training and relevant experience
  • communication across the partnership
  • quality assurance procedures

The threshold between what is done at the centre-based provision of the partnership and how this relates to practice during placements is particularly relevant here.

The phase 1 findings also gave us some indication of the data collection methods we might employ to assess these areas. For instance, trainees’ views from the questionnaire suggested that more structured discussions with them might be an effective means of determining the wider impact of a curriculum offer. We perceived this as one way to move inspection away from an over-reliance on employment and completion data and towards accurate assessments of what really matters: the quality of the ITE programme’s curriculum.

Some evidence also suggested that the model of the deep-dive process for the EIF might be worth testing in an ITE context. This was based on the variability we found between partnerships in how they delivered some subjects and aspects of learning to trainees.

Research design for phase 2

These findings provided the basis for developing a research model that we could test out fully during fieldwork in phase 2 of the study. This would allow us to:

  • confirm whether the criteria drawn out from the questionnaire data and literature review were indeed associated with curriculum quality
  • test whether curriculum quality was something that inspectors would be able to regularly identify and assess
  • recognise any practical limitations of what might be possible in the context of an ITE inspection

Our main concern overall was, therefore, determining whether we could create a valid research model that could assess the intent (curriculum planning) and implementation (partnership working) of the ITE curriculum, so that we could understand the impact on trainees’ preparation to teach.

We developed some hypotheses to help answer the main research question. These were that:

  • we expected that weaker partnership working would have a detrimental effect on trainee outcomes, no matter the strengths of curriculum planning
  • the model would work across different ITE routes without biasing against a particular type of ITE partnership
  • the model would identify variability across the curriculum indicators to clearly distinguish between effective and ineffective curriculum design

Curriculum indicators and rubric

Using the evidence from phase 1, alongside the knowledge and expertise of HMI, we created 22 curriculum indicators across 8 domains of interest. One of these was a single impact indicator that was designed to act as an overall outcome variable for post-visit analysis. The 22 indicators designed for the study are shown in figure 1.

Figure 1: List of curriculum indicators in the research model

Rationale

Number Indicator
1a There is a clear and coherent rationale for the overall curriculum of the ITE programme
1b Curriculum aims are shared across the partnership and fully understood by all staff involved in teacher education
1c The curriculum design covers important elements of teaching that develop competent trainee teachers

Concepts

Number Indicator
2a Teacher educators demonstrate an up-to-date understanding of educational research and confidently apply this in their curriculum for trainees

Accountability and leadership

Number Indicator
3a Programme leaders and teacher educators have clear roles and responsibilities to design and deliver an effective curriculum
3b Programme leaders and teacher educators have the knowledge, expertise and practical skill to implement a strong curriculum offer for trainees

Sustainability

Number Indicator
4a Leaders and trainers regularly review and quality assure their curriculum for trainees to ensure that it is implemented sufficiently well
4b Leaders enable curriculum expertise to develop across the partnership, ensuring that trainers are well supported and prepared for delivering the programme

Curriculum planning

Number Indicator
5a The curriculum has sufficient depth and coverage across core teaching elements and subject areas
5b A model of curriculum progression is conceptualised for trainees
5c Theory and classroom practice are linked and mutually reinforcing in the curriculum design
5d Leaders and trainers ensure that trainees have the skills to manage behaviour effectively

Partnerships

Number Indicator
6a The overall curriculum programme is purposefully integrated across differing types of training experience
6b Teacher education is focused on meeting the relevant standards, rather than the day-to-day issues of lesson preparation
6c Mentors are highly effective in supporting trainees and delivering the curriculum
6d Capacity is provided to ensure that the programme is prioritised across the partnership
6e Strong quality assurance procedures ensure that trainees receive a high-quality curriculum offer

Equality

Number Indicator
7a Trainees are equipped with the knowledge and skills to support specific groups of pupils
7b The curriculum provides parity for all trainees
7c The partnership has adapted its curriculum to reflect recent changes to recruitment policies

Assessment

Number Indicator
8a Assessment is designed thoughtfully to shape future programmes – it is not excessive or onerous

Impact

Number Indicator
9a The curriculum is successfully implemented to ensure trainees are ready to teach

Some of the indicators were re-purposed from our curriculum phase 3 study to help us better understand whether curriculum, as a concept, is context-dependent. In these instances, the indicators were shaped so that they related specifically to ITE but maintained their curricular distinctiveness.

We also developed detailed guidance on how inspectors should assess each indicator. This uses a similar method, on a 5-point scale, to that in our curriculum phase 3 research. Full details on the indicators and the rubric for inspectors can be found in Annex A.

Visit method

To further ensure that inspectors could score the indicators consistently, we designed a visit process that complemented the structure of the research model. This allowed inspectors to follow some standardised processes to enhance validity.

The visit design again replicated elements from our curriculum phase 3 research, particularly the concept of a subject deep-dive. However, we also developed the visit process to ensure that it corresponded with the ITE context. One of the main differences contextually is that an ITE curriculum is planned and implemented across several institutions, rather than developed for a single provider. Therefore, designing a fieldwork process that allowed us to test associations between curriculum planning and partnership working was a priority. In addition, other important aspects of teacher development, beyond subjects, were also areas that our model would need to assess. Because of these differences, and to avoid confusion, in this study we refer to the visit process that scrutinises a subject area or aspect of teaching as a ‘focused review’.

On this basis, we implemented a 2-day research visit. A range of activities and discussions were carried out with staff in the central provision and partner providers to inform inspectors’ views on the indicators. Importantly, these were framed around 4 focused reviews in subject areas or aspects relating to teacher education. The purpose of these reviews was to look for evidence of how well the programme is planned and structured to enhance trainees’ knowledge, understanding and practical application of teaching. Further details on the visit method and subject or aspect selection for focused reviews can be found in Annex B.

The fieldwork was carried out by 17 HMI. One of these inspectors was designated HMI lead for the project. They worked closely with the senior research lead in developing the research model, visit method and inspector training.

Sampling

Our intention was to ensure a balanced sample of partnerships in the study. The main criteria for selection included:

  • partnership characteristics (type and phase)
  • previous inspection judgements (good and outstanding for overall effectiveness)
  • geographical location (Ofsted regions)

Overall, 46 partnerships were involved in research visits to trial the curriculum quality indicators and visit methodology. This included visits to around 80 schools or settings within these partnerships to triangulate evidence with the central provider. The schools and settings selected were not randomly allocated but arranged directly by the HMI leading the visit with the central provider. The final sample included 24 SCITTs, 20 HEIs and 2 Teach First partnerships covering 20 primary QTS, 21 secondary QTS, 3 qualified teacher learning and skills (QTLS) and 2 early years teacher status (EYTS) routes.

Further details on sampling issues can be found in Annex B and a list of the partnerships involved in the research in Annex D.

Limitations

It is worth noting that the study has several limitations. These have implications for the external validity of the findings beyond the scope of the research design.

First, this is a validity study. There is potentially a level of unreliability that we cannot control for in the study, because we did not test for inter-rater reliability between inspectors in the research design. However, we have attempted to limit the extent of inconsistency between inspectors through the systematic design of the research model, inspector training and the quality assurance processes in place.

Second, our aim was to test the research model in a variety of ITE partnerships. Securing enough depth and breadth of evidence from the focused reviews was a priority for establishing a valid process. This seemed more valuable than trying to gain a surface-level picture across a larger number of different aspects of an ITE programme.

Finally, there were difficulties involved in arranging visits for the fieldwork. This had implications not just for the selection of subjects or aspects for the focused reviews, but also for the attention that inspectors gave to different routes into teaching. Inspectors picked up evidence on 3-year degree courses, PGCEs and School Direct salaried and non-salaried programmes. However, identifying differences in quality between these routes was more difficult, because this was not built into the research design in a systematic way. The focused reviews took priority because we assume that this process can be applied equally well and critically across the different routes into teaching. This is something that we will need to test with pilot inspections of the new framework.

Findings

Overall impact scores

Within the 46 partnerships visited, inspectors came across a varied range of ITE programmes. One of the main aims of our research model was to ensure that it could assess common factors of curriculum quality, no matter the curriculum design. It was essential, given the variation in teacher routes and partnership types, that the model could work fairly across the sector.

Figure 2 suggests that the model has achieved this because it does not appear biased towards any one type of ITE institution. Instead, the model has identified strengths and weaknesses in curriculum quality across all partnership types.

Figure 2: Curriculum quality impact score (indicator 9a) by partnership type

Centre type Band 1 Band 2 Band 3 Band 4 Band 5 Total
Higher education institutions 1 1 7 8 3 20
School-centred ITT 1 2 6 11 6 26
Total 2 3 13 19 9 46

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design. The 2 Teach First partnerships are included in the SCITT data.

It is worth noting, however, that over four fifths of trainees in 2018/19 were trained through HEI partnerships. This means that weaknesses in HEI partnerships will have a larger impact on overall levels of teacher quality.

The ITE partnerships that participated in the research had also all been judged good or outstanding for overall effectiveness at their last routine ITE inspection. However, figure 3 highlights that, based on our research model, around a third of these partnerships had weaknesses in curriculum quality (bands 1 to 3) and the offer available to trainees.

Figure 3: Curriculum quality impact score (indicator 9a) by the overall effectiveness judgement of the ITE partnership at their last routine inspection

Overall effectiveness Band 1 Band 2 Band 3 Band 4 Band 5 Total
Outstanding 1 0 5 9 6 21
Good 1 3 8 10 3 25
Total 2 3 13 19 9 46

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design. Overall effectiveness judgements are based on data at time of sampling.

One reason for the difference between the impact score in our research model and the overall effectiveness judgements on inspection is likely to be that the current ITE framework is weighted towards outcomes. For instance, the sub-judgements for outcomes are very similar to the overall effectiveness judgements. The main contributors to the outcomes judgement are data related, namely completion rates and employment rates. Both are often high within partnerships, which is not surprising given the national teacher shortage. This suggests, therefore, that current inspection practice places greater emphasis on the data available, rather than on the quality of the ITE curriculum and the impact this has on trainees’ preparedness to teach.

This can be seen further in figure 4, which shows that although most partnerships in our sample achieved a score of 4 or 5, a small proportion were given scores of 2 or 3, with a score of 1 used sparingly. The indicators, therefore, are generally distributed in a pattern that differs from the current overall effectiveness profile.

Figure 4: Breakdown of curriculum indicator scores for all 46 ITE partnerships, in percentages

Indicator Band 1 Band 2 Band 3 Band 4 Band 5 Total*
Coherent rationale (1a) 0 2 11 50 37 100
Shared understanding (1b) 0 15 22 39 24 100
Coverage of teaching concepts (1c) 0 4 26 48 22 100
Coverage of educational research (2a) 2 9 20 43 26 100
Clear training roles (3a) 0 7 17 41 35 100
Trainer knowledge & expertise (3b) 0 4 24 48 24 100
Regular ITE curriculum review (4a) 2 9 28 33 28 100
Trainer development (4b) 2 13 28 33 24 100
Depth & coverage (5a) 2 17 28 35 17 100
Model of progression (5b) 0 11 13 50 26 100
Theory & practice (5c) 2 9 20 35 35 100
Training on behaviour (5d) 0 2 9 57 33 100
Partnership working (6a) 0 9 22 46 24 100
Focus on relevant standards (6b) 0 7 26 37 30 100
Mentor quality (6c) 0 7 26 52 15 100
Priority across partnership (6d) 0 7 30 37 26 100
Quality assurance (6e) 0 11 33 35 22 100
Supporting SEND and EAL learners (7a) 0 7 24 50 20 100
Parity for all trainees (7b) 0 7 20 48 26 100
Trainee recruitment (7c) 7 4 22 33 35 100
Thoughtful assessment (8a) 0 2 17 50 30 100
Overall impact (9a) 4 7 28 41 20 100

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design.

*Not all rows will add to 100% due to rounding.

The data in figure 4 identifies some interesting patterns in the way that inspectors have scored the indicators. For instance, coverage of managing behaviour (indicator 5d) was particularly strong across the sample. There appear to be relative weaknesses in the depth and coverage of the curriculum (indicator 5a). There also seems to be a mismatch between curriculum planning (indicator 1a) and communication across the partnership (indicator 1b). This may explain why some partnerships were less successful in ensuring that their trainees were well prepared for teaching.

The findings that follow – based on the evidence from the research visits – suggest that the indicators developed for the study may provide an alternative and perhaps more substantive means of assessing ITE curriculum quality.

Partnership working does not always meet the intent of partners

The breakdown of indicator scores shows that, on occasion, inspectors noted a disparity between the aims of the ITE programme and how well it was successfully implemented across the partnership.

For instance, inspectors identified that a clear and coherent rationale for the overall curriculum of the ITE programme (indicator 1a) was strongly in place across 87% of the partnerships visited. However, the aims of the curriculum were only shared across the partnership and fully understood by all teacher trainers (indicator 1b) in 63%. This suggests that in a few cases the effectiveness of curriculum planning was sometimes hindered by relatively weak communication between the central provider and partner schools and settings.

Analysing the data by partnership type shows that these deficiencies were a more common issue in the HEI partnerships visited. This can be seen in figures 5 and 6.

Figure 5: Breakdown of partnership working indicator scores for HEI partnerships

Indicators Band 1 Band 2 Band 3 Band 4 Band 5 Total
Coherent rationale (1a) 0 0 1 11 8 20
Shared understanding (1b) 0 3 7 5 5 20
Partnership working (6a) 0 3 4 11 2 20
Focus on relevant standards (6b) 0 1 5 8 6 20
Mentor quality (6c) 0 3 5 9 3 20
Priority across the partnership (6d) 0 3 8 6 3 20
Quality assurance (6e) 0 2 9 6 3 20

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design.

Figure 6: Breakdown of partnership working indicator scores for SCITT partnerships

Indicators Band 1 Band 2 Band 3 Band 4 Band 5 Total
Coherent rationale (1a) 0 1 4 12 9 26
Shared understanding (1b) 0 4 3 13 6 26
Partnership working (6a) 0 1 6 10 9 26
Focus on relevant standards (6b) 0 2 7 9 8 26
Mentor quality (6c) 0 0 7 15 4 26
Priority across the partnership (6d) 0 0 6 11 9 26
Quality assurance (6e) 0 3 6 10 7 26

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design. Includes indicator data from the 2 Teach First research visits.

There are also some small differences between the partnership types on the following indicators related to partnership working:

  • the overall curriculum programme is purposefully integrated across differing types of training experience (indicator 6a)
  • teacher education is focused on meeting the relevant standards, rather than the day-to-day issues of lesson preparation (indicator 6b)
  • mentors are highly effective in supporting trainees and delivering the curriculum (indicator 6c)
  • capacity is provided to ensure that the programme is prioritised across the partnership (indicator 6d)
  • strong quality assurance procedures ensure trainees receive a high-quality curriculum offer (indicator 6e)

Communication across the partnership is central to success

Strong communication across the partnership was vital to the success of an ITE curriculum. The majority of course leaders spoken to intended to ensure that trainees spent as much time as possible in their placements. They saw this as essential for building competent knowledge and skills, as well as for developing qualities such as confidence, reflection and resilience.

However, meeting this objective was largely dependent on the communication systems in place across the partnership. Where this was scored higher in our model for both HEIs and SCITTs, the quality of communication enabled the coverage between the modules from central provision and the practice applied through placements to blend together into a coherent experience for trainees.

In these cases, course leaders made sure that all stakeholders across the partnership, including mentors and trainees, were engaged with and clearly understood the programme design. Transitions between centre-based provision and experience on placements tended to be smooth, because measures had been put in place to reduce potential loss of learning. This included partnerships sending out regular updates to teacher educators on the focus of what trainees were being taught in central provision for the forthcoming weeks. This was typically more than just a brief list of themes to cover over each week, though. The strongest providers articulated in clear and simple terms the type of discussions they would expect mentors to be having with trainees and other activities (such as observing an expert in the placement school or setting) to help consolidate their learning on these themes. Professional tutors and mentors were, therefore, able to align coverage with that from the centre and reinforce trainees’ learning on these concepts more successfully within their placements.

Other strategies employed by higher-scoring partnerships included flexible communication methods. They commonly used emails and open-door policies to keep in touch with partners. Some centre-based provision tended to have a designated ITE programme contact, who was routinely available to speak with partners and trainees. In these partnerships, leaders of partner schools and settings were also involved in the ITE curriculum design. The course was not something ‘done’ to them but was co-constructed with course leaders. When headteachers and professional tutors had contributed to designing the programme, we found alignment and synergy across all aspects of the course. In the case of 2 secondary partnerships, the shared philosophy for teacher education meant that colleagues in partner schools said that they felt like genuine partners.

Typically, in the HEI partnerships with lower scores in our research model, leaders were less successful in communicating their aims for the ITE curriculum and expectations for trainees across the partnership. In some cases, professional tutors and mentors were unaware of the focus of lectures and seminars in the central provision. This was not communicated effectively because arrangements for dialogue between course leaders and partner providers were not considered an important aspect of curriculum implementation. When there was an awareness, often through programme handbooks, the detail on how the programme at the centre related to practice in trainees’ placements was insufficiently planned.

Similarly, there was often a lack of clarity on the role that the mentor was expected to play in supporting the trainee through the elements of the ITE programme. In these HEI partnerships, the course structure at the centre tended to be the priority for course leaders, particularly around providing trainees with relevant subject knowledge and subject-specific pedagogy. However, this focus often meant that structured planning for curriculum delivery across the wider partnership took a back seat. There was an expectation from course leaders that partner providers would deliver on other aspects of teacher education but there was often no coherent plan about how this should be achieved.

Trainees from these partnerships told inspectors that this gave them a less coherent curriculum experience. It hindered their progress because the links between theory and practice were not well sequenced. Trainees also said that they did not always have opportunities to practise what they had learned from their course modules. This was because school-based trainers did not have the required knowledge of the modules to support them in a meaningful way. When communication was lacking and the aims not fully understood, it created a ripple effect of weakness down to mentors and their guidance to trainees.

In other cases, including in the weaker SCITT partnerships, we found variability in effective partnership working between the schools and settings within the same partnership. Not all partner providers were adhering to the partnership’s agreed curriculum design. This meant that, although the content of the programme was generally well planned and communicated across the partnership, some partners were actively working against the priority of fully preparing trainees for teaching. In some cases, therefore, individual trainees’ experiences often came down to the luck of the placement.

Some trainees mentioned that they were keen to take what they had learned from their centre-based provision into the classroom. However, they also said that they were often unable to apply their learning on curriculum design beyond their centre-based learning. This was because their practice was often constrained by the understandable priority that leaders in a placement provider had put on the teaching of their own schools’ curriculum. Some trainees, therefore, had limited opportunities to develop or enhance their knowledge and understanding on important curriculum design concepts after reflecting on their teaching practice.

Strong quality assurance processes ensure that curriculum planning is effectively implemented across a partnership

Quality assurance processes were usually linked with effective communication across the partnerships, especially in making sure that the objectives of the ITE programme were being delivered. It is worth noting, however, that only three fifths of the partnerships visited received a strong score for quality assurance mechanisms (indicator 6e). This is one of the lowest proportions across all the indicators scored from the research visits. It suggests that there was a high level of variability in the quality assurance actions being carried out across partnerships. Even in partnerships with a strong central curriculum, successful implementation was sometimes undermined by the lack of feedback on how the ITE programme was functioning across sites.

The strongest providers had systems in place that made high-quality communication 2-way. For example, electronic tracking systems were a feature across most of the partnerships visited. Mentors were often encouraged to provide feedback on trainees’ development and performance through these systems. However, whether this worked well depended on 2 factors. First, success was determined by how effectively course leaders had communicated system procedures to in-school trainers. Mentors in weaker partnerships revealed that this training was often missing. This generated inconsistencies in how they used the tracking mechanisms.

Second, success depended on how well course leaders acted on the information from partner schools and settings to enhance programme delivery. For instance, by using these systems, professional tutors and mentors were aware of trainees’ knowledge gaps and areas where they needed further support. This direct information allowed proactive course leaders to adapt the ITE programme or run additional sessions. Rigorous quality assurance of trainee placements was, therefore, feeding back into the ITE curriculum design and improving the programme for the next cohort of trainees.

This information also alerted course leaders to placements that were not working out for trainees. Leaders could then take action, for instance, by moving trainees to alternative placements that better met their learning needs. Some trainees mentioned how grateful they were that their partnership had picked up on their issues. They felt it had benefited their overall experience in the long term.

More generally, in-school trainers frequently mentioned that they valued the visits from central trainers as a tool for benchmarking their assessments about how well trainees were doing. The extent to which central staff checked the quality of placements, however, was variable. When this was done often and well, it enabled joint observations to moderate standards. In weaker partnerships though, leaders gathered a great deal of perception evidence from trainees about how well their course was going, but often failed to triangulate this information with other sources. For instance, they rarely looked carefully enough at mentor notes to identify ineffective practices. At its worst, trainees’ targets were shallow and operational rather than linked to the substantial aspects of learning needed to improve their teaching competency.

One interesting point noted by inspectors was that course leaders checked the quality of the ITE programme delivered in the central provision far less often than they checked placements.

Mentoring is critical for developing trainees’ knowledge, understanding and practical application

The evidence also suggests that in-school mentors are a critical factor in supporting the practical implementation of the ITE curriculum. However, the indicator scores for the quality of mentoring (indicator 6c) suggest there are some barriers to making sure that mentoring is always effective.

In the strongest partnerships, we saw a strength in how course leaders perceived the role of mentors. Across all the partnerships visited, leaders considered mentor training to be imperative for supporting trainees in their placements. However, the content of the mentor training and how well informed mentors were about the curriculum plan for trainees tended to be the main difference between stronger and weaker provision. For example, leaders in the higher-scoring partnerships tended to understand the vital importance of training mentors to improve their own teaching skills. One HEI lead stated that they regarded mentors as ‘agents of change for the wider profession’.

In this scenario, mentors were not just encouraged but required to attend regular continuing professional development sessions at the central provider and to engage critically with recent educational research. This was combined with strategies for keeping mentors – not just school leaders – informed about the aims of the ITE curriculum. Information was regularly shared with mentors so that they knew what trainees had already learned that week from their core modules. The benefit of this was that they could support trainees on placements more effectively with informed discussions about teaching and learning practice, rather than holding more generic discussions.

Our evidence shows that partnerships engaged in training their mentors effectively tended to have more knowledgeable and professional staff involved in mentoring. Therefore, mentors in these partnerships were not just a conduit for making sure trainees made systematic progress through the ITE curriculum. They were vehicles for developing high levels of motivation and making positive changes within the wider workforce.

In contrast, the weaker partnerships often lacked clarity about the purpose of mentors’ training. For example, the training was sometimes about the paperwork needed by the central provider for trainee assessment, rather than training mentors on assessment to help trainees improve. This was further compounded by a lack of opportunities to develop mentors’ skills. Occasionally, mentors were supplemented with one-off professional development events about practice. However, in these instances, training sessions were often optional and poorly attended. Consequently, these mentors received little additional information to develop their own knowledge and professionalism or to support them in improving that of the trainees they mentor.

One common reason course leaders gave for limited mentor training was that it was sometimes dependent on the ‘buy-in from [partner] leaders’. Several partnerships said that they found it difficult to get enough mentors for the number of trainees on the course. This led to them needing to recruit additional partners that they might not otherwise have approached. In these cases, they felt reliant on these partners doing them a favour by serving as a placement for trainees. This affected the amount of contact mentors had with trainees, because course leaders did not want to sour the relationship with partners by placing too many demands on mentors. Often, this led to school priorities usurping the aim of supporting trainees.

Some mentors expressed concerns about not being given sufficient time by their school or setting to carry out the role effectively. In higher-scoring partnerships, mentoring was timetabled to ensure that it was protected, so trainees could receive the support required. Professional tutors also mentioned issues with releasing mentors to attend training: ‘it is difficult to find the time to attend the different training sessions put on’. This was because they were expected to fit this around their other commitments as ‘professional development’. This problem was further exacerbated by the fact that some professional tutors take on trainees from several different partnerships.

As a result, these partnerships were simply recycling weaknesses in teaching. They rarely sought to, or were unable to, influence the quality of teaching within their team of mentors. Inspectors identified examples of class teachers recruited into mentoring who did not always show the capacity to carry out the role effectively. For instance, a few mentors admitted that they were learning from the trainees, who were bringing new perspectives on teaching from their centre-based ITE programme. It is a worry that, in a few partnerships, we identified experienced teachers acting as mentors without the necessary knowledge and skills to train and support their trainees effectively.

Curriculum planning needs to work in tandem across the partnership

The evidence from the research visits suggests that there is an association between the quality of ITE curriculum planning and how this is implemented through partnership working.

In general, partnerships that scored highly on our research model for curriculum planning also tended to score highly for the capacity of partnership working to deliver the planned curriculum. The inverse was also true: weak planning often led to similarly weak curriculum implementation across the partnership. In a few cases, there were instances of partnership working undermining the strengths of an agreed curriculum plan. For instance, as described earlier, some aspects of an ITE curriculum were set aside to suit the convenience of participating schools or settings. This suggests that one of the main differentiators between the weakest and strongest partnerships was the time and forethought they had invested in developing the sequence and depth of their ITE programme, both at the centre and across partner settings.

Figures 7 and 8 show that HEI providers in our sample were more likely to be producing a curriculum of sufficient depth and coverage across core teaching elements (indicator 5a) and planning effectively for trainees’ progression through the training programme (indicator 5b). In around half of the SCITTs visited, inspectors noted weaknesses in the breadth of their curriculums.

Figure 7: Breakdown of curriculum planning indicator scores for HEI partnerships

Indicators Band 1 Band 2 Band 3 Band 4 Band 5 Total
Depth & coverage (5a) 1 3 5 9 2 20
Model of progression (5b) 0 1 3 9 7 20

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design.

Figure 8: Breakdown of curriculum planning indicator scores for SCITT partnerships

Indicators Band 1 Band 2 Band 3 Band 4 Band 5 Total
Depth & coverage (5a) 0 5 8 7 6 26
Model of progression (5b) 0 4 3 14 5 26

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design. Includes indicator data from the 2 Teach First research visits.

Effective sequencing of trainees’ learning needs alignment between central provision and placements

The partnerships that scored highly on planning for trainees’ progression had typically developed their ITE programmes with an understanding and consideration of how trainees learn built into the design.

This was clearly evident when course content in seminars and lectures at the central provider was linked with timely practice in placements that revisited these themes. The trainees spoken to were often much clearer about what they had learned over the course of the programme when this sequencing had happened. Not only did they have the theory, but they also gained experience through application while the theoretical knowledge was still fresh from their central modules. This suggested that the teacher education received in these circumstances was well embedded.

Leaders from these partnerships were also keenly aware that trainees were novice teachers. On this basis, content was grounded in gaining sufficiently cumulative knowledge and understanding to make sure that trainees were prepared for the classroom come the end of the course. This worked well when curriculum designers had attempted to order the knowledge and skills necessary for the initial building blocks of the programme and the next steps. The strongest partnerships tended to supplement this by assessing trainees’ previous experience and knowledge at the start of the course. It was not uncommon, therefore, to see most starting points across these partnerships focused on behaviour management, because this is one thing that trainees said they were most worried about on entering the profession.

Useful sequencing was also applied to the practical aspects of learning in the placement providers. For example, a few SCITT leaders said that the priority was not to throw trainees in at the deep end by getting them to teach in front of a full class from the start of their course. Instead, the curriculum was designed so that trainees would first experience teaching with small groups of pupils or learners (for instance, at a table within their mentors’ classroom). They would then gradually build towards teaching the whole class in the final term of the programme.

In the highest-scoring partnerships on this indicator, course leaders were flexible in their decision-making around the ITE curriculum. They understood that not everything related to teacher education could be fitted into the programme in the time given, particularly on a single-year ITE programme. Rather than force all possible elements into the programme in bite-size chunks, they were more selective about the core elements of teaching that novice teachers needed to know, understand and practise.

For example, in several primary partnerships, course leaders were aware that key stage 2 trainees’ lack of knowledge of phonics sometimes hindered their progress in teaching early reading. Therefore, leaders introduced phonics into the programme first to improve trainees’ understanding of how young children should learn to read. By recognising the limitations, the leaders constructed a rich curriculum with depth to maximise trainees’ learning. Interestingly, the evidence highlights that, in several of the strongest HEI providers, the ITE curriculum was set up to progress into and through the NQT year. This offered further enrichment and professional development courses for trainees as they became NQTs and built on their previous teacher education. Quite often, these courses were offered to all staff in placement providers, not just to NQTs.

In contrast, weaker partnerships tended to be less astute in their curriculum design. Commonly, the ITE programme content was based on meeting the full criteria of the assessment model, the 'Teachers' standards', rather than incorporating these into an effective curriculum design that met trainees' learning needs. The focus was much more on doing as much as possible in the time given, rather than on the best approach for ensuring that trainees were ready for the classroom and on what that journey should cover.

In a few cases, inspectors concluded that this was a tick-box approach to curriculum design that only provided a surface-level overview of important elements of teaching. For example, the ITE curriculum in these partnerships tended to cover large chunks of material in centre-based sessions. The assumption here was that trainees would practise all that they had learned in placements before returning to the centre for another large download of material. However, a common complaint from trainees was that 'some [modules] have tried to pack too much information into one day'. Trainees described the cognitive overload created by this approach. They were unable to process this information effectively, hindering their development. Instead, this often led to misconceptions in practice, either because trainees did not fully understand the implications of the learning or because information was simply crowded out of memory.

That is not to say that a curriculum design that follows this approach is unworkable. For instance, several effective partnerships included an extensive 'boot camp' at the beginning of their ITE programme, which touched on a range of teaching concepts. The main difference here, however, was that the remainder of the programme was well sequenced. Content touched on during these sessions was re-visited in greater depth later in the course through central training. It was also supported by timely instruction through trainees' placements to reinforce their learning. Cognitive overload was much more of an issue when concepts had not been sufficiently threaded through the curriculum or strongly aligned with classroom practice. This had consequences for trainees' preparedness for teaching.

Furthermore, the assumption that the knowledge learned at the centre would be followed up in placements did not always hold true. Inspectors saw examples of curriculum sequencing being applied particularly well to the ordering of centre-based modules, but this was not then linked with delivery in trainee placements.[footnote 1] Trainees in weaker partnerships described how learning in these circumstances tended to wither over time because they could not practise the theory in placements. A few trainees explained clearly how they were unable to recall their learning from the centre in its original depth simply because it was weeks after the event and they had not had the opportunity to apply this learning within the classroom.

The inverse was also true. In the partnerships that scored highly on our research model, there was clarity that School Direct (salaried) trainees were entitled to time away from their employing school for central teacher education. In weaker partnerships, however, this time was not always made available. This led to salaried trainees missing out on central training because the partnership agreement was insufficiently robust.

Curriculum imbalance can lead to a narrow curriculum offer

The balance of the curriculum, in terms of coverage, was at the centre of trainee and NQT criticisms of ITE provision. This was something that a minority of trainees were also keen to point out in our phase 1 questionnaire. ITE programmes that were overly focused on specific aspects of teacher education at the expense of other equally important aspects were a concern. A narrow in-depth focus on only a few elements meant that some trainees felt they were completing their courses with gaps in their teaching knowledge and practice.

Interestingly, issues with curriculum balance affected HEI and SCITT partnerships in slightly different ways. This can be seen in figures 9 and 10.

Figure 9: Breakdown of scores for indicators related to specific areas of ITE curriculum in 20 HEI partnerships, by number of partnerships

Indicators Band 1 Band 2 Band 3 Band 4 Band 5 Total
Coverage of educational research (2a) 0 2 3 10 5 20
Trainer knowledge and expertise (3b) 0 0 4 12 4 20
Theory & practice (5c) 0 1 5 9 5 20
Training on behaviour (5d) 0 1 1 13 5 20
Supporting SEND and EAL learners (7a) 0 1 7 8 4 20

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design.

Figure 10: Breakdown of scores for indicators related to specific areas of ITE curriculum in 26 SCITT and Teach First partnerships, by number of partnerships

Indicators Band 1 Band 2 Band 3 Band 4 Band 5 Total
Coverage of educational research (2a) 1 2 6 10 7 26
Trainer knowledge and expertise (3b) 0 2 7 10 7 26
Theory & practice (5c) 1 3 4 7 11 26
Training on behaviour (5d) 0 0 3 13 10 26
Supporting SEND and EAL learners (7a) 0 2 4 15 5 26

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design. Includes indicator data from the 2 Teach First research visits.

The HEI partnerships visited tended to make coverage on subject knowledge and subject-specific pedagogy the priority within seminars and lectures at the centre. This was often because teacher trainers in centre-based provision had a wealth of experience and expertise to offer trainees on their specialist subject areas, which was not always the case in partner providers. The level of discussion that inspectors had with subject leaders on subject pedagogy in HEI partnerships tended to be more informed and detailed than was the case in most of the SCITTs visited.

One thing the visit methodology highlighted, however, was that experts in other aspects of teaching – such as teaching pupils with SEND or behaviour management – were not always available. In some HEI partnerships, these aspects were instead expected to be threaded through the subject-based teacher education on offer. For example, in one strong HEI partnership, modules on the history curriculum connected subject knowledge with generic pedagogy and subject-specific pedagogy alongside specific coverage of behaviour management. The course leader recognised that teachers' subject knowledge was more than just knowing their subject. For them, it was not enough that trainees simply read and understood second-hand texts about history. The expectation was, therefore, that the course focused on digging into history scholarship and history teaching methods as well.

Inspectors noted that, although this was the course leaders' intention, it was not always successful in practice. In weaker partnerships following this design, discrepancies between the plan and the practice were revealed through inspectors' discussions with subject trainers and further highlighted by trainees. For instance, in a few HEI partnerships, leaders self-identified that coverage of teaching pupils with SEND or pupils who speak English as an additional language (EAL) was an area for improvement. Often, these aspects of teacher education were only taught at a surface level, if at all. This was because the weight of the central course was focused on subject knowledge and subject-specific pedagogy.

This issue was sometimes combined with an expectation that other aspects of teaching would be picked up by partners on trainee placements. However, in the weaker HEI partnerships, the lack of careful planning, ineffective communication and limited quality assurance processes meant that coverage was variable and dependent on the quality of individual partner providers. This meant that trainees on a programme from the same institution were sometimes receiving very different experiences.

Weaker SCITTs, by contrast, tended to suffer from the opposite problem. There were strengths in how they delivered generic teaching methods and teaching pupils with SEND, for instance. However, they lacked high-quality learning focused on trainees' subject knowledge and subject-specific pedagogy.

Unlike HEIs, in which course trainers regularly showed the required subject expertise, SCITT partnerships tended to rely on staff across the partnership to deliver modules on curriculum and subject-related matters. In the weakest partnerships, this led to well-meaning but non-expert trainers leading on these modules. Inspectors noted that their lack of curriculum and subject knowledge led to a number of misconceptions arising in trainees’ understanding. For example, in 2 secondary SCITT partnerships, it became clear that trainees were being taught to plan their curriculum working backwards from what was required for Year 11 in order to maximise Progress 8 scores. Some trainees suggested that their central programme lacked focus on the GCSE syllabus and marking. This suggests that, in some cases, the placement setting’s priorities were negatively influencing what trainees perceived their learning needs to be.

Behaviour management

As figures 9 and 10 show, coverage on behaviour management was particularly strong across the sample. It was prioritised across the partnerships (although not always clearly planned for) because controlling the classroom was typically what trainees were most concerned about on becoming a teacher.

In general, coverage on behaviour was initiated at the start of the programme and reinforced regularly, particularly by mentors, because it was practically applied in placements. Content covered a range of behavioural theories but also included practical methods for improving trainees’ practice. For instance, in one HEI partnership, several course trainers explained how they start their subject modules by overtly demonstrating a positive behaviour technique. Over the duration of the programme, this enabled them to model a wide range of techniques. Across the sample, trainees regularly highlighted how they had taken concepts on the use of voice and reward and sanction from central provision and applied these ideas on their placements.

There was also a consistent focus across partnerships on the need for trainees to engage with and understand the different behaviour policies across their school placements. This was indicative of an awareness that behaviour methods are sometimes different depending on school context. Course leaders, therefore, often made sure that fundamentals on behaviour management covered a broad range of approaches to challenging behaviour. Trainees regularly corroborated that the behaviour content they had received had prepared them sufficiently well for the classroom.

The behaviour management aspect of the ITE programme in some of the partnerships visited was also frequently complemented by coverage of teachers' professional behaviours. For instance, the strongest partnerships included elements of how teachers regulate their own behaviour to encourage positive learning environments. In these circumstances, course leaders had deliberately planned to distinguish between behaviour for learning and behaviour management. Evidence from speaking to the trainees highlighted that this tended to result in highly self-reflective trainees who often thought about their own professional standards, both in the classroom and with colleagues, in order to improve their teaching.

The data collected, however, did not provide sufficient evidence on the content of behaviour management interventions in ITE providers. We do not therefore know whether providers focus on proactively teaching pupils desired behaviours as opposed to providing guidance on simply managing poor behaviour as and when this happens.

Supporting and teaching pupils with SEND

Figures 9 and 10 show that the majority of partnerships had strengths in their ITE programme on teaching pupils with SEND.

Partnerships scoring highly tended to invest time in the curriculum plan for trainees to apply their SEND learning at a deeper level. In these cases, trainees were expected to produce a case study or journals to show how they applied the tools they had learned in central provision within their placement settings. Trainees explained that this not only helped them to understand the complexities of individual pupils with SEND, but also allowed them to move beyond broader issues of how SEND affects behaviour to consider how to manage cognitive challenges in learning. Learning that was well threaded and sequenced between theory and practice produced an attitude that one course leader described as 'working towards mastery'. Partner schools and settings often considered trainees with this experience as 'exceptionally well prepared to teach pupils with SEND'.

A few of the stronger partnerships also included a week for trainees to work in a special school (along with their 2 contrasting placements). Trainees saw this as a positive, albeit challenging, part of the ITE curriculum. It allowed them to understand a wider range of SEND learning contexts that existed beyond their placements and gave them the experience of teaching to these needs. Trainees also often explained that this immersion gave them a deeper understanding of SEND, including working with other agencies and understanding the code of practice. This was less useful, however, when visits to special schools were offered as a one-off experience to make up for the lack of planned and coordinated coverage on teaching pupils with SEND. In the partnerships where this was the case, trainees’ SEND knowledge was particularly weak.

Across the HEI partnerships visited, trainees confirmed that they often received a short module on teaching pupils with SEND at the beginning of their programme. Whether this was threaded through their subject modules, though, largely depended on the expertise and knowledge that the centre-based subject trainers had on teaching pupils with SEND. Discussions with subject trainers in the weaker providers identified that some had poor knowledge and were failing to provide the coverage expected of the intended curriculum plan. Instead, subject aspects tended to dominate discourse.

In a few partnerships, however, there was no plan for providing coverage of teaching pupils with SEND in the centre-based provision beyond the initial module. Instead, the expectation was that this would be picked up on placements. This meant that the SEND learning received relied on the quality of in-school or setting mentors. Trainees who had a sound understanding of this aspect often reflected that their mentor's support was central to their learning. This was often because the mentor's own knowledge and practice made up the shortfall, or because the mentor provided access to other experts in the placement, such as the SEND coordinator, to develop their understanding. This tended to compensate for limitations in the centre-based coverage. Too often, however, practice differed between placements. A few trainees stated that the coverage of approaches for pupils with SEND was often generic rather than personalised. In these cases, thinking was rarely directed towards the particular needs of the pupils.

In the weaker SCITTs, the main issue was course content being predominantly front-loaded, with no capacity built into the curriculum plan to re-visit concepts later in the course. The result was disjointed coverage in the SEND training. For instance, a few partnerships only provided central modules at the beginning of the course. This led to issues with sequencing learning because trainees often did not receive meaningful follow-up during their placements to apply the knowledge they had learned at the centre. Most trainees reported that they valued the days in central provision at the beginning of the course and keenly felt their absence later on, because they were often unable to join up their learning.

Educational theory

Figure 9 shows that trainers in HEI partnerships tended to have a better grasp of educational theory than trainers in SCITT partnerships. They were able to regularly integrate this into their curriculum planning across the partnership. In the most effective partnerships, the insights from research were not only shared with trainees, but also with the school-based trainers so that the implications for practice could be discussed.

Weaknesses in HEI partnerships stemmed from the curriculum not being thoroughly planned and coordinated with partners. For instance, trainees stated that theory that was current and related to classroom practice was more effective than a string of stand-alone lectures that they could not relate to their teaching. In these instances, trainees spoke clearly about their step-by-step development into the profession. In a few cases, however, course leaders focused on the detail of theory in modules and seminars at the centre-based provision, without developing the means for transferring this into useful practice in placements. This left some trainees thinking that their centre-based learning was irrelevant and unhelpful to their teacher education. A few trainees and NQTs made similar points:

Simply reading literature regarding the intricacies of theories of learning or motivation do not directly help you as a trainee, because at this stage of your professional development you are inevitably more concerned with developing the basics of your practice.

In SCITTs, inspectors identified more issues with the use of outdated or irrelevant theories in centre-based provision. For instance, in one partnership, trainees were introduced to the history of educational research and given a reading list, yet the latest reference was from 1998 and the list included material on applying now-debunked neuromyths. In some partnerships, there was a lack of knowledge of more modern concepts of how pupils learn and of curriculum theory. This led to trainees taking away misconceptions about how to teach. For example, in 3 partnerships (including one HEI partnership), discussions with trainers and trainees revealed that they had confused the concept of curriculum sequencing with lesson sequencing. The lesson sequence (engagement activity, plenary, and so on) was the core focus rather than the curricular sequence of a subject. In these scenarios, trainees had limited exposure to schemes of work and little opportunity to begin thinking about how to develop learning over a series of lessons. This misconception was then often reinforced by similarly weak curriculum practice in their placement schools and settings.

In addition, across 10 partnerships, inspectors identified that the theoretical knowledge being taught in centre-based modules was often narrow in scope. A preferred style – constructivist approaches – dominated discourse. This tended to leave trainees with fewer options for developing their teaching practice and impacting on pupils’ learning. The lack of diversity in style does not reflect the range of delivery methods used across the sector, meaning that trainees in these partnerships were not fully prepared for teaching in a range of schools or providers.

Subjects in primary partnerships

Figure 11 indicates that we found more weaknesses in ITE curriculum quality in programmes for primary school trainees. Unsurprisingly, this was related to the limited amount of coverage provided across the foundation subjects. This links with the findings identified from our phase 3 curriculum research.

Figure 11: Curriculum quality impact score (indicator 9a) by age phase

Phase Band 1 Band 2 Band 3 Band 4 Band 5 Total
Primary QTS 2 2 6 7 2 19
Secondary QTS - 1 5 8 7 21
Total 2 3 11 15 9 40

Band 1: this aspect is absent in the ITE programme. Band 5: this aspect of the ITE programme underpins and is central to the success of the partnership’s work and may include examples of exceptional ITE curriculum design. Due to the visit arrangements with 1 partnership, both primary and secondary programmes were included in the focused review design. On this basis, the data from this visit is not included. Data from partnerships visited delivering QTLS and EYTS programmes is not included due to small sample size.

Inspectors found that, for primary partnerships, the quality of planning and teachers' subject knowledge were much better developed for English and mathematics than for the rest of the curriculum. Science showed similar weaknesses to the other foundation subjects. Quite often, there was a general absence of, or very little time provided for, non-core subjects in the ITE curriculum.

This decision was often dictated by the needs of primary school leaders in the partnership. For example, several trainees said that their preparation to teach a foundation subject was hindered 'because schools are not bothered by foundation subjects'. Additionally, course trainers in these partnerships regularly noted that 'we are prisoners of the school curriculum'. Opportunities to practise the teaching of foundation subjects were, therefore, sometimes limited by a partner school's approach to its own curriculum design. Clearly, the issues with curriculum narrowing in the sector were conditioning the design of some of the weaker ITE programmes. As a result, trainees in these scenarios were not provided with the latest thinking about the teaching of their subject(s). This lack of preparedness is then carried into the profession once they qualify.

Other concerns compounded this issue. Course leaders did not always have sufficient backing within the partnership. This made it difficult to fully enforce partnership agreements and ensure that trainees received their full entitlement. Additionally, some trainers highlighted that central provision was often constrained by the time available to cover everything on the course, which limited the time for subject training. In some cases, for example, this led to too much emphasis on activities that show trainees how to make science fun and interesting, with no discussion of the merits or weaknesses of this approach, or of additional science pedagogies that trainees might consider applying.

The evidence from the research visits suggested that, for primary teachers, 3-year QTS routes were more effective than the 1-year routes in preparing trainees to become effective teachers across the primary school curriculum.

For example, in one high-scoring HEI partnership, it was clear that there was teaching expertise across a range of subjects. The primary history lead, for instance, was passionate about the subject and had written several books on how to teach it. Coverage on the 3-year BA course was thorough. There were opportunities to look at education in a wider context and trainees also worked with the Historical Association to study aspects of the hidden curriculum. However, even here leaders were clear that it was a struggle to cover all that was needed in a single-year course. Central provision to teach history, for example, comprised just a 2-hour workshop.

To mitigate these limitations, some partnerships tended to place the full responsibility for acquiring subject knowledge on trainees, expecting them to recognise and fill gaps in their learning in their own time. The success of this approach depended on trainees' motivation and how proactive they were, along with the quality of support they received from their mentors. However, inspectors found that this created additional workload issues for trainees. On occasion, because trainees were non-experts, it also introduced misconceptions around the teaching of subject content.

Despite this, the more impressive partnerships had developed efficient systems to mitigate a narrow primary ITE curriculum offer. For instance, in a few, trainees had a personal tutor at the central provision. One of the tutor's main roles was to check for coverage of the foundation subjects in placements and, where this was missing, to ensure that learning on these subjects could be picked up elsewhere. This sometimes led to trainees being moved around within their placement school to acquire relevant subject-specific pedagogy and knowledge. In other instances, institutions ran professional development days to provide additional learning on foundation subjects. This typically involved trainees going to a particularly strong school in the partnership with the relevant subject expertise.

In the main, partnerships that knew and understood their partners well tried to provide their trainees with the knowledge and practice they needed. Leaders in these partnerships were clear that if they had serious concerns about a partner school's approach to planning that led to trainees receiving a narrow experience, they would 'seriously consider if that is a school we should be working in partnership with'.

Strong course leaders underpin the relationship between effective curriculum planning and communication across the partnership

Course leaders’ effectiveness underpinned strong partnership working and was central to a quality ITE curriculum offer. When course leadership focused on curriculum planning and delivery across the partnership, this typically had a positive impact on the quality of curriculum that trainees received. The strongest course leaders were often attentive to the smallest details within the majority of their curriculum and flexible in making positive change to the design. They were also highly influential across the partnership and secured strong buy-in from the headteachers and leaders of partner providers. This meant they were able to hold others to account for their activities through a viable partnership agreement.

The strongest leaders were also sensitive to trainees' needs, not just in addressing gaps in their knowledge and understanding, but also in ensuring that effective pastoral care was available to support them through the course. It is worth noting, however, that strengths in pastoral care are not directly associated with curriculum quality. Trainees from a few weaker providers, although complimentary about the support received from course leaders, were highly critical when curriculum planning gave them an incoherent learning experience. In a few cases, particularly in the weaker SCITTs, the pastoral care of trainees appeared to be given higher priority than designing the curriculum for them.

In partnerships providing a strong ITE curriculum for trainees, we found some common characteristics in course leaders.

These course leaders:

  • had sufficient seniority within the partnership to ensure that there was a shared understanding across the partnership of its expectations and requirements for high-quality teacher education
  • planned the ITE programme to ensure that trainees' learning was appropriately sequenced and frequently reviewed, prioritising this over the demands of partnership schools and settings
  • made sure that the partnership had sufficient expertise for trainees to develop a deep understanding of their specialist subject and its subject-specific pedagogy (in the case of primary trainees, to develop an understanding of how to sequence learning across foundation subjects)
  • ensured that school- and setting-based trainers and mentors attended central training events
  • ensured that school- and setting-based trainers and mentors were allocated sufficient time to provide effective training and mentoring, and that they accessed specialist help and advice, including through subject associations, so that their own professional practice was exemplary
  • ensured that partnership agreements were robust and enforced, so that trainees, including trainees employed by schools or settings, received sufficient learning to make rapid progress as trainee teachers
  • checked and assured the quality of both centre-based and school- or setting-based teacher education
  • ensured that trainees were equipped with the knowledge and skills, and given the opportunities, to teach a range of pupils, including those with SEND
  • ensured that frequent assessment of trainees focused sharply on their strengths and areas for development, so that the course was personalised to include the knowledge and skills they needed, with clear short-term targets
  • ensured that trainers understood and used current educational research

What are the most important indicators in the model?

The evidence above shows that our research model was effective at identifying variation in the quality of curriculum planning and partnership working across the sample of research visits.

This is encouraging because it suggests that inspectors were applying the rubric and methodology in the way we intended. They managed to differentiate between weaker and stronger partnerships and explain the direct impact on trainees’ early teaching experience in a way that current outcome measures are unable to.

Our interpretation of the qualitative evidence from the fieldwork points towards several aspects that we find important for assessing curriculum quality. These include:

  • communication across the partnership
  • mentor training
  • effective curriculum planning
  • balance between subjects and other essential elements of content that novice teachers need as building blocks for their future teaching careers

However, the quantitative scores that inspectors provided for each indicator also allow us to carry out some statistical analysis of the data, to check whether these aspects are indeed the most important indicators in our research model.

First, we looked at the correlations between each of the 22 curriculum indicators in the model. A correlation matrix showed that all the indicators are highly correlated with one another, suggesting that they are measuring very closely related aspects of ITE curriculum quality. We would therefore expect most combinations of the indicators to yield similar results. At the very least, this analysis confirms that refining the indicators down to a more manageable number would benefit inspectors, because it would remove some duplication and cognitive load from the process.
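To illustrate the kind of check involved, the sketch below shows how such a correlation matrix might be computed in Python. This is illustrative only: the file name and indicator column labels are hypothetical, and we assume one row per partnership with the 1 to 5 band score recorded for each indicator.

    import pandas as pd

    # Hypothetical input: one row per partnership, one column per indicator
    # (for example '1a', '1b', ..., '9a'), each cell a 1-5 band score.
    scores = pd.read_csv("indicator_scores.csv")

    # Pairwise correlations between indicators. Spearman's rank correlation
    # is a reasonable choice because the band scores are ordinal.
    corr_matrix = scores.corr(method="spearman")

    # List indicator pairs with very high correlations: these are candidates
    # for merging or removal when refining the indicator set.
    pairs = corr_matrix.stack()
    pairs = pairs[pairs.index.get_level_values(0) < pairs.index.get_level_values(1)]
    print(pairs[pairs.abs() > 0.8].sort_values(ascending=False))

A pattern in which almost every pair clears a threshold like this is what drives the conclusion above: the indicators overlap heavily, so a smaller set should carry much of the same information.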

We also carried out a factor analysis to see if the indicator scores revealed a pattern comparable to the qualitative findings discussed in the previous section. Our hypothesis was that 2 underlying factors would be identified, reflecting the 2 main dimensions already covered: curriculum planning and partnership working.

However, the results of this analysis were inconclusive. Although we identified 2 main factors, neither aligned clearly with the indicators most closely linked with curriculum planning or partnership working. The accompanying statistics also indicated that the statistical model had a poor fit, which is related to the small sample size (in the context of a factor analysis) and the high correlations between indicators.
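For transparency, the sketch below shows one way the 2-factor hypothesis could be tested, using scikit-learn's factor analysis. It reuses the hypothetical scores table from the previous sketch and is an illustration of the general approach rather than the exact procedure we followed.

    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    scores = pd.read_csv("indicator_scores.csv")  # hypothetical, as above

    # Fit a 2-factor model, reflecting the hypothesis that the indicators
    # split into 'curriculum planning' and 'partnership working' dimensions.
    fa = FactorAnalysis(n_components=2, random_state=0)
    fa.fit(scores)

    # Loadings: one column per indicator. If the hypothesis held, each
    # indicator would load strongly on one factor and weakly on the other.
    loadings = pd.DataFrame(
        fa.components_, index=["factor_1", "factor_2"], columns=scores.columns
    )
    print(loadings.round(2).T)

With only 46 partnerships and 22 highly correlated indicators, loadings from a model like this tend to be unstable, which is consistent with the poor fit we observed.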

Finally, we applied a regression analysis to see if we could identify the most important indicators in our research model in terms of predicting an ITE partnership's impact score (indicator 9a). We applied backwards regression modelling to the data to produce a viable statistical model.[footnote 2] The best model (in terms of AIC, a relative quality indicator) explained 99% of the variance between partnerships in their impact scores and included the following 13 indicators:

  • there is a clear and coherent rationale for the overall curriculum of the ITE programme (indicator 1a)
  • curriculum aims are shared across the partnership and fully understood by all staff involved in teacher education (indicator 1b)
  • the curriculum design covers important elements of teaching that develop competent trainee teachers (indicator 1c)
  • teacher educators demonstrate an up-to-date understanding of educational research and confidently apply this in their curriculum for trainees (indicator 2a)
  • programme leaders and teacher educators have the knowledge, expertise and practical skill to implement a strong curriculum offer for trainees (indicator 3b)
  • leaders and trainers regularly review and quality assure their curriculum for trainees to ensure it is implemented sufficiently well (indicator 4a)
  • a model of curriculum progression is conceptualised for trainees (indicator 5b)
  • theory and classroom practice are linked and mutually reinforcing in the curriculum design (indicator 5c)
  • leaders and trainers ensure that trainees have the skills to manage behaviour effectively (indicator 5d)
  • mentors are highly effective in supporting trainees and delivering the curriculum (indicator 6c)
  • capacity is provided to ensure that the programme is prioritised across the partnership (indicator 6d)
  • trainees are equipped with the knowledge and skills to support specific groups of pupils (indicator 7a)
  • the curriculum provides parity for all trainees (indicator 7b)

The coverage of the indicators included in the regression output tended to correspond with the findings of the qualitative evidence. This helps us to articulate more clearly the main concepts of ITE curriculum quality.
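As an illustration of the backwards selection procedure, the sketch below drops one indicator at a time, removing whichever deletion most improves (lowers) AIC, until no removal helps. It uses the statsmodels library and the same hypothetical scores table as above; the exact modelling choices referenced in footnote 2 may differ.

    import pandas as pd
    import statsmodels.api as sm

    scores = pd.read_csv("indicator_scores.csv")  # hypothetical, as above
    y = scores["9a"]                   # impact score: the outcome
    X = scores.drop(columns=["9a"])    # remaining indicators as predictors

    def aic_for(cols):
        """Fit ordinary least squares on the given columns and return AIC."""
        return sm.OLS(y, sm.add_constant(X[cols])).fit().aic

    kept = list(X.columns)
    current_aic = aic_for(kept)
    improved = True
    while improved and len(kept) > 1:
        improved = False
        # AIC of the model after dropping each remaining indicator in turn
        trial = {c: aic_for([k for k in kept if k != c]) for c in kept}
        drop, best_aic = min(trial.items(), key=lambda kv: kv[1])
        if best_aic < current_aic:   # keep the drop only if it lowers AIC
            kept.remove(drop)
            current_aic = best_aic
            improved = True

    print(f"Selected indicators: {kept} (AIC = {current_aic:.1f})")

Because AIC penalises extra parameters, this procedure trades predictive fit against model size rather than simply maximising variance explained.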

What are inspectors’ views of the research model?

Following the fieldwork, we also carried out 2 focus groups with 7 HMI who participated in the research visits. This was so we could collect their views on how successfully they felt the research model had been applied in practice.

It was also an opportunity for them to identify any wider concerns about the application of the design. This has important implications for how we can use the research findings for future framework development.

Indicators and rubric

Inspectors commonly mentioned that the design of the research model was important in providing a clear structure for the research visits. This, they stated, along with the focused review method, enabled them to achieve a degree of consistency in their approach to data collection. They felt that, without this design and our definition of curriculum quality in an ITE context, the strengths and weaknesses found across partnerships may have been less clear to articulate.

However, a few inspectors said that the 5-point scale, although useful in making clear the differences in curriculum quality, would probably have worked better as a 4-point scale. This is because the criteria in the rubric for a score of 1 were very much aligned with there being an absence of curriculum design, something unlikely to be found within ITE partnerships. Consequently, some saw this score as something of an irrelevance in the research model.

A few other inspectors took an alternative view. They suggested that the scale could have been refined further to identify finer-scale variation across ITE programmes. They mentioned that they sometimes found variation in curriculum quality between partnerships that had all scored a 4 on the impact indicator. Some of these partnerships clearly had a less effective curriculum design in place, but scoring them a 3 was considered inappropriate because they did not necessarily meet all of the rubric criteria for that band. These inspectors thought that a further category was required to capture this variation.

Although we asked inspectors about the strengths and weaknesses of the indicators, these were not discussed in detail. There was a general sense that, together, the indicators were relatable to the context of an ITE programme. They were, in essence, the right things for inspectors to be looking at, and the evidence collected was telling inspectors something different from standard outcome measures. Inspectors did not express any preference for the most important indicators. Neither did they say that any of the indicators were invalid.

A few did, however, state that including indicators on partnership working added a new lens of investigation to the process that was considered valuable. The relationship between curriculum planning and how well this is delivered across the partnership was considered vital for understanding curriculum effectiveness. A few inspectors noted that quality of mentoring was important to scrutinise in detail for assurance that trainees were receiving a strong curriculum offer.

Inspector questioning

Some of the inspectors felt that the flexible line of questioning that complemented the research model was perhaps just as important as the model itself. This was certainly the case for the visits to the 2 early years and 3 FES partnerships.

Inspectors said that, in these cases, it was the subtle tailoring of the line of questioning to the relevant contexts of these partnerships that was the important discriminator. This suggests that some of the curriculum quality indicators might be context-independent: they appear to work equally effectively across slightly different contexts.

Timing of the visits

Inspectors viewed the summer as potentially both the best and the worst time to carry out the research visits. It was the worst in some cases because some partnerships were unable to take part: their trainees had already completed the course and were no longer available.

It was the best in others because of the value that inspectors got from speaking to trainees at the end of their programme. Trainees tended to provide a useful overview of the totality of the content and sequencing of their ITE programme. This included the value of both their contrasting placements and how theory and practice had improved their understanding and application of teaching. They were, therefore, in a good position to give their views on the impact of the curriculum on their preparedness to teach.

Testing the specifics of the model and visit process at an earlier point in the academic year is likely to be more problematic. The 3 additional research visits to FES partnerships made in the autumn term 2019 confirm this assumption. They provided a useful contrast to the summer term visits. The trainees spoken to in these cases were generally very positive about their initial curriculum offer and the pastoral care they were receiving. However, they had not yet been on placement, so did not yet have any useful views on the quality of partnership working.

Number of placement visits

Inspectors generally felt that 2 school or setting visits were not enough to fully triangulate the evidence collected from centre-based provision.

This was particularly the case in partnerships that had many schools and settings offering placements to trainees. For example, the light-touch process around the selection of providers led to some inspectors visiting contextually similar schools or settings that had been in a long-standing relationship with the partnership. Inspectors suggested that visiting an additional provider that was new to the partnership may have revealed a different experience for trainees and a different perception on the quality of partnership working.

Furthermore, in the 5 instances when only 1 partner provider was available to corroborate the evidence of the central provider, inspectors suggested that they may have over-stated the quality of the curriculum offer. This is because they did not have enough triangulation points for assurance. They suggested, therefore, that carrying out more than 2 school or setting visits would likely have enhanced the validity of the methodology.

Visit method

Inspectors said that the evidence collected from trainees was essential for triangulating the impact of the ITE curriculum.

Often, it was trainees’ views on how well the partnership had prepared them for teaching that revealed underlying issues with the curriculum design. Similar views were expressed about speaking to mentors and subject tutors as part of the evidence collection process. Both appear central to effective curriculum implementation. Therefore, recognising the quality of their support to trainees and the training they have received were important areas for inspectors to unpick.

The views of NQTs were also considered useful. Although not an initial part of the research design, in the absence of trainees a few research visits included speaking with NQTs as an activity for triangulation instead. The NQTs were typically in their first year of teaching and all had trained at the partnerships under scrutiny. They suggested that their first year as a teacher had, if anything, enhanced their awareness of any knowledge gaps from their ITE programme.

Inspectors also frequently mentioned that looking at documents as part of the focused review process was essential. Documentation, particularly trainee files and course handbooks, provided inspectors with a means to validate the accuracy of discourse and to develop new lines of enquiry as necessary.

Focused review design

Inspectors said that, in some circumstances, the focused review process was not useful. This appears to be down to the variance in curriculum design across partnerships. Although there was generally a positive response to how well it captured evidence along subject lines, there was a view that it was less helpful when it came to a focus on other elements of teaching. This was particularly the case in HEI providers with a subject-specific pedagogy model (in which other teaching elements are threaded through the design) and when no experts were in post or available to contribute.

Inspectors explained that, on occasion, the best way of getting at evidence on SEND and behaviour management was to modify the interview questions to probe these aspects in more detail during the focused review of a subject. This suggests that a more flexible methodology alongside a focused review may benefit the quality of data collection.

Early years and further education ITE programmes

The visits to the 2 early years and 3 FES partnerships in the sample showed that the curriculum quality indicators worked well and could be usefully adapted in these contexts. Both of the inspectors who carried out these visits, one an early years specialist and the other an FES specialist, confirmed that the research model design was as effective in drawing out first-hand evidence on the quality of the ITE curriculum as it was for the other partnership types visited.

However, we need to consider further how the model applies to teacher education covering very young children aged 0 to 3. For instance, the focused review method was better suited to partnerships focused on the reception year. Additionally, there are no subject leaders in EYTS. This again suggests that a more flexible methodology may benefit the quality of data collection in these circumstances.

As mentioned earlier, framing the questions so that they are relevant to the context is an important distinction. This indicates that inspectors need experience of the early years or FES sectors so that they can ask training staff and course leaders valid, partnership-specific questions, and so that we can make appropriate judgements on quality.

Observation

We did not include observing trainees in the classroom, which is currently standard inspector practice, in the study design. We deemed it unlikely that a one-off observation of a trainee could validly tell us about how well prepared they were for teaching, given issues with measurement error.

However, several inspectors were concerned about the lack of observation involved in the focused review process. They agreed that observing trainees in order to make a judgement about their teaching ability would be inappropriate. They did feel, though, that observation still had a role to play in the triangulation process of a focused review.

For instance, they suggested that observation would give them a sense of the quality of the education that trainees were receiving. Other inspectors suggested alternative uses of observation to assess an ITE curriculum, including observing mentor and trainee discussions or course modules at the centre-based provision.

The purpose of observation in an ITE context, therefore, needs further consideration.

Conclusion

Overall, the data from our research model provides a helpful alternative view to current practice. The model assesses what we intend it to measure – curriculum quality – rather than replicating existing criteria. Additionally, the evidence points towards a structure and process that works in different ITE contexts and does not favour one curriculum approach over another. This suggests that our research model has good face validity. The outcomes from the model give us a degree of confidence that our plans to assess the broader ITE curriculum are both possible and necessary.

The next steps will see features of the model trialled in pilot inspections (some pilots have already taken place). This will allow us to assess how the indicators and the focused review process for getting at first-hand evidence work under inspection conditions. Our aim is to consult with the sector over the spring term 2020 to gather further views on how we should inspect the quality of the ITE curriculum in the future.

Annex A: Indicators and scoring details

Along with the indicators in figure 1, we also designed a detailed rubric, intended to help inspectors make consistent decisions on the quality of a partnership's ITE programme.

Each indicator was assessed on a 1 to 5 scale, providing a systematic and focused evaluation structure. We felt that the 5-point scale would also remove some aspects of inspectors' unconscious bias associated with the 4-point scale used on routine inspection. From a data-analysis perspective, a 5-point scale is also more likely to produce greater variability in inspector scoring, which would enhance our post-visit analyses. Figure 12 provides details of the categories that each score represents on the 5-point scale.

Figure 12: Categories applied in the rubric for scoring the curriculum indicators reliably

Band Criteria
5 This aspect of curriculum underpins/is central to the partnership's work/embedded practice/may include examples of exceptional curriculum
4 This aspect of curriculum is embedded with minor points for development (leaders are taking action to remedy minor shortfalls)
3 Coverage is sufficient but there are some weaknesses overall in a number of examples (identified by leaders but not yet being remedied)
2 Major weaknesses evident in terms of either leadership, coverage or progression (leaders have not identified or started to remedy weaknesses)
1 This aspect is absent in curriculum design

The scoring of the indicators is a high-inference process: inspectors determined their scores based on the detailed evidence collected during each research visit. The indicators and rubric were not designed for use in a low-inference, tick-box approach.

We included more indicators than necessary – or possible for us to look at – with the intention that post-visit data analysis would allow us to narrow and refine the indicators to just those that are clearly related to ITE curriculum quality.

Annex B: Visit method

Day 1 involved discussions with course leaders and centre-based staff to build a profile of the curriculum thinking and planning at the forefront of the ITE programme design. This would include focused reviews in a maximum of 4 subject areas or aspects relating to teacher education.

Day 2 included visits to 2 schools or settings in the ITE partnership. The focus of discussions covered the main themes identified from the central provider evidence. In particular, inspectors sought mentors’ views covering the deep-dive areas investigated in the central provision. This allowed inspectors to triangulate the evidence with the views of their partners to determine how well the ITE curriculum was being delivered in practice. It also made sure that specific tensions undermining the quality of a programme could be identified. Across both days, inspectors sought trainees’ views, in central provision and the placement providers visited. This was so that inspectors could establish the impact of the programme on trainees’ preparedness to teach. For consistency purposes, inspectors were asked to follow, as much as possible, the research activities in the specified order.

Day 1 research activities included:

  • discussion with senior leaders at the central provider responsible for the ITE programme about curriculum intent
  • on-site document review, particularly looking at trainees' course files
  • focus groups with a range of trainees – not all linked to the subject areas in the deep-dives – to discuss the impact of the programme
  • meetings with 4 subject or aspect leads delivering the programme

The focused reviews with subject or aspect leads included a short discussion on their methods of curriculum planning, to see how this correlated with course leaders' views, and scrutiny of curriculum planning, including looking at course documentation.

Day 2 research activities included visits to at least 2 schools or settings to understand how well the programme is delivered in partnership. This involved discussions with the headteacher and school coordinator alongside separate discussions and work scrutiny of course files with several mentors and trainees. These individuals were selected on the basis of the subjects covered in the day 1 focused review selection.[footnote 3]

On each research visit, inspectors carried out 4 focused areas of investigation with relevant staff. Selection of the areas of focus was largely dependent on the partnership programmes available and required HMI discretion. The availability of programme leaders and other staff often had a bearing on which subjects or aspects of the partnership were studied in detail.

We agreed with inspectors that at least one of these 4 slots looked at coverage on behaviour or other aspects of teacher education (if this was delivered separately from subject knowledge and subject-specific pedagogy).[footnote 4] This was so that enough evidence could be collected to appropriately score the indicators:

  • ‘trainees have the skills to manage behaviour effectively’ (indicator 5d)
  • ‘trainees have the skills to support specific groups of pupils’ (indicator 7a)

There was also an expectation that both core and foundation subjects were included in the focus area selection for primary-based trainee partnerships. Our previous research on primary school curriculums suggested that we may see a marked difference in the quality of the curriculum in these cases.

For the discussion sessions, we designed a series of questions for inspectors to use with each defined group. This ensured that the evidence collected matched the areas of interest in the research model. Additional questions on aspects that were not subject-specific were also created so that inspectors could incorporate these elements into discussions focused on subject knowledge and subject-specific pedagogy. This made sure that inspectors could investigate how coverage on behaviour and other aspects was threaded into more subject-specific curriculum designs. The discussions remained flexible, though, rather than following a typical semi-structured interview format. Inspectors could adapt the questions to pursue lines of enquiry as necessary.

The trainee focus group was designed so that it could incorporate the views of trainees from other subject areas. The rationale for this was to identify if the views from trainees on other courses in the partnership aligned with those studied in greater detail through the deep-dive process.

The fieldwork was carried out by 17 HMI and covered all 8 Ofsted regions. One of these inspectors was designated HMI lead for the project. They worked closely with the senior research lead in developing the research model, visit method and inspector training.

As we were testing indicators that were unproven in their validity, it would have been inappropriate for inspectors to provide feedback on curriculum quality to partnership leaders. We were explicit throughout the fieldwork that these were research visits and not inspections. On this basis, we have applied statistical disclosure methods to the data in this report.

Out of scope

We did not include observing trainees in the classroom as part of the study design. We deemed that a one-off observation of a trainee was likely to be limited in what it could validly tell us about how well prepared they were for teaching, given issues with measurement error. Our recent research on lesson visits indicates that the purpose of observing trainees for inspection needs to be considered carefully. This was beyond the scope of this study.

Additionally, we decided that the focus of the study would be on a single-stage process rather than replicate the 2-stage process in the current inspection framework.

Pilot visits

The HMI lead and research lead carried out 2 pilot visits across different phases and types of partnership to test the integrity of the research tools.[footnote 5] This led to us making some minor adaptations to the discussion questions and the visit process. Although the general method was applicable across different types of provision, some practical consideration was required for the selection of the focused areas. For example, smaller partnerships did not have quite the same range of subjects available as larger partnerships.

Sampling constraints

Despite the intent behind the sampling process, the timing of the research was a problem for the involvement of some partnerships. The fieldwork was planned for June and early July 2019. However, the timing meant that some trainees would likely be unavailable to participate in the fieldwork. Many had already completed their training and were no longer available for interviews or focus groups.

To mitigate this issue, we front-loaded the research visits with partnerships involving HEIs. This was because trainees in these institutions typically finished their training earlier than in school-based training routes. In other instances, course leaders, professional tutors and mentors were unavailable on allocated visit days.

When partnerships were unable to participate in the research, we made alternative visit arrangements. This had some implications for sample balance. In total, 29 partnerships that we contacted about the research were unable to participate in a research visit. In all cases, replacement partnerships were found. However, because the ITE sector is small, these were not always like-for-like replacements.

It is worth noting that, although we had a preference on the order of activities for consistency purposes, the visit schedules were also designed to be flexible. This is because they needed to accommodate staff's and trainees' availability on the visit days. Some schedules were revised to facilitate the visit, which sometimes meant that activities we would have preferred to carry out did not take place. On a few occasions when trainees were unavailable, inspectors spoke with NQTs who had trained with the partnership instead.

Similarly, it was not always possible to arrange the research visits so that they fully incorporated the planned research design. Sampling constraints led to a few visits being carried out with modifications applied to the planned methodology. In some cases, control was ceded to course leaders to ensure that the visit could go ahead. For instance, in 3 visits when logistics were particularly tricky, we accommodated the partnerships’ preferences to carry out trainee, mentor and professional tutor interviews in the centre-based provision rather than holding discussions at the partner schools. Furthermore, in 5 research visits, participants from only one partnership school or setting were available for triangulation.[footnote 6]

Inspector training

All the inspectors participating in the research received an intensive day of face-to-face training. This helped to ensure consistency in how they collected evidence and used the indicators and rubric. Specific training on the research study was further supplemented by wider organisational inspector training on the curriculum and the deep-dive process for the EIF.

The HMI lead carried out quality assurance of the data and evidence forms completed during the fieldwork. This captured a few cases when the evidence collected did not match the statements of the rubric sufficiently well. These inconsistencies were resolved in discussion with inspectors. Generally, quality assurance of the earliest evidence bases from the fieldwork and the feedback provided meant that the quality of data collected improved during the later research visits.

Following the fieldwork, we invited all the inspectors who participated in the research to a focus group to discuss their views on the research tools and methodology. Seven HMI took part, providing information that was important for assessing the validity of the indicators.

Annex C: Indicator data tables

| Impact indicator | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Overall impact (9a) HEI | 1 | 1 | 7 | 8 | 3 | 20 |
| Overall impact (9a) SCITT | 1 | 2 | 6 | 11 | 6 | 26 |

| Intent indicators | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Coherent rationale (1a) HEI | 0 | 0 | 1 | 11 | 8 | 20 |
| Coherent rationale (1a) SCITT | 0 | 1 | 4 | 12 | 9 | 26 |
| Shared understanding (1b) HEI | 0 | 3 | 7 | 5 | 5 | 20 |
| Shared understanding (1b) SCITT | 0 | 4 | 3 | 13 | 6 | 26 |
| Coverage of teaching concepts (1c) HEI | 0 | 1 | 5 | 9 | 5 | 20 |
| Coverage of teaching concepts (1c) SCITT | 0 | 1 | 7 | 13 | 5 | 26 |
| Coverage of educational research (2a) HEI | 0 | 2 | 3 | 10 | 5 | 20 |
| Coverage of educational research (2a) SCITT | 1 | 2 | 6 | 10 | 7 | 26 |

| Implementation indicators | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Clear training roles (3a) HEI | 0 | 1 | 3 | 9 | 7 | 20 |
| Clear training roles (3a) SCITT | 0 | 2 | 5 | 10 | 9 | 26 |
| Trainer knowledge & expertise (3b) HEI | 0 | 0 | 4 | 12 | 4 | 20 |
| Trainer knowledge & expertise (3b) SCITT | 0 | 2 | 7 | 10 | 7 | 26 |
| Regular ITE curriculum review (4a) HEI | 1 | 1 | 7 | 6 | 5 | 20 |
| Regular ITE curriculum review (4a) SCITT | 0 | 3 | 6 | 9 | 8 | 26 |
| Trainer development (4b) HEI | 1 | 3 | 5 | 6 | 5 | 20 |
| Trainer development (4b) SCITT | 0 | 3 | 8 | 8 | 7 | 26 |
| Depth & coverage (5a) HEI | 1 | 3 | 5 | 9 | 2 | 20 |
| Depth & coverage (5a) SCITT | 0 | 5 | 8 | 7 | 6 | 26 |
| Model of progression (5b) HEI | 0 | 1 | 3 | 9 | 7 | 20 |
| Model of progression (5b) SCITT | 0 | 4 | 3 | 14 | 5 | 26 |
| Theory & practice (5c) HEI | 0 | 1 | 5 | 9 | 5 | 20 |
| Theory & practice (5c) SCITT | 1 | 3 | 4 | 7 | 11 | 26 |
| Training on behaviour (5d) HEI | 0 | 1 | 1 | 13 | 5 | 20 |
| Training on behaviour (5d) SCITT | 0 | 0 | 3 | 13 | 10 | 26 |
| Partnership working (6a) HEI | 0 | 3 | 4 | 11 | 2 | 20 |
| Partnership working (6a) SCITT | 0 | 1 | 6 | 10 | 9 | 26 |
| Focus on relevant standards (6b) HEI | 0 | 1 | 5 | 8 | 6 | 20 |
| Focus on relevant standards (6b) SCITT | 0 | 2 | 7 | 9 | 8 | 26 |
| Mentor quality (6c) HEI | 0 | 3 | 5 | 9 | 3 | 20 |
| Mentor quality (6c) SCITT | 0 | 0 | 7 | 15 | 4 | 26 |
| Priority across partnership (6d) HEI | 0 | 3 | 8 | 6 | 3 | 20 |
| Priority across partnership (6d) SCITT | 0 | 0 | 6 | 11 | 9 | 26 |
| Quality assurance (6e) HEI | 0 | 2 | 9 | 6 | 3 | 20 |
| Quality assurance (6e) SCITT | 0 | 3 | 6 | 10 | 7 | 26 |
| Supporting SEND and EAL learners (7a) HEI | 0 | 1 | 7 | 8 | 4 | 20 |
| Supporting SEND and EAL learners (7a) SCITT | 0 | 2 | 4 | 15 | 5 | 26 |
| Parity for all trainees (7b) HEI | 0 | 2 | 5 | 10 | 3 | 20 |
| Parity for all trainees (7b) SCITT | 0 | 1 | 4 | 12 | 9 | 26 |
| Trainee recruitment (7c) HEI | 1 | 1 | 4 | 8 | 6 | 20 |
| Trainee recruitment (7c) SCITT | 2 | 1 | 6 | 7 | 10 | 26 |
| Thoughtful assessment (8a) HEI | 0 | 1 | 3 | 10 | 6 | 20 |
| Thoughtful assessment (8a) SCITT | 0 | 0 | 5 | 13 | 8 | 26 |

Band 1: this aspect is absent from the ITE programme. Band 5: this aspect underpins and is central to the success of the partnership's work, and may include examples of exceptional ITE curriculum design. The SCITT rows include the indicator data from the 2 Teach First research visits.
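For readers working with these figures, the short sketch below (illustrative only, not part of the research) shows one way to summarise the band counts: a weighted mean band per provider type, using the overall impact (9a) rows transcribed from the table above. The column and variable names are our own.

```python
# Illustrative summary of the band-count tables above (not research code).
import pandas as pd

# Two rows transcribed from the overall impact (9a) table:
# counts of providers placed in bands 1 to 5.
data = {
    "indicator": ["Overall impact (9a)", "Overall impact (9a)"],
    "provider_type": ["HEI", "SCITT"],
    "band_counts": [[1, 1, 7, 8, 3], [1, 2, 6, 11, 6]],
}
df = pd.DataFrame(data)

def mean_band(counts: list[int]) -> float:
    """Weighted mean of bands 1 to 5, given provider counts per band."""
    return sum(band * n for band, n in enumerate(counts, start=1)) / sum(counts)

df["mean_band"] = df["band_counts"].apply(mean_band)
print(df[["indicator", "provider_type", "mean_band"]])
# mean band: HEI = 3.55, SCITT ≈ 3.73
```

The same calculation applied to any other row gives a quick, if crude, single-figure comparison between provider types.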

Annex D: List of the 46 partnerships that participated in the research visits

| Partnership name | Region | Partnership type |
| --- | --- | --- |
| Altius Alliance | North West | SCITT |
| Billericay SCITT | East of England | SCITT |
| Birmingham City University | West Midlands | HEI |
| Bromley Schools Collegiate | London | SCITT |
| Buckingham Partnership SCITT | South East | SCITT |
| Cabot Learning Federation SCITT | South West | SCITT |
| East London Alliance | London | SCITT |
| e-Qualitas | South East | SCITT |
| Fareham and Gosport SCITT | South East | SCITT |
| Fylde Coast Teaching School | North West | SCITT |
| GITEP SCITT | South West | SCITT |
| Greenwich University | London | HEI |
| i2i Teaching | South East | SCITT |
| Kingsbridge EIP SCITT | North West | SCITT |
| Leeds Beckett University | North East, Yorkshire and the Humber | HEI |
| Leeds SCITT | North East, Yorkshire and the Humber | SCITT |
| Liverpool Hope University | North West | HEI |
| Mersey Boroughs ITT Partnership | North West | SCITT |
| Mid-Essex Initial Teacher Training | East of England | SCITT |
| Mid-Somerset Consortium | South West | SCITT |
| North Essex Teacher Training (NETT) | East of England | SCITT |
| North Manchester ITT Partnership | North West | SCITT |
| Nottingham Trent University | East Midlands | HEI |
| Poole SCITT | South West | SCITT |
| Sheffield Hallam University | North East, Yorkshire and the Humber | HEI |
| Suffolk and Norfolk Primary SCITT | East of England | SCITT |
| Surrey South Farnham SCITT | South East | SCITT |
| Teach East | East of England | SCITT |
| Teach First London | London | Teach First |
| Teach First South East | South East | Teach First |
| Teesside University | North East, Yorkshire and the Humber | HEI |
| Thamesmead SCITT | South East | SCITT |
| Tudor Grange SCITT | West Midlands | SCITT |
| University College London | London | HEI |
| University of Buckingham | South East | HEI |
| University of Chichester | South East | HEI |
| University of Derby | East Midlands | HEI |
| University of East Anglia | East of England | HEI |
| University of Hertfordshire | East of England | HEI |
| University of Hull | North East, Yorkshire and the Humber | HEI |
| University of Manchester | North West | HEI |
| University of Nottingham | East Midlands | HEI |
| University of Oxford | South East | HEI |
| University of Plymouth | South West | HEI |
| University of St Mark and St John | South West | HEI |
| University of Sunderland | North East, Yorkshire and the Humber | HEI |

Help with this report

How to search the report

Press Ctrl + F (Command + F on a Mac).

This will open a search box in the top right-hand corner of the page. Type the word you are looking for in the search bar and press Enter.

The word will then be highlighted in yellow wherever it appears in the report. Press Enter again to move to the next instance.

How to print a copy of the report

Press Ctrl + P (Command + P on a Mac).

You can choose to print the full report or a selected page range. You can also save the report as a PDF.

  1. This may mean that the higher scoring for planning effectively for trainees' progression through the training programme (indicator 5b) relates more to the sequencing of centre-based content. The partnership working indicators score much lower because they pick up the lack of sequencing at partnership level. 

  2. Backwards regression analysis seeks the model with the best Akaike Information Criterion (AIC) score. The AIC is a relative measure of the quality of a predictive model; a lower score indicates a better fit. The technique starts with all 21 predictor indicators included in the model, then removes the indicator whose removal reduces the AIC score by the largest margin. The process is repeated until no further removal lowers the AIC, leaving the model with the lowest score (see the illustrative sketch after these footnotes). 

  3. The trainees involved in these discussions were from the partnership participating in the research. 

  4. We have previously identified teacher education in behaviour as a weakness. This has been a priority for some time, hence its specific inclusion in the design of the ITE curriculum research model. 

  5. The findings from these 2 pilots are included in the sample of 46 research visits. 

  6. In a couple of instances, 2 schools were originally available on day 1 of the research visit, but some schools later decided that they no longer wished to participate in the research, and there was no time to secure a replacement school.
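As referenced in footnote 2, the sketch below is an illustrative reconstruction of backwards elimination by AIC; it is not the analysis code used in the study. It assumes the indicator scores are held in a pandas DataFrame with one (hypothetically named) column per indicator, and it fits ordinary least squares models, which the report does not specify.

```python
# Illustrative backwards elimination by AIC (assumed OLS; not research code).
import pandas as pd
import statsmodels.api as sm

def backward_select_by_aic(df: pd.DataFrame, outcome: str) -> list[str]:
    """Repeatedly drop the predictor whose removal most lowers the AIC,
    stopping when no single removal improves (lowers) it further."""
    predictors = [c for c in df.columns if c != outcome]
    y = df[outcome]

    def fit_aic(cols: list[str]) -> float:
        # Fit an OLS model on the given columns and return its AIC.
        X = sm.add_constant(df[cols])
        return sm.OLS(y, X).fit().aic

    current_aic = fit_aic(predictors)
    improved = True
    while improved and len(predictors) > 1:
        improved = False
        # AIC of the model after removing each predictor in turn.
        trials = {p: fit_aic([q for q in predictors if q != p]) for p in predictors}
        best = min(trials, key=trials.get)
        if trials[best] < current_aic:
            predictors.remove(best)
            current_aic = trials[best]
            improved = True
    return predictors

# Hypothetical usage: 21 indicator columns predicting overall impact (9a).
# selected = backward_select_by_aic(scores_df, outcome="overall_impact_9a")
```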