The case for the British model of independent regulation 30 years on

The Currie Lecture given by CMA chairman David Currie to the Cass Business School in London on 21 May 2014.

David Currie

It is a great pleasure to be back at Cass to give this sixth lecture in the series. I was greatly honoured when the University established this lecture series in my name, reflecting my interest in regulation both as an academic and a practitioner. And the series has been graced with a number of leading regulators, including Callum McCarthy, Alistair Buchanan, Philip Collins, David Bennett and Martin Wheatley. To have Currie give the Currie lecture is the lecture equivalent of a selfie, and needs some explanation. When two years ago now I suggested to the Dean, Professor Steve Haberman, that I give one of the lectures in the series, I had in mind that, after some twenty years in the regulatory fray and having stepped aside, I would give my retrospective summing-up of the regulatory field and the factors that underpin effective regulation. I did not imagine that I would be following in the footsteps of many distinguished economists and lawyers and building and leading the UK’s new competition agency. But it was not long after that suggestion to Steve that I was asked to chair the Competition and Markets Authority, almost exactly ten years to the day after I was asked to chair Ofcom. So over the past year and a half or so I have been in the thick of it, overseeing the CMA’s formation from the Office of Fair Trading and the Competition Commission, a process which came to successful fruition at the beginning of last month when the CMA opened its doors for business.

However, with your indulgence, while I will come to the CMA and our ambitions for it, I want also to stick to my original intent and talk more broadly about the state of regulation and the role of markets. And because of that wider canvas, what I say this evening should not be attributed to the CMA, but rather be seen as the reflections of a long-in-the-tooth pensioned-off academic who has had the privilege over his career to work in business and government, often explaining business to government and government to business. In the process, I have been able to observe directly many sides of complex economic phenomena.

My key theme tonight is the interaction between politics and the UK system of independent regulation, and the danger that this system is at risk from greater political intervention. There is a strong and welcome degree of political consensus for the principle of independent regulation of markets – that was reflected in the smooth passage of the Enterprise and Regulatory Reform Act 2013 that established the CMA. However, there is also growing pressure on politicians to intervene in a range of markets to achieve particular ends. Whatever the merits of the individual intervention, such interventions taken together could damage the overall regulatory regime. I want to explain why, if our regulatory system were to succumb under this pressure, that would be bad for consumers, businesses, the economy, and ultimately, I believe, the vast majority of politicians.

Let me start by reflecting broadly. In the last five or so years the world economy, particularly in Europe and the US, has gone through a very major financial, and consequently economic, crisis, the worst since the 1930s. This has had a severe impact on businesses and individuals: many businesses that seemed perfectly viable have gone to the wall, many individuals have lost their jobs, those entering the labour market for the first time have found it very difficult to find a job, with severe consequences for their life chances, and many of us who have been much less affected have had to wind down our expectations about the future. Not surprisingly against this background, public faith in the efficacy of markets has fallen sharply, and many more are willing to contemplate government interventions in the operation of market forces. This has been reflected in the discourse of politicians, with price and rate caps being reached for across the political spectrum, a revival of public interest concerns about international mergers, and even calls for nationalisation. Is this just a healthy correction to the overblown faith in markets that preceded the financial crisis, or is the pendulum swinging too far towards government intervention?

There are, of course, good intellectual reasons for scepticism about the operation of the market. General equilibrium theory is taught in university economics courses up and down the country – I was raised at the University of Birmingham on the classic treatises by Debreu, and by Arrow and Hahn – and its simplified essence is also taught in schools. It shows how the market can deliver a Pareto-optimal outcome which, if one is intrepid enough to define a social welfare function, can be socially optimal on the assumption of appropriate redistributive taxes. But as Frank Hahn used regularly and powerfully to say, the analysis relies on very stringent conditions: lump sum taxes, the full panoply of forward markets, and very specific aggregation assumptions, not least to exclude the possibility of increasing returns.

The international trade theory that I learnt at the same time showed the conditions under which free trade delivered optimal outcomes between countries, but these too were stringent, needing to exclude not just the capital re-switching phenomenon that lay at the heart of the now dead Cambridge capital controversy, but also the economies of scale which Paul Krugman showed to be essential in explaining trade between countries with similar industrial structures. Thus one can turn such analysis round and use it to argue the market will only deliver optimal outcomes under highly limited and implausible conditions. That is why the Austrian School’s view of markets and competition as a process of rivalrous discovery, with continual change and evolution, rather than embodying the concepts of optimality and equilibrium, is more helpful, as Stephen Littlechild and others have consistently emphasised.

The same point applies to the seminal work of Ronald Coase, often cited by market proponents wishing to argue against government policy interventions. His insight that refining property rights can help to resolve market failures arising from externalities is crucial. But Coase was a pragmatist and recognised that transaction costs could well make coordinated action by scattered individuals infeasible, leaving scope for government to influence outcomes, for better or worse.

One can argue similarly in the field of macroeconomics. The rational expectations revolution led to a major rethink of how intelligent agents respond to macroeconomic phenomena including government policy interventions. Much of the thrust of the literature was to show how government policies were ineffective because of the adjustment of expectations to the new policy. But this was true only in very specific circumstances: essentially the same aggregation and other conditions required for general equilibrium theory to show the efficacy of markets. If I may here indulge in a little personal footnote, my colleagues Paul Levine and Joe Pearlman and I were analysing the characteristics of control rules in dynamic macroeconomic models, with a view to identifying simple robust policy rules (a line of inquiry also pursued by James Meade with his team, including Martin Weale and David Vines, with whom we collaborated).

But with a view to making the assumptions more realistic, we moved to analyse the consequences if agents had less than perfect information. With a succession of wet towels we were able to derive an analytical solution in the restricted case where agents shared the same partial information set, but for the case where information sets differed we had to resort to computational methods, with the deployment of very considerable computer power. This was a ‘reductio ad absurdum’: the notion that agents were acting as though they could solve this complex calculation was not tenable. And if one instead assumes plausibly that they adopt a simple heuristic (à la Kahneman and others), then only in very simple and unrealistic models is there any possibility that learning will lead to the rational expectations outcome. So we had gone down a fascinating intellectual blind alley.

Thomas Piketty has hit the headlines with a somewhat different macroeconomic perspective on wealth accumulation. He reminds us that if rates of return exceed the rate of growth, the historically normal state of affairs, then the natural laws of capital accumulation mean that wealth distribution will tend to become more unequal over time, a trend interrupted in the twentieth century by the wealth appropriated and destroyed by two world wars. This raises important questions about income and wealth concentration, as well as industrial concentration, that may be used to justify government intervention. Market processes may well generate major net benefits, but if these benefits are not shared their fairness may be questioned, and noisily amplified in the political discourse by the media.
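The mechanics behind Piketty’s observation are simple compounding: if wealth earns a return r while incomes grow at g, the wealth-to-income ratio grows at roughly r − g per year. A minimal sketch of that dynamic follows; the 5% return and 2% growth figures are illustrative assumptions of mine, not numbers from the lecture or from Piketty.

```python
# Illustrative sketch of the r > g dynamic: wealth compounds at return r,
# income grows at rate g, so the wealth-to-income ratio drifts upwards.
def wealth_income_ratio(r, g, years, initial_ratio=1.0):
    """Wealth-to-income ratio after `years`, starting from initial_ratio."""
    return initial_ratio * ((1 + r) / (1 + g)) ** years

# Assumed figures: 5% return on capital, 2% income growth.
# At this gap the ratio roughly doubles in about a quarter of a century.
print(wealth_income_ratio(r=0.05, g=0.02, years=24))
```

The point of the sketch is only that a persistent gap between r and g, however modest, cumulates into a large shift in the wealth distribution over a generation.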

This is all the more the case when the authorities take actions in the interests of market stability which have at least the appearance of bailing out guilty parties. Thus Tim Geithner referred in Saturday’s Financial Times to “this sense of deep unfairness, of rewarding the unjust, rewarding the arsonist”.

These various arguments all suggest reasons for challenging the outcomes of market processes, and provide the possible basis for government intervention to improve matters. But, of course, there are equally compelling reasons why such intervention may not help. The extreme case of central planning has been discredited by history.

A fact that I don’t much advertise is that my first qualification in economics was a masters degree in national economic planning which I studied at the University of Birmingham having abandoned a PhD in mathematics. Much of the course was essentially mathematical programming, relatively trivial after Gödel’s Theorem, so I had plenty of time to attend a wide range of postgraduate economics courses that gave me my real economics education. I remember musing on the way to the course interview as to how on earth central planning could coordinate the diverse range of economic activities that I was observing from my train window. The nearest answer came in the proof of the double decomposition theorem, showing the conditions under which an optimisation problem could be decomposed into a simple high level optimisation providing the parameters for a subset of local optimisations, justifying the planning subsidiarity then practised in Poland and Yugoslavia.

Of course, the conditions for such a decomposition were as stringent and unrealistic as the Arrow/Debreu conditions. Peter Wiles of the LSE came to Birmingham to regale us with entertaining stories of the horrors of Soviet planning, including the inability or unwillingness of the planners to get loo paper to Leningrad over a twenty year period, as well as charging the Indian planners with killing through starvation more people than Stalin.

The course, long since justly dead, reinforced the initial insight of my train journey and left me with a healthy scepticism about government planning. And of course the Soviet Union was brought down by the failures of planning. As an illustration, the comprehensive enterprise-level data available in East Germany after the Berlin Wall came down showed that 90% of enterprises were unprofitable at world prices, and some 40% were actually value-destroying: that is, the value of the raw materials (excluding capital and labour) going into the production process exceeded the value of the final product – the sausage coming out of the factory was smaller than the one going in.

That lesson is now well learnt and few, if any, argue for a return to central planning. But, as an aside, it is possible to ask whether the lesson has been sufficiently learnt: the huge sprawling multinational companies, larger than many national economies, are the last bastion of central planning and may, for example in the case of the complex banks, be too large to manage, as was perhaps revealed in the financial crisis.

So with all that theoretical throat-clearing, we are left, unsurprisingly, in the commonsense middle ground of recognising that there is no practical alternative to the market, but also recognising that market outcomes may not always be for the best. Market failures may be rife and may be susceptible to government intervention to improve matters, but as Coase argued we must be as alert to government failures as market failures.

And examples of government failures are easy to find – just flick through the recent book on ‘The Blunders of Our Governments’ by Tony King and Ivor Crewe. Let me offer three of my own.

In the sphere of macroeconomics, in his classic study of the post-war business cycle, JCR Dow showed that fine-tuning of the business cycle had been mildly destabilising, adding to, rather than reducing, volatility. That was a conclusion drawn from, and relevant to, the immediate postwar era of relative macroeconomic stability of fixed exchange rates and limited capital mobility. Fine-tuning – trying to smooth out mild business cycle fluctuations – was problematic for a number of reasons. But it does not vitiate coarse-tuning of major economic fluctuations – and indeed we have just passed through an episode where government intervention in the form of quantitative easing or outright monetary transactions has almost certainly averted a repetition of the failures of the 1930s, chronicled so effectively by Milton Friedman and Anna Schwartz.

In the field of international trade, the literature on effective protection shows the unintended consequences of putting in place import tariffs in response to political lobbying. The immediate postwar pattern of import tariffs was very varied, with the nominal tariff rate typically rising through the production chain, with lower protection on raw materials and much higher rates on finished products. The consequence was twofold. First, the effective or real degree of protection was hugely in excess of the nominal – nominal tariffs of 20% could easily generate real effective rates of around 100%, so that what might have been intended as a helping hand to domestic industry amounted to solid protection and feather-bedding. Second, variations in nominal tariffs led to great variation in the real level of protection afforded to different domestic sectors, leading to inefficiencies in the allocation of resources and investment. Europe learnt this lesson quickly, and dismantled tariffs; Latin America was a slower learner and suffered from lost economic growth.
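The gap between nominal and effective protection can be made concrete with the standard textbook formula for the effective rate of protection, ERP = (t − a·t_i)/(1 − a), where t is the tariff on the finished good, t_i the tariff on imported inputs, and a the share of inputs in the product’s free-trade value. The sketch below is mine, not from the lecture; the 80% input share is an assumption chosen to reproduce the 20%-to-100% figures cited above.

```python
# Effective rate of protection: how much a tariff structure raises the
# returns to domestic value added, rather than the final-goods price.
def effective_rate(t_output, t_input, input_share):
    """ERP = (t - a*t_i) / (1 - a), the standard textbook formula."""
    return (t_output - input_share * t_input) / (1 - input_share)

# Assumed figures: a 20% nominal tariff on the finished good, duty-free
# imported inputs making up 80% of the product's free-trade value.
erp = effective_rate(t_output=0.20, t_input=0.0, input_share=0.80)
print(f"{erp:.0%}")  # prints "100%"
```

The intuition: with thin domestic value added, the whole tariff wedge accrues to that thin slice, so a modest nominal tariff can multiply into very heavy real protection – and uneven nominal tariffs multiply into wildly uneven real protection across sectors.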

A high price was paid for muddled government intervention. But the correct lesson is not that all protection is necessarily harmful: Paul Krugman’s work shows at a theoretical level that this need not be so, and the development history of many parts of the world shows the benefits of some protection in the early stages of industrialisation. But key to success is keeping it simple, to avoid unanticipated side-effects, and not piling intervention on top of intervention, which will assuredly result in poor outcomes.

That was very much in my mind when I took my role at Ofcom in 2002. An early challenge was our approach to telecoms regulation and BT. We inherited from Oftel somewhere between 150 and 300 separate regulatory interventions (depending on how you categorised them) into BT’s business. Yet although there were very many companies in the telecoms space, almost all were minnows and wholly dependent on the regulatory drip-feed. What I think had happened was that regulation had grown in a non-strategic way: while each intervention could be rationalised in its own terms, the unintended consequences of piling regulation on regulation were not properly analysed. At my first, very early, meeting with Christopher Bland and Ben Verwaayen, when I was the only person in Ofcom, I said that we had so many hooks into BT that they could scarcely move without our say-so; but that despite this we weren’t achieving our key policy goal of effective competition. I said that there must be a better way. BT and the new Ofcom ran with that germ of an idea and the result was functional separation, the creation of Openreach and the requirement for equality of access: competitors to BT Retail were given the same access to BT’s core network as BT Retail itself. The result was the take-off of the UK broadband market, appreciable deregulation and the emergence of scale competitors to BT.

Each of these examples illustrates a key lesson. Because of the risk of government failure, government intervention needs to be carefully limited and focused on tackling the most egregious market failures. The intervention may not be simple – functional separation was a technically complicated intervention – but it needs to be focused. And a key difficulty for government policy-making is that lobbying of government by interest groups encourages the opposite – interventions that are less focused and wider ranging, aimed at satisfying a wide group of diverse interests.

That brings me finally to the central focus of this lecture. If we are to design careful government interventions in markets, what is the merit of government giving powers to an independent regulator, whether interest rate setting to the Monetary Policy Committee or price setting powers to a sector regulator? And what safeguards are needed to ensure that this arrangement works satisfactorily? And what is the nature of the independence, and how is it exercised and protected?

There are a number of reasons why such an arrangement makes sense.

First there is the argument that I have just developed, that delegation in this way is more likely to deliver focused, effective interventions, because it will be less susceptible to diverse lobbying influences.

Second, as in many other areas of life, there is advantage in clear delegation within a well-defined and contained framework. This facilitates greater transparency around the rationale and process for decision-making, improving predictability and reducing uncertainty.

Third, independent regulation may allow a greater commitment to beneficial long term objectives that it is hard for government itself to deliver on. That was the motivation for delegating interest rate setting to the Monetary Policy Committee with a clear and transparent objective to pursue low and stable inflation. Before this delegation, much of the discussion about future interest rates concerned political factors; afterwards the discourse focused much more on economic factors. That greater clarity and certainty arguably yields a benefit in the form of a lower risk premium in long term interest rates. Similar considerations motivate the delegation of pricing of essential facilities: independent regulation using the well-tried principles of the regulatory asset base can encourage long term investment by the private sector without an undue risk premium being required, and will result in direct benefits to consumers in the form of lower prices. Hence the form of regulation in energy, water, transport and parts of telecoms.

The economics literature formalises this as the issue of time inconsistency: Government may determine an optimal policy path, but the passage of time alone renders the original path suboptimal and, without some form of commitment to the original path, Government will renege and re-optimise. But that reneging will be anticipated by intelligent actors, and that anticipation leads to a severely suboptimal outcome. Delegation of powers, carried out in the right way, can provide a mechanism for pre-commitment that overcomes this problem. It was this issue that motivated much of my macroeconomic academic research, focused on finding ways to design robust, simple macroeconomic policy rules. Robust and effective delegation mechanisms are the equivalent challenge at the sector level.
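The time-inconsistency argument can be made precise with a stylised Barro–Gordon model – the standard textbook illustration, not one used in the lecture. A policymaker would like an inflation surprise but agents anticipate it:

```latex
% Stylised Barro--Gordon illustration of time inconsistency.
% Policymaker's loss in inflation \pi, given expectations \pi^{e}:
\[
  L \;=\; \tfrac{1}{2}\pi^{2} \;-\; \lambda\,(\pi - \pi^{e}),
  \qquad \lambda > 0 .
\]
% Under discretion the policymaker takes \(\pi^{e}\) as given and optimises:
\[
  \frac{\partial L}{\partial \pi} \;=\; \pi - \lambda \;=\; 0
  \quad\Longrightarrow\quad \pi = \lambda .
\]
% Rational agents anticipate this, so \(\pi^{e} = \lambda\); the surprise
% term vanishes and all that remains is the cost of inflation:
\[
  L_{\text{discretion}} \;=\; \tfrac{1}{2}\lambda^{2}
  \;>\; L_{\text{commitment}} \;=\; 0 ,
\]
% where commitment means a credible, pre-announced \(\pi = \pi^{e} = 0\).
```

Discretion delivers inflation λ with no output gain; credible commitment – for instance, delegation to an independent agency with a clear mandate – strictly dominates, yet without a commitment device the zero-inflation promise is simply not believed.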

Fourth, delegation to independent regulators may well allow the regulator to build up a much greater concentration of sector and technical expertise than is possible within central government. While it would be wrong to overstate the point, the typical career development within the central civil service is one of rotation from one area to another, and therefore tends to favour general administrative skills over technical skills. An independent regulator may well be able to attract and retain a stronger body of technical specialists, with benefits for the quality of regulation. This applies to the competition area as well as to the sector regulators.

Fifth but importantly, there may be matters seen to be of legitimate public concern in which government should not intervene. The obvious example is content issues in the media. It has long been accepted that there is content that is within the law but whose dissemination should nonetheless be controlled: in broadcast media, regulators (now Ofcom) have enforced standards – over and above what the law requires – for accuracy and impartiality and to avoid harm and offence, though with increasing difficulty in a world of proliferating channels. Given the sensitivity of these issues, government has preferred this to be done independently and at a distance. Much of the controversy over the Leveson Inquiry centred on the case for similar standards in the print media.

Finally, any satisfactory regulatory arrangement needs a strong and robust appeals system, either to independent tribunals or the courts (or to the CMA in the case of some regulatory decisions), and such appeals are an essential part of a healthy regime. But without delegation of decisions to independent agencies, the government may well find itself mired in a set of appeals and challenges that weaken its broader authority to the detriment of its overall effectiveness.

These reasons provide a powerful case for delegation of a number of key areas of decision-making to an independent regulator. The broad shape of that delegation has now evolved into a well-established structure. Government through Parliament sets out in statute the duties and powers of the regulator as well as its governance structure. Appointment of those charged to exercise these powers in the discharge of the duties is by government, though (at least in the UK) with some independence of the process. They then exercise their powers in the way they judge appropriate, and they are held to account by public scrutiny, including from select committees and usually by appeal to the courts or specialist tribunals.

Although the broad shape is similar, there are important differences across different areas of delegation. Some (notably the Monetary Policy Committee) have very clear and simple objectives, while others (notably the sector economic regulators) have long shopping lists of objectives. Powers have also varied appreciably: thus while all sector economic regulators now have (or shortly will have) concurrent Competition Act and Enterprise Act powers, that is only recent. While the model of collective board responsibility is now quite general, that too is relatively recent. And the way in which particular interests (the industry, consumers, the devolved nations) are represented (or not) in the board structure varies very considerably. These differences partly arose from differences in the nature of the devolved task; but in large part they reflected the fact that much regulation sprang from government departmental initiatives at different times, without any overall and consistent regulatory philosophy. And playing into all of this is the framework of European law, which varies in its force across sectors.

A crucial design question is just what is delegated to the regulator and what retained for government, and there is considerable variation across different areas. Delegation of interest rate setting to the Bank of England was straightforward and clear, but government retains the power to change the inflation target, as it has done once. In the field of sector regulation, the need to put such delegation in statute has led to ever more complex legislation, and sometimes to surprising results: thus Ofcom in practice negotiates on behalf of HM Government in international agreements on spectrum, but somewhat surprisingly has no national security duty despite the centrality of telecommunications to security matters.

Central to this arrangement is the independence of the regulator. Effective regulation needs to be independent of both the industry and government but at the same time it requires a thorough understanding of the regulated industry, and sensitivity to political currents. And independence is also earned: earned through the quality of the analysis, insights and solutions that the regulator offers; and earned through the long term benefits delivered to consumers. Of course, the regulator suffers from an asymmetry of information: the regulated firms will inevitably know far more about their business than the regulator. But through careful research and analysis, the regulator can know more about the industry as a whole and how it works than the individual firms within it.

And that is the key to the regulator’s reputation and authority. Where there is a perception that this insight is lacking, then that authority will wane. One of the great strengths of the British competition regime is its long and distinguished track record: the quality of analysis and actions that has flowed from the OFT and Competition Commission has earned respect and understanding for the benefits of allowing the regulatory process to run its course.

The broad delegated structure of regulation that I have described is well understood both in the UK and internationally, and indeed represents a key British policy export, both within Europe and more widely. This provides both stability and assurance to business, and helps to make the UK an attractive place for international investors. Within Europe, there has been a considerable drive towards independent regulation as a means of creating a genuine internal market, achieving a more consistent and vigorous application of EU-wide rules without regard to national preference. The EU regimes for telecoms and energy illustrate this well. And just two sectors illustrate the long run benefits of letting competition work: telecoms where data costs have plummeted, fuelling the digital economy and the UK’s strength in e-commerce; and air transport which has been transformed by new entrants with new, innovative business models.

The competition regime, now overseen by the CMA, illustrates the considerable benefits of a delegated structure of regulation very well. The UK regime has evolved over the past few decades from a market and merger regime in which governments, driven by poorly defined public interest considerations, could influence outcomes to one in which the regime is independent and respected, with a considerable body of case law, and in which wider public interest considerations (as opposed to a focus on effects on competition) apply only in clearly defined and limited areas. And with this evolution has come a widespread acceptance across political parties that competition is generally to be welcomed.

There is always the temptation for governments to want to influence specific outcomes. But politicians of all stripes need to bear in mind at all times the costs of so doing, and why the regime has evolved in the way that it has. While there may be political attractions for intervening in particular cases, the long term costs are potentially considerable and widespread, if not so obviously tangible. Political interventions often have undesired side effects, and most certainly damage the credibility of the regime, raising uncertainty for business and reducing the attractions of investing at home. And domestic interventions may well be copied internationally, raising the barriers and costs for British businesses operating internationally.

The better route is to work with the institutions that successive governments have created and developed and, where appropriate, to find constructive and beneficial adjustments to the regime. That may be done through legislation, such as we have seen in the competition and consumer area with the Enterprise and Regulatory Reform Act 2013, which brought the CMA into being. As I have argued elsewhere, that represented a judicious adjustment and enhancement to the regime and one that will, I think, be seen as a significant strengthening of it. It requires us to operate to shorter timescales, needed in a fast-moving world. It gives us some additional powers of investigation. And it enables us to conduct more focused inquiries on market issues that span sectors.

Three particular features of that reform are relevant to my broader theme. The first is that the legislation requires government to give the CMA a high level strategic steer. Some have pointed to the strategic steer that government has given us as a weakening of our independence. I could not disagree more strongly. It is important that any regulator, including the competition authority, is sensitive to both political currents and commercial realities. And it is important that there is a continued dialogue between the regulator and Government (as well as other stakeholders) on a “no surprises, no veto” basis. The strategic steer makes that high-level communication, which might otherwise be covert, open and transparent.

And that is consistent with the current government’s “Principles for Economic Regulation” which suggests that such a “Strategy and Policy Statement” can be a way of aligning regulation with the broad thrust of government policy while maintaining regulatory independence. In our case, the CMA must have regard to the steer, but is not bound by it. And were we to be steered to an issue which we judged to be without substance, then the CMA Board would be obliged to explain in public why we reached that view – that is the transparent way that independence would be asserted. Given the high-level and well-positioned nature of the current steer, I would be very surprised if that were necessary. And I see the strategic steer as an effective and helpful way for government to hone the framework of delegation laid down in legislation.

It is, however, important to note the risks if the steer moves from the strategic to become too prescriptive and detailed. That has not happened in our case, and nor do I think it likely in future given the economy-wide role that the CMA plays, as recognised by all parties during the passage of the ERR Act 2013. But governments can go too far in prescribing courses for regulators and independent bodies to take and can – as King and Crewe note – fail to engage with the practical implications of implementation. For example, it is now recognised by all parties that the energy sector – which has particular political salience at this moment – involves trade-offs between domestic prices, investment in supply, environmental concerns and other issues. But as Tim Tutton has recently suggested, under the previous government, the early Social and Environmental Guidance to Ofgem in 2004, following the 2003 Energy White Paper, was both more detailed on the key issues of decarbonisation, security of supply, competition and affordability, and rather blithe in its suggestion that there were no trade-offs to be addressed.

Thus: “The government has not sought to rank the four objectives set out in the White Paper. It is the government’s view that these objectives can be achieved together and the government has put in place policies designed to achieve this.” Policy-making of this kind, prescribing detailed aims without considering whether they are feasible and achievable, leaves the regulator open to blame, cast as the punch-bag when things don’t work out. But with strong analysis and intellectual courage, the regulator can inform the public debate to make clear what the inherent trade-offs and the limits of the possible may be.

The second is the greatly strengthened competition concurrency regime in the regulated sectors. A concern expressed by the government and others about the UK competition regime has been the relative paucity of competition cases in the regulated sectors. These sectors account in total for some 25% of the economy. They are also typically characterised by monopolistic or oligopolistic market structures. This might suggest the need for more, rather than less, competition enforcement than in other parts of the economy – hence the concern.

Despite having concurrent powers, the OFT was reluctant to get drawn into consideration of the regulated sectors. The CMA, by contrast, is tasked with taking the lead in coordinating competition actions in the regulated sectors and will cooperate with sector regulators through the newly formed UK Competition Network. And the ERRA 2013 – by strengthening the requirement for regulators, before they exercise direct regulatory powers, to consider whether it would be more appropriate to exercise competition powers – is intended to make more likely the adoption of pro-competition outcomes in these sectors. Greater reliance on markets and competition will place less burden on regulation to safeguard consumer interests, recalling Stephen Littlechild’s famous quote of 1983: “Competition is indisputably the most effective – perhaps the only effective means – of protecting consumers against monopoly power. Regulation is essentially the means of preventing the worst excesses of monopoly; it is not a substitute for competition. It is a means of ‘holding the fort’ until competition comes.”

The third is the role that the CMA is asked to play in acting as an advocate for competition within government, as part of our wider advocacy role. This is something that the OFT has done in the past, but we are required to do it more actively. A number of competition issues arise from decisions made within government, by departments addressing key policy issues and inadvertently in the process doing things that impede competition. This has happened with privatisation and with the opening by government of new markets. There is an important role for the CMA to act as an advocate for competition within government, and to work with other government departments to design policies, before implementation, that are as friendly as possible to competition, rather than having to sweep up the problem afterwards. Some of that work will be in public, and there is now an important obligation on government departments to accept CMA recommendations or explain their contrary reasoning to the highest levels of government. Some will be behind the scenes, so you may not hear a lot about it. But it may well prove to be as important a part of the CMA’s work as those parts much more in the public eye.

These last two points lead me to my concluding theme for this lecture. The CMA’s primary duty, laid down in the ERRA 2013, is “to promote competition, both within and outside the United Kingdom, for the benefit of consumers”. The reference to “outside the UK” is not some imperialist throwback, but rather an endorsement of the important international dimension to our work, recognising that true competition traverses boundaries. Our overall mission is to make markets work well in the interests of consumers, businesses and the economy. This echoes the OFT’s mission, but expands on that. It mentions the interests of business not to set those interests against those of consumers, the focus of our primary statutory duty. Rather it is because open, well-functioning markets are very much in the interests of efficient, innovative and fair-dealing businesses as well as consumers.

And it mentions the economy because of the centrality of effective competition to the longer term growth and innovation performance of the economy.

How best to deliver benefits to consumers is not always obvious. I well remember that early on, during our strategic review of telecoms, consumer groups criticised Ofcom for focusing on the companies, not the consumer. It was perfectly true that we focused on the interconnect arrangements between BT and other suppliers. But that was what was required to get the market working properly: once fixed, effective competition drove down prices and drove up broadband take-up, with huge consumer benefit. Making markets work better through competition is the surest way to achieve our central goal.

The CMA delivers on this mission to deliver benefit to consumers in two main ways: enforcing competition and consumer law, to ensure that harmful and illegal market behaviour is punished and deterred; and by using our broad powers to make markets that are performing poorly work better for consumers and businesses. Among the key tools at our disposal are market studies and market inquiries. Market studies are the Phase 1 tool deployed hitherto by the OFT: these may lead to changes in market practice through agreed undertakings as a consequence of the investigation, or to a referral to a Phase 2 market inquiry (hitherto undertaken by the Competition Commission). The key point about these market tools is that there is no presumption that anyone is behaving illegally (though an inquiry may uncover such behaviour), but simply that the market is not working well and can be improved. The Enterprise Act gave the Competition Commission the power to impose proportionate and appropriate remedies to improve market performance, and that power has passed to the CMA.

This is a very powerful tool, and one that is widely admired around the world. Its importance is highlighted by several themes that I have touched on in this lecture. First, there is the point that, while markets represent the most effective way to organise complex and dispersed economic activity, markets do not always work well. This may be because there are impediments, such as entry barriers, to competition. It may also be because competition takes a malign form, with businesses competing to gouge, rather than serve, customers. Second, there is the point that designing market interventions that enhance market performance is a complex, difficult and time-consuming task, and one that is best done calmly and out of the political spotlight. And that is particularly so because it requires a lot of careful analysis to avoid interventions that have unconsidered consequences.

A graphic example of all these points is provided by that maestro of financial story-telling, Michael Lewis, in his latest book, ‘Flash Boys’. I cannot comment on the veracity of all of his account, but it has a sufficient ring of truth to be worth recounting, since it shows how the law of unintended consequences can operate.

In 2005, Congress was concerned to block the practice of front-running, whereby those executing a financial trade on behalf of an investing client could trade in advance, thereby benefiting from the market price adjustment that the commissioned trade would induce. To do so, they passed Reg NMS, which Lewis describes as “well-meaning and sensible” and which, if market participants abided by its spirit, would have established new fairness in US financial markets. Lewis says that “instead it institutionalized a more pernicious inequality”. How did this happen? It replaced the notion of “best execution” with the tight legal concept of “best price”, defined by the concept of the National Best Bid and Offer.

In practice, this had unforeseen consequences. The strict definition of best price meant that someone executing a $100mn trade on behalf of an investment fund would have to look first at the price on offer for a $10 trade, and move up the price gradient irrespective of block size on offer. Of course, all of this is done automatically and almost instantaneously, and common sense would suggest that it is therefore of no consequence and assures the best possible deal for the trader’s client.
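The mechanics of that price-ordered sweep can be made concrete with a stylised sketch. This is not the actual Reg NMS machinery, and the prices and sizes are invented for illustration, but it shows why a tiny offer posted at the very best price is always hit first, however large the incoming order:

```python
# Stylised sketch (assumed figures, not real market data): a large buy
# order must sweep the consolidated book in strict best-price order,
# regardless of the size on offer at each price level.

# Consolidated offers across exchanges: (price, shares available)
book = [
    (100.00, 100),        # tiny "canary" offer at the best price
    (100.01, 50_000),
    (100.02, 200_000),
    (100.05, 1_000_000),
]

def sweep(book, shares_wanted):
    """Fill a buy order by strict price priority, as 'best price' requires."""
    fills = []
    for price, size in sorted(book):   # cheapest offers first
        if shares_wanted == 0:
            break
        take = min(size, shares_wanted)
        fills.append((price, take))
        shares_wanted -= take
    return fills

fills = sweep(book, 500_000)
print(fills)
# The tiny top-of-book offer is hit first - whoever posted it learns,
# fractions of a second before anyone else, that a big buyer is at work.
```

The point of the sketch is that the 100-share offer functions as a tripwire: it costs its poster almost nothing, yet under a strict best-price rule it cannot be skipped.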

But of course in matters of this kind, common sense is a poor guide. The unforeseen consequence was to create a race for speed, measured in microseconds, a tiny fraction of the blink of an eye. If I post a low offer of $10’s worth of stock, I get the first information about a major trade. If I can then use that information to beat the trader to their next best and subsequent trades, I can front-run, and on a big scale. I have a much better chance of doing that if I am allowed to set up inside the exchange, so that I have a head start. Even better if, in the name of competition, I am able to establish an exchange some distance from others, but with an ultra-fast private link to other exchanges. Then I can lure the ordinary trader to remote spots, where they have no chance at all of beating me on the return journey. And even better still, if I do not actually have to execute a trade at the price that has lured my victim to the outback, because the rules of my exchange allow for retractable offers, allowing costless bait.
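The speed race described above is, at bottom, simple arithmetic. The figures below are invented for illustration (real routes and latencies vary), but light in fibre covers roughly 200 km per millisecond, about 5 microseconds per kilometre, so whoever has the shorter, cleaner route wins:

```python
# Back-of-envelope sketch of the latency race. All distances and
# overheads here are assumed for illustration, not measured figures.

US_PER_KM = 5  # approximate one-way fibre latency, microseconds per km

def one_way_us(route_km, overhead_us=0):
    """One-way latency in microseconds for a fibre route plus switching overhead."""
    return route_km * US_PER_KM + overhead_us

# The investor's order fans out to a second exchange over a public route;
# the colocated firm races it over a shorter private link.
order_latency = one_way_us(route_km=80, overhead_us=200)   # 600 us
racer_latency = one_way_us(route_km=60)                    # 300 us

head_start = order_latency - racer_latency
print(f"Front-runner arrives {head_start} microseconds early")
# A few hundred microseconds is imperceptible to a human - an eye blink
# takes around 300,000 - but ample time for a machine to reprice.
```

On these assumed numbers the front-runner reaches the second exchange 300 microseconds before the order it is racing, which is why the geography of exchanges and links matters so much in Lewis’s account.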

As I have said, I do not know the truth of all of this, which is under investigation by several leading financial regulators, but if even a small part of it is true then it is a major scandal. Close to two-thirds of trades on US financial markets are now accounted for by high frequency trading, and if Lewis is right, returns to ordinary financial investors, including the insurance and pension funds so many of us rely on, are correspondingly depressed. The counter-argument that high speed trading helps market liquidity would seem nonsense: holding a stock for a microsecond, and ending every trading day with no position, provides no meaningful liquidity to markets. It would have John Hicks, that doyen of monetary economics who did so much to define and refine the concept of liquidity, spinning in his grave.

This account illustrates the key point that well-intentioned regulation may have perverse consequences: while many will put effort into compliance, there may well be those who find the way to avoid and profit. It also illustrates another point: that an increase in competition, in this case through the proliferation of competing exchanges, may not always be benign.

Promoting effective competition on a fair basis is likely to be the best way to improve outcomes for consumers. But it may require more than that. The payday loan market has many competing providers, but the OFT referred that market to a Phase 2 market inquiry (still in progress) because it suspected that this competition was working to the disadvantage of vulnerable consumers. Despite the fact that there are six large competing energy companies, the levels of dissatisfaction with the operation of that market are very high; that is one among several reasons why, following the joint OFT/CMA/Ofgem competition assessment, Ofgem is consulting on whether to refer the energy market to a Phase 2 market inquiry.

In some cases, behavioural remedies may be the right way to go, as the Competition Commission judged in its inquiry into the audit market. In others, structural remedies in the form of divestment may be appropriate: the break-up of BAA by the Competition Commission has led to much improved customer experience at London airports. Divestment is currently being required in the London private hospital market. And in some cases a package may be called for. Although structural separation of BT was considered in the early days of Ofcom, and indeed proposed by me before becoming chairman, the end solution of functional separation was a hybrid – some organisational change but as importantly behavioural changes, overseen by appropriate governance and audit.

There is no science to the devising of remedies that correct failings in markets while avoiding adverse side effects. But there is no substitute for deep, considered analysis, so that remedies are based on a sound understanding of how a market operates and focused on the features that need adjusting. And that takes time. I remember my former London Business School colleague, Paul Geroski, a very fine industrial economist who was a Deputy Chairman and then Chairman of the Competition Commission for a very short time before his tragically early death, describing to me the thrill he got from deep immersion in a market and coming to understand exactly how it works. That takes time, diligence, objectivity and independence. That has underpinned the reputations of the OFT and Competition Commission, and is what the CMA is determined to uphold.

Published 23 May 2014