A new fibre optic network link between London and Paris commissioned last month by EXA Infrastructure sends data on a round trip of just 5.5 milliseconds — take a breath, count “one Mississippi, two Mississippi” and your data would have flashed between the two capitals over 360 times; rather faster than the Eurostar. The link is one of 96 such pairs on the 2,400 Tbps capacity “CrossChannel” subsea cable owned by Crosslake Fibre that became ready for service in December 2021. (CrossChannel connects the Equinix LD4 data centre in Slough to the Interxion PAR3 and Equinix PA7 data centres in Paris, touching various points-of-presence in between.)
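The back-of-the-envelope arithmetic behind that claim is simple — a sketch using the 5.5 ms round-trip figure quoted above and a roughly two-second count:

```python
# Round trips completed while counting "one Mississippi, two Mississippi" (~2 s),
# given the quoted 5.5 ms London-Paris round-trip time.
rtt_ms = 5.5
counting_seconds = 2.0
round_trips = counting_seconds * 1000 / rtt_ms
print(f"{round_trips:.0f} round trips")  # ~364, hence "over 360 times"
```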
EXA Infrastructure’s capacity on that cable comes as it builds up its own European footprint. EXA was created in September 2021 after the acquisition by private equity firm I Squared Capital of GTT Communications’ Europe-wide, subsea and North American network infrastructure and data centre assets. It claims to have invested €190 million in adding over 7,000 kilometres of network across Europe in its first half-year, with a promised additional capex outlay of up to €150 million before its September anniversary. (Via other portfolio companies I Squared Capital says it has committed over $3 billion to over 120,000 route kilometres of fibre globally…)
It was the second “complex carve-out of fibre and data centre assets from an integrated telecom company to an independent, carrier-neutral infrastructure platform that we have completed,” Gautam Bhandari, Managing Partner at I Squared Capital, said at EXA Infrastructure’s launch last year. EXA — which owns 112,000 kilometres of fibre network across 32 countries via a network linking 300 cities — rapidly landed a marquee customer in London Stock Exchange Group (LSEG), for which it is providing on-network dark fibre, wavelength and Ethernet services via LSEG’s new co-lo Isle of Dogs site (the LSEG migration cutover is October 15).
EXA Infrastructure: Companies are reinventing themselves around information utilisation
To Crosslake, the new London to Paris fibre optic cable came none too soon.
“Current subsea systems are all nearing their commercial and technical maximum lifespan of approximately 25 years… all subsea systems between the UK and Western Europe were deployed in and around 1999/2000. There is a growing risk the UK could become digitally isolated unless a new generation of subsea fibre systems is deployed,” it said at launch.
To EXA Infrastructure’s Andrew Haynes — who joined earlier this year from global IP backbone provider Arelion, where he was COO — such historical underinvestment continues to leave scars on the industry.
“I would say that — because of the nature of the legacy investments that were made years ago [during] years of capital constraint — that institutional memory about being poor all the way through the [market] lows drives a lot of behaviour. A lot of the execs in this industry cannot help but reach back to [the days of] ‘how did I manage the company when we only had three pennies to rub together?'” he says wryly.
“The consequences of that are internal management setups and paradigms where [as an industry] we’ve historically just not really [been] good at leaning into customer demand and going, ‘how can I help? I’ve got a solution!’ We’re sort of better as an industry at going, ‘oh, let me kind of think about that. I’m going to need to go and do a lot of modelling and then talk to the CFO 27 times before we can make him an offer…’”
The new London-Paris fibre optic link and EXA’s own European investment are a welcome sign that appetite for dramatically upgraded infrastructure has returned, he says, noting on a call with The Stack that amid digital transformation across sectors the market for such high-speed, low-latency connectivity has shifted dramatically in recent years: “[Earlier in my career] you could separate customers based on the industry that they were in. Industry was the segmentation: Financial services? A lot of bandwidth. Agriculture? Less so.
“But if you look at the way that companies are reinventing themselves around information utilisation, you can look at two different companies that are in agriculture or financial services or insurance or in music and they have wildly different approaches to what they’re doing with data, how they’re using data, and therefore the intensity with which they need the infrastructure that supports that,” he says, speaking on June 20.
“In retail for example, there are retailers with electronic Point-of-Sale in the shops, and that’s it. Then there are retailers that have blanketed the shop with video, with sensors, with music that’s optimised in real-time via data centres algorithmically to try and nudge people to purchase more stuff! When you look at a shop needing to connect to a data centre, what’s really happening is back in those — probably two or three — data centres, there is a wealth of complexity in the calculations that are happening between different data sets and different data centres, trying to calculate what should happen next. It’s in that area that we see this explosive growth: It’s that data centre-to-data centre; cloud-to-cloud; intercloud-to-intercloud; application layer on the most digitally-driven companies; this is where you’re seeing this hyper-growth of bandwidth between locations…”
The most recent Data Gravity Index DGx from Digital Realty – which measures the creation, aggregation and private exchange of enterprise data across 53 global metros – revealed that London and Paris are among the cities with the greatest intensity of “data gravity” along with Amsterdam, Dublin and Frankfurt.
Digital Realty describes data gravity as “the phenomenon of large amounts of data attracting more data and applications into the same place. Cities with strong, open data exchange with other cities often generate the greatest intensity of Data Gravity – bringing major strategic advantages to businesses in those cities.”
To Haynes, this is only set to continue as requirements for resilience and quality grow: “A lot of the growth of the internet over the last five to 10 years has been about fundamentally moving broadcast television to be consumed online. We’ve kind of gotten to the end of that, but there’s a large number of applications that are becoming much more quality-sensitive. It’s not just about gaming: it’s about health, it’s about self-driving cars; about the IoT. Everyone can get excited about IoT but IoT still ends up in fibre, very shortly after it hits the base station. So I think we’re going to see a shift from quantity being the end-all to quality really mattering a lot more; low packet loss, jitter, latency. All those things are attributes and facets of the IP layer and what’s possible in those layers…”
“One way to drive efficiency for all of these cloud-based applications is trying to make sure that there’s a good number of routes between two data centres,” he notes, emphasising that “if you’ve got two data centres, you need [them] to be resilient [for] each other; with two fibre paths you are going to have 50% loading on the one, 50% loading on the other: that’s not very efficient. If you’ve got three paths between them, then you’re only worried about being able to protect against a single failure. Now you can run 66%. If you’ve got four [paths] between them, now you can run 75% utilisation… so new routes between different cities, new routes between different data centres in the metro; all of these things are making that backbone inter-data centre connectivity more efficient, more effective and more resilient. With six different routes you can be protected against two cuts by running four plus two.
“So this is also about hardening the infrastructure.”
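Haynes’s utilisation figures all follow from one rule: with N diverse paths between two sites, and K simultaneous cuts to survive, the remaining N−K paths must absorb all the traffic, so safe per-path loading is capped at (N−K)/N. A minimal sketch of that arithmetic (the function name is illustrative, not anything from EXA):

```python
def max_utilisation(paths: int, tolerated_cuts: int = 1) -> float:
    """Highest safe per-path loading such that, after `tolerated_cuts`
    path failures, the surviving paths can carry all of the traffic."""
    if not 0 < tolerated_cuts < paths:
        raise ValueError("must tolerate at least one cut and keep one path alive")
    return (paths - tolerated_cuts) / paths

print(max_utilisation(2))     # 0.5   -> "50% loading on the one, 50% on the other"
print(max_utilisation(3))     # ~0.67 -> "now you can run 66%"
print(max_utilisation(4))     # 0.75  -> "you can run 75% utilisation"
print(max_utilisation(6, 2))  # ~0.67 -> six routes, "four plus two"
```

Each extra diverse route raises the ceiling, which is why new city-to-city and metro routes make the backbone both more efficient and more resilient at once.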