A major fire that broke out at an OVHcloud data centre complex in Strasbourg, France, at 00:47 on March 10, 2021 completely destroyed its SBG2 data centre and severely damaged its SBG1 facility.
The complex is home to four OVHcloud co-location data centres.
After urging customers to activate their Disaster Recovery Plans in the early hours, OVHcloud told customers that firefighters had “immediately intervened to protect our teams and prevent the spread of the fire.
“By 2:54 am they isolated the site and closed off its perimeter. By 4:09 am, the fire had destroyed SBG2 and continued to present risks to the nearby data centres until the fire brigade brought the fire under control. From 5:30 am, the site has been unavailable to our teams for obvious security reasons.”
OVHcloud’s founder and chairman Octave Klaba said the plan for the next one to two weeks involved:
- Rebuilding the 20kV supply for SBG3
- Rebuilding the 240V supply in SBG1/SBG4
- Verifying the DWDM equipment, routers and switches in network room A (SBG1); checking the Paris/Frankfurt fibres
- Rebuilding network room B (in SBG5); checking the Paris/Frankfurt fibres
He added on Twitter: “We plan to restart SBG1+SBG4+ the network by Monday March 15 and SBG3 by Friday March 19. In RBX+GRA we have the stock of new servers, pcc, pci ready to be delivered for all the impacted customers. Of course for free. We will add 10K servers in the next 3-4 weeks.”
Among the many legitimate and inconvenienced businesses using the data centre there was — as there is with most cloud providers — a not inconsiderable contingent of cybercriminals using its hosting services.
Kaspersky’s Costin Raiu noted on Twitter: “Out of the 140 known C2 servers we are tracking at OVH that are used by APT and sophisticated crime groups, approximately 64% are still online.
“The affected 36% include several APTs: Charming Kitten, APT39, Bahamut and OceanLotus.”
It was not immediately clear how the OVHcloud fire started.
A Telstra data centre in London’s Isle of Dogs also caught fire, in August 2020. That blaze was triggered by a faulty uninterruptible power supply (UPS) — the same issue behind a 14-hour outage at Equinix’s LD8 data centre on August 18, 2020.
In a less destructive incident, an AWS data centre in Tokyo suffered a cooling failure in 2019, resulting in the loss of multiple servers. Racks of servers started overheating, the company explained in a summary of the ensuing outage, after a control system failure caused “multiple, redundant cooling systems to fail in parts of the affected Availability Zone… A small number of instances and volumes were hosted on hardware which was adversely affected by the loss of power and excessive heat. It took longer to recover these instances and volumes and some needed to be retired as a result of failures to the underlying hardware.”