Kronos outage latest: individual restoration timelines landing; company vows more cold storage backups
The attackers who crippled widely used applications from global HR software company Kronos disabled the company’s “ability to communicate with our back-up environments”, owner UKG has confirmed – as the company continues to work on restoring customer data after regaining access to its backups.
The company has promised that “between January 3 and January 7, we can give you a better sense of your own individual restoration timeline. You do not need to log a case or check in with anyone to find out if we know the specific date for your restoration yet, because we will proactively reach out to you…”
Multiple Kronos platforms have been unavailable since December 11. The outage has left millions of users at tens of thousands of customers unable to check pay, arrange rotas, or request paid leave.
The issue has bedevilled IT teams globally, who have been forced to spend early 2022 supporting their companies with Excel-based workarounds provided by UKG and handling related HR/payroll issues.
In the US public sector alone, the New York Metropolitan Transportation Authority, the City of Cleveland, the state of West Virginia, the Oregon Department of Transportation, the University of California system, and Honolulu’s EMS and Board of Water Supply, along with scores of smaller local authorities, have been affected.
Kronos outage latest: Data exfiltrated
The company’s private cloud-based applications were hit in the ransomware attack, with data centres in the US, Frankfurt, and Amsterdam all affected – as reported at the time by The Stack here.
The company had touted a robust backup policy in whitepapers for its private cloud. (“All database backups are replicated via secure transmissions to a secondary UKG Private Cloud environment in an alternate data center. Backups are retained for the prior 28 days. UKG conducts formal tests on a quarterly basis to validate that the backup infrastructure is functioning correctly and that the data can be restored.”)
Given these previous claims, many customers have been asking why restoration is taking so long. Pressed on the question, the company said that it “employs a variety of redundant systems and disaster recovery protocols. In addition to several redundant data centers, UKG Kronos Private Cloud environments are backed up on a weekly basis, as well as on a daily basis with the delta from the previous day.”
Kronos added in a Q&A last updated January 3: “That backup data is stored in a different environment from the Kronos Private Cloud production environments, with a different architecture than the production environments. The threat actor responsible for this attack disabled not only the Kronos Private Cloud production environments, but also disabled UKG’s ability to communicate with our back-up environments.”
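UKG has not published technical details of its backup scheme, but a weekly full backup plus daily deltas is a standard pattern, and it explains why restoration is not instantaneous: recovering a given day’s data means locating the most recent weekly full backup and then replaying every daily delta taken since. A minimal sketch, assuming Sunday full backups (the weekday is an assumption, not something UKG has disclosed):

```python
from datetime import date, timedelta

def restore_chain(target: date, full_backup_weekday: int = 6):
    """Return the backup set needed to restore data as of `target`:
    the most recent weekly full backup, then each daily delta since.
    Illustrative only -- UKG has not published its actual scheme;
    `full_backup_weekday` (6 = Sunday) is an assumed convention."""
    days_since_full = (target.weekday() - full_backup_weekday) % 7
    full = target - timedelta(days=days_since_full)
    chain = [("full", full)]  # start from the weekly full backup
    for i in range(1, days_since_full + 1):
        chain.append(("delta", full + timedelta(days=i)))  # replay deltas in order
    return chain

# Restoring a Wednesday requires the Sunday full plus three daily deltas:
print(restore_chain(date(2021, 12, 15)))
```

The longer the chain of deltas, the longer each restore takes – and every backup in the chain must first be scanned for malware, as UKG says it is doing.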
What, precisely, happened here from an IT perspective with regard to the backup architecture and the attack on it? And what could have been done to avoid it?
Pop us an email if you know more.
Kronos added in its FAQ, last updated on January 3, 2022: “We have restored the ability to communicate with our back-up environments and as of December 25, we are in the process of regaining full access to the datastores that contain the back-ups. This is an important step in the full restoration process.
“As we regain access to the back-ups, we will perform scans for malware and other vulnerabilities.”
Asked to comment on what had happened with access to the backups, one sysadmin told The Stack: “I’ve just had a read of their FAQ and I’m not sure there’s a good answer that’s not meaningless speculation.
“I did wonder if it might be something like VPN tunnel keys deleted or appliances damaged but it seems unlikely this would have taken so long to fix. It’s a pretty good idea to have some sort of offline/immutable backup that can’t be easily accessed by anyone, and it could just be that their easily accessed/recovered backups were trashed and access to the offline ones is much slower or requires some sort of convoluted manual process (like decryption key in a safe plus physical data centre access, or restoration from tape…)”
Attackers increasingly target backups specifically, in order to gain more leverage in a ransomware attack.
As the UK’s NCSC pointed out in 2019: “The NCSC has seen numerous incidents where ransomware has not only encrypted the original data on-disk, but also the connected USB and network storage drives holding data backups. Incidents involving ransomware have also compromised connected cloud storage locations…”
The NCSC has an excellent guide, “offline backups in an online world” available here.
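The protection the sysadmin and the NCSC describe boils down to a write-once, read-many (WORM) property: once a backup is written, nothing with network access – legitimate admin or ransomware actor alike – can overwrite or delete it until a retention period expires. Commercial “immutable backup” features (object-lock style storage, for example) provide this; as a minimal illustrative sketch of the idea, not any vendor’s implementation:

```python
import time

class WormStore:
    """Toy write-once, read-many (WORM) backup store: objects cannot be
    overwritten or deleted until their retention period has passed.
    Illustrative sketch of the immutability property only."""

    def __init__(self):
        self._objects = {}  # key -> (data, locked_until_epoch)

    def put(self, key, data, retention_seconds):
        if key in self._objects:
            raise PermissionError(f"{key} already exists and is immutable")
        self._objects[key] = (data, time.time() + retention_seconds)

    def get(self, key):
        return self._objects[key][0]

    def delete(self, key):
        _, locked_until = self._objects[key]
        if time.time() < locked_until:
            raise PermissionError(f"{key} is locked until retention expires")
        del self._objects[key]

store = WormStore()
# A 28-day retention window, matching the period UKG's whitepaper describes:
store.put("backup-2021-12-11.tar", b"...", retention_seconds=28 * 86400)
try:
    store.delete("backup-2021-12-11.tar")  # an attacker's deletion attempt
except PermissionError as e:
    print("delete refused:", e)
```

The trade-off the sysadmin alludes to is real: the harder a backup is to delete, the slower and more manual it tends to be to restore from.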
Some customer data was stolen in the attack, owner UKG admitted. It has notified authorities in Australia, Belgium, Brazil, Canada, Hong Kong, India, Singapore and New Zealand of data loss affecting those countries’ residents.
The company said it has “found no evidence at this time to indicate that this incident is related to known Log4j vulnerabilities” in the wake of that critical and widely exploited vulnerability in the Java logging library.
It has contracted security firms Mandiant and West Monroe, which are “working in parallel to test and continually harden our environment”, and said it would be “expanding the scanning and monitoring program of these environments using current insights from this on-going investigation; supplementing our SOC monitoring with additional third-party managed service monitoring; further expanding the deployment of additional specific monitoring agents across the environment”; and “further expanding cold storage backups.”