A NASDAQ-listed firm left its Kubernetes clusters perilously exposed. 1,000+ have fallen into the same trap. Here’s why

Google says "we've taken several steps to reduce the risk of users making authorization errors with the Kubernetes built-in users and groups, including..."

A NASDAQ-listed company left its Google Kubernetes Engine (GKE) clusters publicly exposed, letting attackers access an environment that included AWS credentials (embedded in a bash script) and admin credentials for RabbitMQ, Elastic, an authentication server and internal systems.

This started with a potentially hugely damaging GKE “loophole” or configuration error that has also exposed thousands of others. It stems from a misunderstanding of the system:authenticated group in GKE, which, despite the name, includes not only verified corporate identities but any Google-authenticated account; yes, including external users.

The blunder was identified and rectified with the help of researchers from cloud security specialist Orca Security, who say the problem is not just end-user misunderstanding but a problematic configuration and user experience issue on Google’s side, which they have dubbed “Sys:All.”

Google Kubernetes Engine configuration

Orca Security said, in brief, that:

  • "In the context of GKE, the system:authenticated group includes any user with a valid Google account. A discovery that increases the attack surface around Google Kubernetes Engine clusters.
  • "The group is bound to several unharmful API discovery roles by default, but it can accidentally get bound to other permissive roles because administrators may mistakenly believe this is safe... since this would be the case for similar Kubernetes groups in AWS and Azure.

How Russian spooks hacked Microsoft, the gap in its “morally indefensible” response, and what CISOs can learn from the attack

  • "Any external attacker can utilize this misconfiguration by using their own Google Oauth 2.0 bearer token for reconnaissance and exploitation, without leaving an identified trail. 
  • "Although this is intended behavior, Google did block the binding of the system:authenticated group to the cluster-admin role in newer GKE versions (version 1.28 and up). However… this still leaves many other roles and permissions that can be assigned to the group.

Orca has shared more detail about the attack chain here. A minimal sketch of the kind of binding at issue follows.
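To make the risk concrete, the snippet below is a minimal sketch, using the official Python kubernetes client, of the class of binding Orca is warning about: a ClusterRoleBinding that grants a permissive role to system:authenticated. The binding name is hypothetical, and cluster-admin is shown as the worst case; per Orca, bindings to many less obvious permissive roles are just as much of a problem.

```python
# Minimal sketch of a "Sys:All"-style misconfiguration (do NOT apply this).
# In GKE, Group "system:authenticated" covers ANY valid Google account,
# not just identities in your organisation.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

risky_binding = client.V1ClusterRoleBinding(
    metadata=client.V1ObjectMeta(name="demo-risky-binding"),  # hypothetical name
    role_ref=client.V1RoleRef(
        api_group="rbac.authorization.k8s.io",
        kind="ClusterRole",
        name="cluster-admin",  # GKE 1.28+ blocks this case; other permissive roles can still be bound
    ),
    subjects=[
        client.RbacV1Subject(  # named V1Subject in older client releases
            api_group="rbac.authorization.k8s.io",
            kind="Group",
            name="system:authenticated",
        )
    ],
)
rbac.create_cluster_role_binding(risky_binding)
```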

GKE bindings: Follow the instructions...

Google has since published a security bulletin confirming that 1,300 customer clusters were exposed to risk due to “misconfigured bindings” and that 108 of them had exposed “high privileges” to potential attackers.

It said on January 26 that its “approach to authentication is to make authenticating to Google Cloud and GKE as simple and secure as possible without adding complex configuration steps. Authentication just tells us who the user is; Authorization is where access is determined. So the system:authenticated group in GKE that contains all users authenticated through Google's identity provider is working as intended and functions in the same way as the IAM allAuthenticatedUsers identifier…”

In a security advisory, Google added that “with this in mind we've taken several steps to reduce the risk of users making authorization errors with the Kubernetes built-in users and groups, including system:anonymous, system:authenticated, and system:unauthenticated. All of these users/groups represent a risk to the cluster if granted permissions.”

Since Orca flagged how widespread misconfigurations were, Google said that "to protect users from accidental authorization errors with these system users/groups, we have:

  • "By default blocked new bindings of the highly privileged ClusterRole cluster-admin to User system:anonymous, Group system:authenticated, or Group system:unauthenticated in GKE version 1.28.
  • "Built detection rules into Event Threat Detection (GKE_CONTROL_PLANE_CREATE_SENSITIVE_BINDING) as part of Security Command Center.
  • "Built configurable prevention rules into Policy Controller with K8sRestrictRoleBindings.
  • "Sent email notifications to all GKE users with bindings to these users/groups asking them to review their configuration.
  • "Built network authorization features and made recommendations to restrict network access to clusters as a first layer of defense.
  • "Raised awareness about this issue through a talk at Kubecon in November 2023.

Google added: “Clusters that apply authorized networks restrictions have a first layer of defense: they cannot be attacked directly from the Internet. But we still recommend removing these bindings for defense in depth and to guard against errors in network controls.  Note there are a number of cases where bindings to Kubernetes system users or groups are used intentionally: e.g. for kubeadm bootstrapping, the Rancher dashboard and Bitnami sealed secrets. We have confirmed with those software vendors that those bindings are working as intended. We are investigating ways we can further protect against user RBAC misconfiguration with these system users/groups through prevention and detection.”
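Google's point about authorized networks is worth spelling out. On a cluster whose endpoint is reachable from the internet, the reconnaissance step Orca describes needs nothing more exotic than an HTTPS request carrying an ordinary Google OAuth 2.0 access token. A hedged sketch follows; the endpoint and token are placeholders, and what the response reveals depends entirely on which roles are bound to system:authenticated.

```python
# Sketch of the reconnaissance step: presenting any Google account's OAuth 2.0
# access token as a Kubernetes bearer token to a publicly reachable endpoint.
# Placeholder endpoint/token; with only GKE's default discovery bindings this
# returns little, but permissive bindings can expose far more.
import requests

CLUSTER_ENDPOINT = "https://<cluster-endpoint>"  # placeholder, not a real host
ACCESS_TOKEN = "<google-oauth2-access-token>"    # e.g. from: gcloud auth print-access-token

resp = requests.get(
    f"{CLUSTER_ENDPOINT}/api/v1/secrets",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    verify=False,  # GKE control planes present a cluster-specific CA
)
print(resp.status_code)
```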

James Maskelony, Senior Detection & Response Engineer at security firm Expel, told The Stack: “Accidental misconfiguration is the #1 problem plaguing Kubernetes environments, and the GKE/Gmail vulnerability is a good reminder to review the role bindings in your environment.

“With Kubernetes constantly growing in popularity and adoption, adversaries are scrambling for any and all weak spots to exploit and accidental misconfigurations are a siren’s call to them. Security teams should be on the lookout for users and groups like system:anonymous, system:unauthenticated, and (if you use GKE) system:authenticated. If any roles other than system:public-info-viewer are bound to those, you’ll want to ensure they’re necessary and then scope your environment to make sure attackers haven’t abused those permissions/roles.”
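Maskelony's check is straightforward to automate. The sketch below, again using the Python kubernetes client, flags any cluster role binding that grants one of the sensitive built-in users/groups something beyond system:public-info-viewer. The allowlist is an assumption to tune (GKE itself binds some harmless API discovery roles by default, per Orca), and namespaced RoleBindings deserve the same review.

```python
# Audit sketch: flag bindings granting risky built-in subjects anything beyond
# the benign system:public-info-viewer role. Extend ALLOWED_ROLES to cover the
# discovery roles your clusters legitimately bind by default.
from kubernetes import client, config

RISKY_SUBJECTS = {"system:anonymous", "system:authenticated", "system:unauthenticated"}
ALLOWED_ROLES = {"system:public-info-viewer"}  # assumed allowlist; tune per environment

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

for crb in rbac.list_cluster_role_binding().items:
    for subject in crb.subjects or []:
        if subject.name in RISKY_SUBJECTS and crb.role_ref.name not in ALLOWED_ROLES:
            print(f"REVIEW: {crb.metadata.name} binds {crb.role_ref.name} to {subject.name}")
```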

With Microsoft admitting that a poorly configured test tenant became the entry point for a major attack that saw the emails of its security leadership and cybersecurity team accessed this month, configuration and attention to detail are very much (as they always should be) the story of the month when it comes to security. There's a bumper sticker that reads: “I have two rules: 1) Don't sweat the small stuff. 2) It's all small stuff.” CISOs by now know all too well that, for them, the sticker needs the “don't” removed. Want to ensure you have a secure environment? Sweat all the small stuff, sweat all the small print.
