The tech industry will eventually need AI regulation, just don't make it "heavy-handed", and let's not rush into things. That, at least, appears to be the view of Zendesk Chief Technology Officer Adrian McDermott, speaking to The Stack during the SaaS company's annual Relate event in Las Vegas.
Additionally, with Zendesk revealing its AI-powered Resolution Platform at the event, hoping to increase the 4.6 billion resolutions processed by its customers annually, The Stack had questions on how this growth aligns with a commitment to "sustainable AI" made in 2023.
As generative AI drove a doubling of datacentre power requirements in the US, from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, and with much of that power still coming from fossil-fuel energy sources, it seems more AI tools could only mean a bigger impact on the environment.
However, as the author of Zendesk's sustainable AI strategy, Sustainability Director Shengyuan Su, was let go from the company just a month or so before Relate, The Stack put this issue, among others, to its CTO.
You’ve spoken in the past, and at this conference, about the fact that some people will always want to speak to a human operator rather than a chatbot, no matter their issue. Is Zendesk hoping to change these consumers' minds on AI, or is that a fruitless endeavour?
I think that high-quality service changes people's minds, right? I think about parallels with the history of service, and there have probably been two or three inflection points. So, fifty, sixty or seventy years ago all service was in person, right?
Then the telephone changed that and allowed us to create the rituals of customer support - conversations, queues, ‘can I help you today?’ And as we invented those rituals, people became comfortable with them, and service actually ballooned in that moment, probably 5x.
Then along came the internet and that democratised it. You could actually have a website where everyone could have self-service, and Zendesk was one of the forces of democratisation of service in that way.
What we can take from those historical transformations is [that] because it was more convenient, people used it more. Economists talk about Jevons Paradox, which is that if you make a resource more efficient [then] people use more of it, and so I think we see the same thing in [customer] support.
The gauntlet is thrown down to support leaders where they actually have to build great experiences and people will want to use them, and I think that's a measure of success, not failure.
Zendesk’s CLO Shana Simmons told media at the Relate event the company wasn’t a “passive” participant in the AI regulation discussion. Where would you like to see the regulatory environment go on AI?
I think it may be that we do eventually need some kind of regulation, but we have to be very careful. If you go too fast with regulation you’ll probably be building rules and structures for the wrong thing. The current administration in the US actually just rolled back some regulation, and it was probably impractical regulation, so that was good.
From Zendesk’s point of view, we're really trying to influence what responsible AI looks like. And I am a believer in that, in terms of how you use people's data, and how you train your models. The second thing that I'm super interested in is something that could be akin to cyber risk management.
[For example], you might want to know that a company Zendesk works with has a SOC 2 certification, etc. So it could be that we establish standards that are voluntary and customers and companies begin to say, “I have tested and validated that my AI cannot be easily jailbroken and will not do these things”.
I think the landscape will evolve. I’m against heavy-handed regulation because it blocks innovation, but I am for helping people understand whom they can trust and whom they cannot.
Companies are looking for data protection tools though, especially in the EU. Do you think vendors who don’t offer these tools will naturally lose out on business, or will regulators need to step in at some point?
History tells us that eventually a standard becomes not an option, or a luxury, or a differentiator; it becomes essential to do business. Companies and entities that we work with in the EU are not going to deal with someone who cannot be certified in that way [on data privacy]; they can't afford to take the risk.
So, I think eventually we [will] create these understandings, and I think those understandings are amazingly important because they give people confidence in the technology they're using or in the data that they're sharing.
Zendesk committed to a sustainable AI strategy in 2023 but has launched a growing number of AI products since then. How much does sustainability still play into the decisions Zendesk makes?
Zendesk is a company that produces a climate impact statement, and we analyse what and where we're using things [that have an environmental impact]. Currently, training large language models is, from a climate point of view, a large source of carbon emissions. We know that when one is thinking about acquiring GPUs to do AI work one has to think about the power density at the same time.
If you look at what Zendesk is doing, I think about using AI APIs in the same way I think about using database APIs. We don't have nation-state level compute requirements, right? We see almost five billion AI inquiries a year, [but] that's web scale. I think there are uses of AI that are going to be massive consumers of energy, but the inference for a customer service implementation [is different].
[Zendesk’s AI products] may also offset the need for other processing like, for example, for an extended conversation with a lot of back and forth where it's storing a lot of data in a database and coming back. There are all these different ways that that interaction with the user consumes energy, and I'm not too concerned that this way of consuming energy is going to be particularly different to the other way of consuming energy.
We are not folding proteins, we're not performing mass calculations for rocket science.
Is this view different to Zendesk's approach from a few years ago then?
No, not really. I think about just compute load and, unless you're doing large-scale model training, I don't think you need to think too differently about how your compute load is being serviced one way or another. The people that we buy compute from are basically charging us for the energy and amortizing the hardware. Whether that hardware is GPU or CPU doesn't much matter.
On the customer side, are you hearing that environmental sustainability is still important to them when it comes to AI use, despite changes in the regulatory environment in countries like the US?
It is still in the charter of investment funds and is still something that a lot of companies that we are selling to care about downstream. And it's something that we care about. So yes, there might be a cooling on regulation and a focus on shareholder value in the United States, but I think that hasn't changed the approach.