
AI development is moving at an unprecedented pace. In a few short years, generative AI has become an integral part of individual and organizational workflows. And as AI evolves, so do the global regulations that govern how businesses use customer data.
AI runs on data, and companies are constantly collecting it — including personally identifiable information (email address, date of birth, credit card number, etc.), purchase history, and customer service interactions — and using it to make decisions. Companies with an international footprint must comply with data protection requirements in each country and region they operate in or face substantial penalties. A brand selling to customers in Europe, North America, and South America, for example, must ensure that it is meeting the rules outlined by the GDPR in the EU, sector- and state-specific laws such as HIPAA and the CCPA in the U.S., and the LGPD in Brazil.
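One concrete way teams apply this in practice is to scrub personally identifiable information before customer data ever reaches an AI training or analytics pipeline. The sketch below is a hypothetical, simplified illustration of that idea (the patterns shown are illustrative examples, not production-grade PII detectors, and the `mask_pii` helper is an assumed name, not a real library):

```python
import re

# Hypothetical illustration: mask common PII fields before data enters
# an AI pipeline. These regexes are deliberately simplified examples;
# real compliance programs use vetted detection tools and legal review.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace any matched PII with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

record = "Contact jane@example.com, card 4111 1111 1111 1111, DOB 01/02/1990."
print(mask_pii(record))
```

Masking at ingestion, rather than after a model is already trained, reflects the "build compliance in from day one" approach described here: the sensitive values never enter the system in the first place.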
According to Deepti Kunupudi, former chief data and analytics officer and current advisory board member for multiple companies, leaders need to build global AI compliance into their products from the beginning, not as an afterthought. When they are proactive and intentional about compliance, they can improve their workflows, products, and business outcomes and avoid expensive, time-consuming rework.
“Compliance is at the crux of how you make decisions for your business,” says Kunupudi. “You have to understand how you’re using AI, what kind of data your AI is being trained on, and what the rules and regulations are in each country or state. That said, people often see AI compliance as an innovation roadblock — but it is actually an innovation enabler when you keep strategic design principles in mind. Building a system and then putting compliance on top of it doesn’t work. You need to build compliance in from day one.”

