IPG Kinesso’s Sheila Colclasure: Privacy Is ‘A Strategic Business Value’

Sheila Colclasure, global chief digital responsibility & public policy officer, Kinesso


Sheila Colclasure doesn’t believe in hiding in the crowd.

There’s no such thing as “compliance by obscurity,” said Colclasure, who serves as global chief digital responsibility and public policy officer at Kinesso, the martech unit within IPG.

But some companies are getting less-than-stellar advice on how to comply with privacy laws and be good corporate citizens.

In a meeting last year, a client’s outside counsel proposed a move that was risky from a privacy perspective, one that bypassed some of the compliance controls Colclasure would typically implement and advise clients to adopt.

The counsel noted that “everybody’s doing it, so there’s a low likelihood of getting caught.”

But, in Colclasure’s view, there’s nothing foolhardier than maintaining the status quo and simply hoping not to get dinged.

Also, regulators are more likely to accommodate businesses that make a good-faith effort at ethical data use and keep diligent records of everything they do, Colclasure said.

“If we try our best to do everything right and we document everything, then, if a regulator does come and look, you get some grace,” she said. “And I’d much rather be in that position.”

Colclasure caught up with AdExchanger as she was en route to a Privacy for America meeting in Washington, DC, earlier this week.

AdExchanger: You’ve been in the public policy and privacy field for more than 25 years, including over two decades with Acxiom before the IPG acquisition. What’s been the biggest change?

SHEILA COLCLASURE: More complexity, and it still feels like we’re playing catch-up. There’s been so much innovation, the pace of change is so rapid, and the pivot that the entire industry made during the pandemic means that everything has become connected.

I’ve long talked about privacy as a strategic business issue. Now it’s an operational resiliency issue. It’s a board event. Boards of directors are charged with ensuring that a company is fiscally sound, secure and operationally resilient – and privacy is a part of that.

You wrote a column recently where you refer to what you call “the F-word that really matters” – as in, fairness. How do you define data fairness, and how do we get there?

Fairness is something we have to agree on, and that’s what ethics are. We discuss, we debate and then we agree on what is fair and balanced.

There is sometimes a difference between what is legal – as in what is codified into law – and what is considered ethical. Some notions of fairness have been debated, agreed upon and codified into law, but innovation in technology, including advertising and marketing technology, has moved well beyond that construct.

Now we have to think about bias and discrimination. We have to agree on what is fair overall and not just on a one-to-one basis. Something might be legal, but is it fair? These are issues of justice we have to grapple with.

But will the online ad industry ever be able to successfully explain its value to regular people?

It’s a huge challenge, and part of it is because we’re not having a full discussion about all of the benefits, including the economics. We fund innovation and growth, but advertising is also the cross-subsidization of democracy and free speech.

The socioeconomically vulnerable among us get more benefits from advertising than those of us who are doing well enough to avoid paywalls. We’re talking about distribution and access to knowledge.

But we also have to be responsible, accountable and respectful, and we need to detect and prevent harm.

Speaking of, how are you thinking about the ethical use of AI?

It’s the next frontier, and there is a lot to consider.

What data is being used to train AI? How should that data use be governed? How do we ensure that the data is being used ethically for the benefit of people? How do we know when we’re dealing with synthetic content? How do we handle content moderation? Is content being curated fairly? How do we govern generative AI?

We’ve already seen generative AI models emerge that are politically incorrect, punitive and even mean.

Can AI be mean?

Absolutely. Mean, cringey, vicious. Interactions with some of these chatbots have gotten quite ugly.

People and brands can receive benefits, but you always have to start with fairness and making sure there’s no manipulation. We can’t ever be violative of people’s free will.

You were a 2022 Top Women honoree. What advice would you give to women who want to get into your field?

Privacy and ad tech are both exciting fields, and I’d encourage all young women to examine them. Give it a go – but find a mentor. Some of the most important work at this stage in my career is helping to bring others along.

This interview has been edited and condensed.
