The ChatGPT app. OpenAI announced at the end of March that it will test ads in Canada, Australia and New Zealand. Kiichiro Sato/The Associated Press
Miranda Bogen is director of the AI Governance Lab at the Center for Democracy & Technology, based in Washington.
When I first learned of OpenAI’s plans to add advertisements to ChatGPT, my heart sank, but I can’t say I was surprised. At the end of March, the company announced it would expand on its initial pilot to test ads in Canada, Australia and New Zealand, and hired Dave Dugan, a former Meta advertising executive, to lead ad sales, suggesting that momentum toward a broader rollout is growing.
Before taking my current role at the Center for Democracy & Technology, a Washington-based non-profit focused on civil rights and civil liberties, I worked at Meta as the company was investing in increasingly sophisticated ways to deliver ads to its billions of users, and the news of ads’ arrival in AI systems felt eerily familiar.
Like most frontier AI companies (and like social media companies before them), OpenAI faces mounting pressure to justify immense investments in computing infrastructure, energy production and other operating costs, with subscription revenue covering only a fraction. Tech company employees hold many different opinions about how their products should work, but at the end of the day, it's economic incentives that shape the tools.
We’ve seen this movie before – and the sequels, and the remakes. From what I can see, nothing is stopping any of these AI labs from following the playbook of platforms that came before them. Once the seal is broken, the lucrative revenue spigot will be hard for others to resist.
Looking at the business models frontier AI companies are pursuing, a clear pattern is emerging: Advertising is coming to AI. Meta has already announced plans to use conversations with its AI chatbot to power ad targeting across Facebook and Instagram. Google now displays ads in its AI-generated search summaries.
Here’s what we’re likely to see next.
In advertising, the name of the game is intent – whether people are poised to take an advertiser’s desired action. The stronger the intent, the higher the prices ads command. Searching the web for a hotel in Barcelona signals much stronger intent than casually flipping through a travel magazine.
Conversations with AI assistants are uniquely revealing. After a sustained exchange with a chatbot to map out a vacation to Spain, the chatbot knows not just that you’re planning a trip, but your budget, travel companions and which specific experiences appeal to you. And as AI tools integrate memory, these insights accumulate rapidly.
These deep signals mean that – as long as they don't completely destroy user trust – ads are likely to be highly effective, attracting more advertisers and kicking the business model into gear. The ads will seem relatively benign at first: paid promotions set apart from organic responses, with commitments that ads will be helpful to people. But if these first tentative moves avoid significant backlash, things likely won't stop there.
As more advertisers vie for space, the next challenge will be figuring out which ads to show to whom.
At Meta, this happened by predicting an ad’s relevance – how likely it is to result in the advertiser’s desired action. Since it’s tough to know for sure what people find useful, ad platforms rely on proxies: clicks, cart additions, purchases. Once some people act on some ads, developers can build AI models to predict who else might do the same, based on whether they share characteristics like demographics, interests or behaviour.
As revenue from basic ads fills the coffers, developers will start getting creative. OpenAI’s current chief executive officer of applications, Fidji Simo, pioneered creative ad formats in her previous job at Instacart, from sponsored products to last-minute checkout suggestions.
There’s little reason to doubt that playbook is about to get applied again. Chatbots are naturally interactive, and it won’t be long before the pull to make monetization feel organic overcomes existing hesitation.
Generative AI leaders have already floated the idea of affiliate marketing, where outputs remain organic but the company gets a kickback if the user makes a purchase based on ChatGPT’s recommendations. This theoretically maintains AI companies’ editorial discretion, but incentives could nudge platforms toward more frequently recommending the more lucrative options – or letting profit tip the scale between equally useful responses.
As AI tools are increasingly marketed as companions or friends, they risk taking on the dynamics of online influencers (or worse), raising questions about what they should disclose about financially motivated recommendations.
Affiliate marketing can also easily bleed into lead generation, the process of identifying potential customers and clients with the goal of driving sales, sometimes by intermediaries who sell data about prospects to businesses seeking clients. OpenAI stated in the announcement introducing its ad pilot that "we never sell your data to advertisers," but lead generation is different: platforms prompt users to voluntarily submit their information to advertisers seeking prospective customers for big-ticket items like loans or business software.
Lead generators collect lucrative commissions and are common in health care, legal services, staffing and higher education – but are also known for peddling harmful products like payday loans and for-profit universities.
If a user asks for advice on how to afford college, it would be simple for a chatbot to check whether they qualify for student loans, then offer to pass their information to solicit bids – including from lead generators – for lenders offering high interest rates. And it would be easy to justify that the AI was simply being helpful, and that nothing it recommended was influenced by ads.
Advertising isn’t the only problematic business model for AI. Hyper-personalized subscriptions can lead to addictive products and discriminatory outcomes. Doing business with governments can create dependencies that could compromise companies’ independence, if the risk of losing a sizable contract would hit the bottom line hard enough (or lead the government to make trouble for a company’s other business lines). Commerce features that blur assistance and sales raise conflicts of interest. Each introduces its own perverse incentives.
But OpenAI’s tentative embrace of advertising is clarifying. How the company rolls out these offerings, and whether it stays true to its current commitments, will speak volumes about the pressures on businesses to prioritize revenue over the interests of the people they claim to serve. We’ve seen with social media what happens if that pressure wins out. We know where it can lead.
The AI industry keeps insisting this time will be different, that they’ll build responsibly, that they’ve learned from other companies’ mistakes.
I worked inside one of those companies. I saw how good intentions crumble under revenue pressure. I argued against systems I knew would cause harm and watched them ship anyway.
So don’t take the promises at face value. Watch the business models. They have power that individuals – even CEOs – find nearly impossible to resist. They’ll tell you everything you need to know about whose interests these tools actually serve. General-purpose AI products are still in their relative infancy, and there’s still time to change course.