The Price of Politeness: How Saying ‘Please’ Is Burning Millions at OpenAI

  • Explore how the way you phrase AI prompts may be draining computing resources and racking up unexpected costs.
  • Discover what OpenAI’s recent findings mean for prompt design, user behaviour, and enterprise adoption of AI tools.

You’ve probably done it. We certainly have, more times than we can count. Started a prompt with “please”. Ended a task with “thank you.” Asked an AI to “kindly” help you rewrite a sentence.

It sounds natural. Courteous. Human.

But it’s also costing OpenAI millions.

In April 2025, Bloomberg reported that OpenAI has been tracking a growing issue across both enterprise and consumer usage: prompts are getting longer. Not because of complexity, but because of excessive formality. Internal logs showed users increasingly padding prompts with polite phrasing — language that’s unnecessary for machine understanding.

Why does that matter?

Because every character you send to ChatGPT becomes part of a token. Tokens consume compute. Compute costs money.
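To make that concrete, here is a minimal Python sketch using the open-source tiktoken tokenizer. The model name and the example prompts are illustrative assumptions, not figures from OpenAI, but the mechanism is the same: extra words become extra tokens.

```python
# pip install tiktoken
import tiktoken

# Load the tokenizer used by recent GPT models
# (model name chosen for illustration; requires a recent tiktoken release).
enc = tiktoken.encoding_for_model("gpt-4o")

prompts = [
    "Summarise this report.",
    "Hi! Could you please kindly summarise this report for me? Thank you so much!",
]

for text in prompts:
    tokens = enc.encode(text)
    print(f"{len(tokens):3d} tokens  |  {text}")
```

Both prompts ask for exactly the same thing, but every extra token in the second one is processed, and billed, on every request.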

According to OpenAI’s internal estimates:

  • Prompts with courteous or formal phrasing are longer on average.
  • Every additional 100 characters can increase GPU load substantially.
  • Across billions of requests, that padding adds an estimated cost running into the millions.

That’s not a rounding error. That’s operational overhead created by habit.

What is OpenAI’s take on this?

OpenAI has distributed revised guidelines for enterprise users.

Those guidelines emphasise brevity, clarity, and direct task framing. Instead of:

“Could you please kindly summarise this article briefly and concisely?”

The guide recommends:

“Summarise this article concisely.”

The shift isn’t about changing your tone. It’s about reducing waste.
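One way to act on that advice, sketched below on the assumption that your prompts pass through your own code before reaching the API, is a small preprocessor that trims common courtesy phrases. The phrase list is illustrative and would need tuning for real workflows.

```python
import re

# Illustrative list of filler phrases that add tokens without changing the task.
FILLER_PATTERNS = [
    r"\bcould you\b",
    r"\bplease\b",
    r"\bkindly\b",
    r"\bthank you\b",
    r"\bthanks\b",
]

def trim_prompt(prompt: str) -> str:
    """Remove common courtesy fillers and collapse the leftover whitespace."""
    for pattern in FILLER_PATTERNS:
        prompt = re.sub(pattern, "", prompt, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", prompt).strip()

print(trim_prompt("Could you please kindly summarise this article briefly and concisely?"))
# -> "summarise this article briefly and concisely?"
```

For one-off chat use this changes very little; across millions of templated requests, the trimmed tokens add up.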

Why You Should Care

If your company builds on GPT APIs, or trains employees to interact with AI tools, your phrasing matters.

More text means more tokens. More tokens mean more usage. More usage means higher costs.

Especially when scaled across:

  • Chatbot deployments
  • Customer support scripting
  • AI content workflows

The friendly language you encourage might be creating quiet cost inflation.
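A back-of-envelope calculation makes that quiet inflation concrete. The figures below (tokens saved per request, daily request volume, per-token price) are illustrative assumptions, not OpenAI’s numbers; substitute your own traffic and contract pricing.

```python
# Illustrative assumptions -- replace with your own traffic and contract pricing.
tokens_saved_per_request = 12      # e.g. dropping "could you please kindly ... thank you"
requests_per_day = 500_000         # a mid-sized chatbot deployment
price_per_million_tokens = 2.50    # assumed input-token price in USD

daily_savings = (tokens_saved_per_request * requests_per_day
                 / 1_000_000 * price_per_million_tokens)
print(f"~${daily_savings:,.2f} per day, ~${daily_savings * 365:,.0f} per year")
```

The per-request saving is tiny, which is exactly why it only becomes visible in aggregate, at OpenAI’s scale or in high-volume enterprise deployments.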

What You Can Do Now

Conduct a prompt audit. Look at how your team interacts with AI.

Ask yourself:

  • Are knowledge workers using filler language out of habit?
  • Are support scripts padded with “please” and “kindly”?
  • Are the prompts consistent with your usage goals?
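As a starting point for that audit, here is a minimal sketch. It assumes you can export recent prompts as a list of strings; the sample data and filler list are illustrative. It simply reports how often courtesy fillers appear and how long prompts run on average.

```python
# Minimal prompt-audit sketch: counts courtesy fillers and average prompt length.
FILLERS = ("please", "kindly", "thank you", "would you mind", "could you")

def audit(prompts):
    total_chars = sum(len(p) for p in prompts)
    filler_hits = sum(p.lower().count(f) for p in prompts for f in FILLERS)
    padded = sum(1 for p in prompts if any(f in p.lower() for f in FILLERS))
    print(f"Prompts audited:        {len(prompts)}")
    print(f"Average length (chars): {total_chars / len(prompts):.0f}")
    print(f"Filler phrases found:   {filler_hits}")
    print(f"Prompts with fillers:   {padded} ({100 * padded / len(prompts):.0f}%)")

# Illustrative sample -- in practice, load these from your own prompt logs.
audit([
    "Could you please summarise the attached ticket, thank you?",
    "Summarise the attached ticket.",
    "Kindly draft a polite reply to this customer complaint.",
])
```

Pair the output with your token-usage reporting and you have a first view of prompt length versus spend.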

Brevity doesn’t require rudeness. It requires focus.

And if you’re not tracking prompt length vs. token usage, you’re not tracking your full spend.

Etiquette vs Compute

Politeness is part of human interaction. AI doesn’t need it.

That doesn’t mean you shouldn’t teach courtesy, but in AI systems, courtesy has a computational cost. And right now, that cost is measurable.

If you’re running high-scale deployments, think of prompt structure as a performance issue. It affects speed, cost, and model response allocation.

OpenAI’s response shows that prompt wording, once treated as a purely user-side concern, is now an infrastructure issue.

The next time you prompt ChatGPT, ask yourself:

Are you being polite… or just expensive?
