
By Iryna Chernukha

Pricing strategy is one of those functions that every SaaS company thinks it has figured out until someone actually looks at the data. Deniz Akbasaran is one of the people who digs below the surface. A data and product professional based in Paris, she leads pricing and monetization strategy for an AI agent product. Before landing on pricing, she built LLM evaluation frameworks, owned LLM cost monitoring, and led data infrastructure across product and GTM teams.

We spoke with her about what years of discounting data revealed and what it actually takes to build a pricing strategy that doesn’t rely on a sales rep’s instincts in the last five minutes of a negotiation.

The audit nobody runs

Whether you’re a startup or a big corporation, concerns about giving away too much through discounting are common. But is this fear really the main issue when discussing pricing packages and discount management as sales tools?

Giving away too much is rarely the real problem. In my experience, the main issue with discounting is that nobody owns it. In most SaaS companies, there's an approval process: you can give a few percent freely while negotiating with a client; above a certain threshold, you check with your manager; above that, you go to the CFO. People regularly do that when the deal involves an important or long-standing client. That process exists, but it has nothing to do with strategy.

There’s no thought behind who should get discounts, at what stage, on which products, or why. Giving perks in the form of discounts is just reflexive. A deal is slipping, the rep cuts 10%, the customer signs, and everyone moves on. In the moment, everyone is happy, the KPI has been met, and the Excel reports look fine. What strikes me most is that nobody tracks what happened next, or whether there was a smarter way to use that discount in the first place.

When did you realize this problem was worth actually investigating?

As someone on the Decision Intelligence team, when I started leading Pricing Strategy, the first thing I did was pull two years of deals and discounting data and just look at it properly. Not just the totals: I wanted to understand the patterns. Who were we discounting? At what stage? On which plans? In which industries? Are high-revenue or low-revenue customers more frequently discounted?

Do customers buy more volume or less? I wanted to understand how our sales team was actually using discounts in practice, because I suspected the answer would be more surprising and revealing than anyone had acknowledged.
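The kind of audit she describes can be sketched in a few lines. A minimal Python sketch, assuming a hypothetical deal export; the plan names and prices are illustrative, not figures from the interview:

```python
from collections import defaultdict

# Hypothetical deal export: field names and values are illustrative.
deals = [
    {"plan": "Pro",        "list_price": 30_000,  "net_price": 24_000},
    {"plan": "Pro",        "list_price": 30_000,  "net_price": 27_000},
    {"plan": "Enterprise", "list_price": 100_000, "net_price": 80_000},
    {"plan": "Enterprise", "list_price": 120_000, "net_price": 96_000},
    {"plan": "Starter",    "list_price": 5_000,   "net_price": 5_000},
    {"plan": "Starter",    "list_price": 5_000,   "net_price": 4_500},
]

# The audit question: where do discounts cluster?
by_plan = defaultdict(list)
for d in deals:
    by_plan[d["plan"]].append(1 - d["net_price"] / d["list_price"])

avg_discount = {plan: sum(r) / len(r) for plan, r in by_plan.items()}
for plan, rate in sorted(avg_discount.items(), key=lambda kv: -kv[1]):
    print(f"{plan}: {rate:.0%} average discount")
```

The same grouping extends naturally to deal stage, industry, or customer revenue band; the point is simply to turn a pile of closed deals into a clustering question.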

What the data actually showed 

Based on your experience, what are the most common structural issues with SaaS pricing?

When you see heavy discounting concentrated on specific plans, that alone can tell you a lot. If you’re discounting more for high-revenue or high-volume customers, it might signal pricing pressure at the high end, or that your packaging doesn’t offer enough economies of scale. On the other hand, if discounts are clustering around smaller plans, it could mean you’re struggling to capture the cost-conscious segment and might be priced above competitors for that market. Each pattern tells a different story about where your pricing model might be misaligned, and in every case, the discount simply masks a flaw in the model.

The most common structural culprit is flat-fee pricing, which misses economies of scale entirely. When a buyer is purchasing significant volume from you, they expect the unit price to reflect that. If it doesn’t, the sales team fills the gap with ad-hoc discounts: informally, deal by deal, without it ever showing up as a pricing problem. 

Another common issue is missing plan thresholds. If a common deal size in your market is 20K and your nearest plan is 30K, your sales team will close those accounts on the 30K plan and immediately discount back to 20K. That’s not generosity. It means you have a packaging gap, and it should be addressed through pricing strategy, not discounts.
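The 20K/30K pattern is easy to spot in data: if closed prices on one plan cluster tightly around a common net value well below list, that value is the missing tier. A toy sketch with synthetic numbers:

```python
# Illustrative check: if many deals on one plan discount back to a common
# net price, that price is probably a missing tier, not a discount story.
PLAN_PRICE = 30_000  # nearest existing plan (assumed)
net_prices = [20_000, 21_000, 19_500, 20_000, 20_500]  # synthetic closed deals

median_net = sorted(net_prices)[len(net_prices) // 2]
implied_discount = 1 - median_net / PLAN_PRICE

print(f"median net price: {median_net}, implied discount: {implied_discount:.0%}")
# A tight cluster around 20K with a ~33% implied discount points to a
# packaging gap: the market wants a ~20K plan that doesn't exist.
```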


You work specifically on AI agent products. Does pricing get harder there?

Significantly harder. With traditional SaaS, you have predictable cost structures — when discounting erodes margins, you can model it. With AI agent products, you face two additional problems that pure SaaS doesn’t. 

First, variable LLM costs: your cost per usage isn’t fixed, let alone your cost per customer, so margins are genuinely uncertain. Second, uncertain outcomes: resolution rates depend heavily on what the customer feeds into the agent, from their instructions to their data quality and setup. That makes value-based pricing much harder to anchor. Pricing is already hard. Pricing AI agents is harder because there isn’t yet a proven playbook. You’re figuring it out as you go.

What your discounts are actually telling you

Can discounting data become diagnostic data?

That’s exactly what happens when you look at it properly. If you look at where your discounts are clustering (which customer segments, which plan tiers, which deal sizes) it tells you what your pricing model is getting wrong. Most companies view discounting as revenue leakage to minimize. I’d argue you should look at it first as a map of where your pricing is failing.

Once you have that diagnosis, how do you think about what to do with it?

Everything ties back to a revenue equation: price times volume times number of customers. If you’re giving up price through discounting, you need to gain enough on volume or customer count to compensate; otherwise you’re just shrinking. So every discounting decision should connect back to that equation. Are we discounting to expand our base? To increase volume per customer? To reduce churn on high-value accounts? If you can’t answer that, you’re not discounting with intention.
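That breakeven logic can be made concrete. If price drops by a discount d, the product of volume and customer count must grow by 1/(1-d) - 1 just to keep revenue flat. A small sketch:

```python
# Revenue = price x volume x customers. A discount only pays off if the
# volume/customer gain at least offsets the price cut. The breakeven
# growth on (volume x customers) for a discount d is 1 / (1 - d) - 1.

def breakeven_growth(discount: float) -> float:
    """Required growth in volume x customers to keep revenue flat."""
    return 1 / (1 - discount) - 1

for d in (0.10, 0.20, 0.30):
    print(f"{d:.0%} discount -> need {breakeven_growth(d):.1%} more volume x customers")
```

Note the asymmetry: a 10% discount needs about an 11% gain, but a 30% discount needs almost 43%, which is why deep ad-hoc discounts rarely pay for themselves.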

The commission problem nobody warns you about

Are you currently running experiments to test different approaches?

Yes, and this is where it gets complicated. You can’t just decide on a discounting strategy and roll it out. You have to test it, because you genuinely don’t know which approach will perform. But how you run experiments really depends on which part of the funnel you’re focused on. Top of funnel? Work with the growth team. Lower funnel, sales-led deals? Loop in the sales team. Self-serve? That’s more of a product and lifecycle collaboration, since there’s no salesperson involved and everything happens inside the product itself.

The challenge is that B2B SaaS sales-led experiments are particularly hard to make statistically significant due to low deal volume — you simply don’t have the sample sizes that a product or growth team might. The right frame is to match rigor to stakes: for low-stakes, easily reversible decisions, one team can plan quickly, ship, and see what happens. For high-stakes decisions where the goal is company-wide rollout, you need a properly designed test that can collect strong enough signals to make a confident decision.
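To see why low deal volume bites, here is a rough per-arm sample-size estimate using the standard normal approximation for comparing two proportions (alpha = 0.05 two-sided, power = 0.80). The close rates are illustrative assumptions, not figures from the interview:

```python
from math import ceil

def deals_per_arm(p_control: float, p_treatment: float) -> int:
    """Approximate deals needed per arm to detect a close-rate difference
    (normal approximation, alpha = 0.05 two-sided, power = 0.80)."""
    z = 1.96 + 0.84  # z_{alpha/2} + z_{beta}
    var = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return ceil(z**2 * var / (p_control - p_treatment) ** 2)

# Detecting even a large lift, 20% -> 30% close rate, takes hundreds of
# deals per arm -- more than many sales-led pipelines see in a quarter.
print(deals_per_arm(0.20, 0.30))
```

This is why matching rigor to stakes matters: a team that insists on this bar for every test will run very few tests.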

What’s the specific friction you run into?

Two things. The first is the broader question of experiment rigor. Stakeholders will often ask “is this statistically significant?”, which can push teams toward over-engineering the design upfront rather than running fast, lightweight tests. The answer is to match the rigor to the stakes: not every experiment needs to be airtight before you learn something useful.

The second is the incentive alignment problem, and this is common in SaaS experimentation. Let’s say I want to test a more aggressive discounting strategy: offering deeper discounts to see whether it closes more deals and whether the volume gain outweighs the price reduction. To run that experiment properly, sales reps need to follow the protocol even when they’d normally close without offering the discount at all. But if they’re already closing, there’s no natural incentive to offer a deeper discount. It’s their commission on the line, and who would want to risk their income? So your experiment can break down because the people running it are rationally acting against it.

Sales-led experiments require strong buy-in from sales reps. And when the experiment goes against team incentives, leadership enforcement isn’t just helpful; without it, your data won’t be clean and your experiment won’t reach the sample size you were hoping for.

Making the case upstairs 

How do you make the case to leadership for running these experiments in the first place?

Always with numbers, and always with two scenarios, optimistic and pessimistic. The optimistic scenario shows the potential ARR uplift if the hypothesis is right. But the pessimistic scenario is actually the more important one. If you build the worst-case outcome and it’s still acceptable (still moves the revenue equation in a useful direction, still teaches you something) then the argument for running the experiment becomes very hard to refuse. You’re not asking them to bet on a best case. You’re showing them that even if you’re wrong, the downside is bounded. That’s a much easier conversation.
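The two-scenario pitch can be sketched as a simple model. All the numbers below are assumed for illustration, not from the interview:

```python
# Bounded-downside framing for a deeper-discount test (all figures assumed):
# model optimistic and pessimistic ARR against the current baseline.
BASE_DEALS, BASE_PRICE, BASE_CLOSE = 100, 30_000, 0.25
DISCOUNT = 0.15

def arr(close_rate: float, price: float, deals: int = BASE_DEALS) -> float:
    return deals * close_rate * price

baseline    = arr(BASE_CLOSE, BASE_PRICE)
optimistic  = arr(0.35, BASE_PRICE * (1 - DISCOUNT))  # lift outweighs the cut
pessimistic = arr(0.25, BASE_PRICE * (1 - DISCOUNT))  # no lift: pure price loss

print(f"baseline:    {baseline:,.0f}")
print(f"optimistic:  {optimistic:,.0f} ({optimistic - baseline:+,.0f})")
print(f"pessimistic: {pessimistic:,.0f} ({pessimistic - baseline:+,.0f})")
```

The pessimistic line is the argument: if the worst case is a bounded, survivable loss against a meaningful potential upside, the experiment is easy to approve.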

And for those who challenge the data with instinct, that’s legitimate, and I take it seriously. If someone says my optimistic scenario is too aggressive, I’ll work through the pessimistic case with them explicitly. If the pessimistic scenario proves them right, then maybe they are right. The goal isn’t to win the argument with data but to make a better decision. Sometimes a gut feeling is pointing at something the model hasn’t captured yet.

Discounting can shift from a concession to an offer. What does that actually look like in practice?

When a sales rep offers an ad-hoc discount, they’re giving up something to close a deal. When a discounting strategy has been tested and proven, the same discount becomes something you offer with confidence because you know it works for the customer and for the company. That changes the dynamic of the sales conversation entirely. It’s no longer a negotiation tactic but a product of the pricing strategy. 

And for the sales team, it’s actually easier: they’re not improvising under pressure; they have something real to offer. A good pricing strategy also shortens the sales cycle, because the conversation stops being about what you’re willing to give up and starts being about what you’re offering and why it makes sense. The goal is to get to a place where discounting is deliberate enough that it stops feeling like leakage and starts feeling like leverage.
