I run a small SaaS for independent landscapers that helps them schedule jobs and invoice clients. We’ve been charging a flat $29/month per user since we launched, and while we’re growing, I have this nagging feeling we’re leaving money on the table or maybe even pricing out some smaller operations. A founder I met at a conference mentioned using a specific pricing optimization tool to run experiments, but honestly, the whole idea of A/B testing prices makes me a bit uneasy. I worry it could come off as sneaky or confuse our current customers if they find out someone else is paying less. I’m just not sure if our volume is even high enough for that data to be meaningful, or if I should just trust my gut and pick a new tiered structure.
You’re not alone in worrying about how your pricing lands with real customers. A staged approach helps you learn without burning trust: grandfather current users, test a clearly separated new tier, and watch signups and usage before you go any bigger. Would a two-tier test for new signups feel like a fair middle ground?
Have you considered carving out a revenue ladder instead of a single price point? Keep existing customers on $29, offer a lighter tier for smaller shops, and use neutral messaging so current users aren’t surprised. Some operators explore price optimization software to model elasticity and forecast revenue impact before you roll anything out. Would that approach work in your market?
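If you want to gut-check the math before buying any software, a constant-elasticity demand model is a common back-of-envelope approach. This is just a sketch; the subscriber count and the elasticity value below are made-up assumptions, not your data:

```python
# Back-of-envelope revenue forecast under constant price elasticity:
# demand scales as Q = Q0 * (P / P0) ** elasticity.
def forecast_revenue(current_price, current_subs, elasticity, new_price):
    """Project monthly revenue at new_price, given current price,
    current subscriber count, and an assumed price elasticity of
    demand (negative for normal goods)."""
    projected_subs = current_subs * (new_price / current_price) ** elasticity
    return new_price * projected_subs

# e.g. 200 subscribers at $29/month, assumed elasticity of -1.5,
# exploring a $19 lighter tier (all illustrative numbers):
projected = forecast_revenue(29, 200, -1.5, 19)
```

The elasticity number is the whole game here, and you won't know yours until you test, which is why tools (or a small pilot) matter. But even a rough model like this tells you how wrong you can afford to be about it.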
From a boots-on-the-ground view, the hardest part is isolating signal from noise when your base size is small. Test with new signups or a clearly separated beta group, and track conversion, retention, and observed willingness to pay. Do you have a rough target cohort size in mind?
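For a rough sense of whether your volume is enough, the standard two-proportion sample-size formula gives a ballpark. The baseline and target conversion rates below are illustrative assumptions, not a claim about your funnel:

```python
import math

# Approximate signups needed per price variant to detect a difference
# between two conversion rates at ~95% confidence and ~80% power
# (z values 1.96 and 0.84 respectively).
def sample_size_per_variant(p_base, p_test, z_alpha=1.96, z_beta=0.84):
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_base - p_test) ** 2)

# e.g. hoping to detect a lift from 4% to 6% trial-to-paid conversion:
n = sample_size_per_variant(0.04, 0.06)  # lands in the high hundreds to low thousands per variant
```

If that number dwarfs your monthly signups, that's your answer: lean on a longer test window, a bigger expected price difference, or qualitative willingness-to-pay interviews instead of a pure A/B split.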
Grandfathering keeps your existing customers happy while you explore new pricing. Make the new tier’s value obvious and aligned with their needs, and keep communications transparent so nobody feels bait-and-switched. Got any existing customers who might be sensitive to change?
Keep it lean: a small pilot with a defined geographic area or a limited feature set can show you what changes actually move the needle. Measure ARPU, churn, and signups, and be ready to pivot quickly. What metric would you track first?
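The pilot metrics themselves are simple ratios; here's a minimal sketch of the month-end math, with placeholder numbers rather than real data:

```python
# One month of pilot metrics (all inputs below are placeholders).
def pilot_metrics(revenue, active_users, churned, start_of_month_users):
    """Return (ARPU, monthly churn rate) for one pilot month."""
    arpu = revenue / active_users                 # average revenue per user
    churn_rate = churned / start_of_month_users   # share of starters lost
    return arpu, churn_rate

# e.g. $1,450 MRR across 50 active users; 2 of 48 month-start users churned:
arpu, churn = pilot_metrics(1450.0, 50, 2, 48)  # -> (29.0, ~0.042)
```

With a small base, even one or two churns swing the rate a lot, so watch the raw counts alongside the percentages.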
As you weigh the decision, sketch a measured test plan first and, if you go broader, use a pricing optimization tool to quantify elasticity before you commit publicly. How big a pilot would you run to balance risk and learning?