Product Owner · Business Analysis Case Study
No-show rate
15% → 0%

Every month, roughly 1 in 7 appointments was simply lost. Then it wasn't. Here's how.

REVENUE GROWTH
+22%
revenue growth
MONTHLY VOLUME
5,000+
active bookings
MY ROLE
Sole PO
vision to delivery
Sector
Specialist hair removal
My role
Sole Product Owner / BA
Duration
9 years
Team
Sole PO with offshore development team
01
Understanding the Business

Product ownership from first principles — diagnose, prioritise, deliver, measure.

A specialist hair removal company with two locations. The business was busy — 5,000+ appointments a month across both locations — but the revenue didn't reflect the volume. Something was leaking.

Their booking system was legacy, manual in parts, and had no visibility into what was actually happening at a location level. Management felt the pain but couldn't quantify it. That's where I started.

The business objectives I was given
Reduce missed appointments — they were costing real money every month
Give management visibility — they needed to see what was happening, not just feel it
Improve the client experience — long-term retention, not just short-term fixes
Replace the legacy system incrementally — no big rewrite, it was live and running
The business at a glance
Monthly appointments 5,000+
Annual revenue £1M+
Locations Two
Revenue lost to no-shows 15%/mo
My role

Sole Product Owner working with an offshore development team. I owned the vision, the backlog, the requirements, and the outcomes. The team built what I prioritised.

"If it wasn't tied to a diagnosed problem, it didn't make the backlog."

02
Gathering Relevant Data
I built the evidence base before writing a single requirement

I didn't start with a solution. I started with questions.

I went into the salons first. Shadowed therapists, interviewed managers, spoke with clients. Then I built the reporting infrastructure to quantify what the observation work suggested.

Floor Observation
Qualitative
Shadowed therapists during peak and off-peak hours across two locations
Observed the manual check-in process and where delays occurred
Watched clients interact with the existing booking journey from mobile
Stakeholder Interviews
Qualitative
Interviewed salon managers at each location on their biggest operational frustrations
Spoke with 20 clients about their booking experience — what worked, what frustrated them, and what would make them more likely to keep appointments
Asked why, not just what — to find root causes, not symptoms
Data Extraction
Quantitative
Built 18 custom reports on the legacy system — each one tied to a specific business question
Extracted 11 months of booking, no-show, revenue, and client data from the highest-volume location as a baseline
Agreed definitions first — e.g. what counts as a "no-show" vs a "late cancellation" — before measuring anything
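The definitions are worth pinning down precisely, since the same rules were reused to measure the outcome later. A minimal Python sketch of how such a classification might look (field names and the 48-hour threshold are illustrative assumptions, not the production logic):

```python
from datetime import datetime, timedelta

# Illustrative threshold: an assumption, not the agreed production value.
LATE_CANCEL_WINDOW = timedelta(hours=48)

def classify_outcome(start_time: datetime,
                     cancelled_at: datetime | None,
                     attended: bool) -> str:
    """Apply the agreed definitions before counting anything.

    attended          -> client turned up
    cancellation      -> cancelled outside the window
    late_cancellation -> cancelled inside the window
    no_show           -> never cancelled, never arrived
    """
    if attended:
        return "attended"
    if cancelled_at is None:
        return "no_show"
    if start_time - cancelled_at < LATE_CANCEL_WINDOW:
        return "late_cancellation"
    return "cancellation"
```

Agreeing a rule like this up front is what made the 11-month baseline and the post-launch measurements directly comparable.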
18
Custom reports built
3,674
Missed appts in baseline
20
Clients interviewed
11
Months of data analysed
Data sample from pre-payment implementation phase
Monthly no-show rate (indexed): Jan 10.5% (peak) · Feb 9.2% · Mar 9.0% · Apr 8.8% · May 9.1% · Jun 8.2% (lowest) · Jul 10.5% (peak) · Aug 9.8% · Sep 9.4% · Oct 10.3% (peak) · Nov 9.3%. Average: 9.6%.

The pattern: Jan, Jul and Oct spike above the 9.6% average every year — post-holiday, mid-summer, post-summer. This is a seasonal signal. A flat reminder strategy was leaving recovery on the table in three predictable windows.

Figures indexed. Highest-volume location only (~3,000 appointments/month). Pattern reflects real data from 11 months of live bookings.
Month   Treatments   No-Shows   Rate       Signal
Jan     High         High       10.5%      ↑ Peak
Jun     Mid          Low        8.2%       ↓ Best month
Jul     Mid          High       10.5%      ↑ Peak
Oct     Mid          High       10.3%      ↑ Peak
Total   32,626       3,674      9.6% avg   Pattern identified
The intervention

Before deposits went live. After.

The no-show rate didn't gradually improve — it collapsed the month online deposits were enforced. One mechanism, one month, measurable result.

NO-SHOW RATE: BEFORE (NO DEPOSITS) vs AFTER (DEPOSITS LIVE)
Before deposits: 8.2% to 10.5% across 11 months (Jan to Nov). Deposits go live: ~1% by Month+1, ~0% by Month+3.
The rate had been consistently between 8.2% and 10.5% for 11 months with no meaningful trend downward. Within three months of deposits going live it reached effectively zero.
03
SWOT Analysis
Honest assessment before any build decisions

What I was actually working with

Before proposing any solutions I mapped the business position clearly. This shaped everything that came after — what to fix first, what was a risk, and where the real opportunity was.

S
Strengths
Established, loyal client base — high repeat booking rate
Two-location operation with proven demand at each site
Management buy-in — they wanted to fix this, not ignore it
Strong revenue baseline despite the inefficiency
W
Weaknesses
No upfront payment — zero financial commitment from clients at booking
No automated reminders — cancellations relied on client memory alone
No reporting visibility — management couldn't see patterns across locations
Legacy system with no API access — custom build required
O
Opportunities
15% no-show rate = immediate, recoverable revenue — fix this first
Client data existed but wasn't being used — reporting was untapped
Self-serve rescheduling could cut admin time and reduce hard cancellations
Google reviews near zero — a fixable problem with the right prompt strategy
T
Threats
Live system — any disruption during the transition would hit revenue directly
Client resistance to paying deposits — risk of booking drop-off
Off-the-shelf tools (Timely, Ovatu) couldn't handle multi-room bookings — no quick fix available
Small development team — scope had to be tightly controlled
04–05
Key Issues & Root Cause Analysis

The no-show problem had three root causes. Finding all three shaped the entire backlog.

On the surface it looked like a reminder problem. Dig deeper and it was a system design problem — the booking flow gave clients no reason to show up and no friction to cancel. Every root cause pointed to the same gap: no commitment mechanism at the point of booking.

Why this matters for backlog prioritisation

Once the root causes were clear, the backlog prioritised itself. Fix commitment first — deposit + policy. Then fix the reminder gap. Then make rescheduling easier than cancelling. Then build visibility. The sequence was driven by the diagnosis, not by opinion or gut feel.

Root cause diagram
Effect: 15% no-shows monthly.
Cause 1: No deposit required → zero commitment at booking.
Cause 2: No reminders sent → relied on client memory.
Cause 3: Cancelling easier than rescheduling → system incentivised drop-out.
Priority order — Issues ranked by revenue impact
1
No-shows — direct revenue loss every month
15% of all bookings. Most recoverable with the right mechanisms in place.
Fix immediately
2
No reporting visibility — decisions made on gut feel
Management couldn't see patterns. Problems weren't caught early enough.
Build alongside
3
Client experience friction — booking and rescheduling
Clients found it easier to cancel than reschedule. The system was working against retention.
Phase 2
4
Google reviews near zero — reputation not reflecting quality
The timing of review requests was wrong. A behavioural fix, not a system fix.
Phase 3
06
Proposing Solutions & Action Plan
Three phases. Each one building on the last.

I didn't build a booking system. I built a solution to a diagnosed problem.

Highest-impact fix first, prove it works, then layer the next priority. Every release was small, reversible, and tied to a measurable outcome.

Phase 1
Fix the foundation
Online deposits via Stripe — full payment required at the point of booking, not on arrival
Non-refundable within 48 hours of appointment — enforced by the system, not by staff asking awkward questions
One free reschedule permitted at any time before the appointment — this was deliberate. It redirected cancellation intent into a future booking rather than lost revenue
100% charge for no-shows — no exceptions. The policy was written into the booking confirmation so clients accepted it explicitly at the point of payment
Automated SMS reminders at 48h and 24h before appointment. Evaluated providers against integration complexity, cost and delivery reliability — selected TextMarketer based on fit with the existing stack. Trigger logic designed to give clients a clear window to reschedule before the charge applied.
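The interaction between the reminder triggers and the refund window is the mechanism doing the work: the 48h reminder lands exactly as the free-to-cancel window closes. A hypothetical Python sketch of that scheduling logic, not the TextMarketer integration itself (names and structure are my assumptions):

```python
from datetime import datetime, timedelta

# The 48h reminder deliberately coincides with the edge of the refund
# window, giving clients one last clear chance to reschedule before
# the charge locks in. Offsets mirror the policy described above.
REMINDER_OFFSETS = (timedelta(hours=48), timedelta(hours=24))
NON_REFUNDABLE_WINDOW = timedelta(hours=48)

def reminder_times(appointment_start: datetime) -> list[datetime]:
    """Timestamps at which the reminder SMS messages should fire."""
    return [appointment_start - offset for offset in REMINDER_OFFSETS]

def deposit_is_refundable(appointment_start: datetime,
                          cancelled_at: datetime) -> bool:
    """Refund rule enforced by the system, not by staff judgement."""
    return appointment_start - cancelled_at >= NON_REFUNDABLE_WINDOW
```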
Phase 2
Improve flow
Redesigned booking UX — the design work actually came first, with the full product journey in mind. Phases 2 and 3 were always part of the vision; Phase 1 simply shipped first because it had the highest impact.
Therapist dashboard on in-room iPads — real-time visibility of today's schedule, arrivals, and no-shows
Self check-in kiosk — clients tap their name on arrival, reducing front-desk pressure during busy periods
Client credits system — full ledger for managers to apply and adjust credits with a full audit trail
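An append-only ledger is the natural shape for that credits system: balances only ever change by adding rows, so the audit trail comes for free. A hypothetical sketch (field names and types are assumptions, not the shipped schema):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class CreditEntry:
    """One immutable ledger row. Adjustments are new rows, never edits."""
    client_id: str
    amount_pence: int   # positive = credit applied, negative = credit spent
    reason: str         # e.g. "goodwill", "late_cancel_exception"
    actor: str          # the manager who made the adjustment
    at: datetime

@dataclass
class CreditLedger:
    entries: list[CreditEntry] = field(default_factory=list)

    def apply(self, entry: CreditEntry) -> None:
        self.entries.append(entry)  # append-only: the history is the audit trail

    def balance(self, client_id: str) -> int:
        return sum(e.amount_pence for e in self.entries
                   if e.client_id == client_id)
```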
Phase 3
Grow reputation
Evening SMS review requests — sent 4 hours after appointment, when clients are home and relaxed, not rushing out the door
Location-specific branded URLs — tracked click-through rate per salon, not just overall
SerpApi integration — real-time Google review sentiment tracked inside the reporting dashboard
Reviews went from near zero to ~10 per week. A behavioural insight drove the fix, not a technological one.
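The review fix was a timing rule, which is easy to sketch. A hypothetical version of the send-time logic: the 4-hour delay comes from the case study, while the evening clamp is an illustrative assumption:

```python
from datetime import datetime, time, timedelta

REVIEW_DELAY = timedelta(hours=4)
EVENING_START, EVENING_END = time(18, 0), time(21, 0)

def review_request_time(appointment_end: datetime) -> datetime:
    """Send 4h after the appointment, nudged into an evening window."""
    candidate = appointment_end + REVIEW_DELAY
    if candidate.time() < EVENING_START:
        # Too early: hold until this evening.
        return candidate.replace(hour=18, minute=0, second=0, microsecond=0)
    if candidate.time() > EVENING_END:
        # Too late tonight: send tomorrow evening instead.
        tomorrow = candidate + timedelta(days=1)
        return tomorrow.replace(hour=18, minute=0, second=0, microsecond=0)
    return candidate
```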
The finished product
Booking platform — desktop, tablet and mobile views showing treatment selection, calendar booking and appointment management. Built on a custom stack to handle multi-room bookings in a single transaction.
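"Multi-room in a single transaction" is the requirement that ruled out the SaaS options below, so it is worth making concrete. A minimal sketch of the all-or-nothing booking write, assuming a SQL schedule table with a UNIQUE(room_id, slot_start) constraint (schema and names are hypothetical):

```python
import sqlite3

def book_multi_room(conn: sqlite3.Connection, client_id: int,
                    slots: list[tuple[int, str]]) -> None:
    """Reserve every (room_id, slot_start) pair, or none of them.

    All inserts run inside one transaction. If any room/slot is already
    taken, the UNIQUE constraint raises and the whole booking rolls
    back, so a half-booked multi-room appointment can never exist.
    """
    with conn:  # commits on success, rolls back on any exception
        for room_id, slot_start in slots:
            conn.execute(
                "INSERT INTO bookings (room_id, slot_start, client_id) "
                "VALUES (?, ?, ?)",
                (room_id, slot_start, client_id),
            )
```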
Why the deposit policy worked
The risk was real

Asking clients to pay upfront could have killed bookings. That had to be answered before anything else.

Brand trust made it viable

20 years of loyal clients who came back for quality, not price. Client conversations confirmed they would accept it if explained honestly.

Designed to feel fair

Pay upfront, one free reschedule at any time. Rescheduling was made easier than cancelling. Clients who planned to leave rebooked instead.

The result was immediate

The rate did not gradually improve. It collapsed. That is what happens when the mechanism is right, not just the intent.

Why off-the-shelf tools were ruled out

I evaluated multiple SaaS booking platforms before recommending a custom build. All were assessed against the same requirements.

PLATFORM       MULTI-ROOM   VERDICT
Get Timely     ✗ No         Ruled out
Ovatu          ✗ No         Ruled out
Custom build   ✓ Yes        Selected
How the live system was kept safe

Every change was small and reversible. Definitions were agreed with the team before development started. Each release was scoped to roll back cleanly if needed. Impact was checked against the same reports after every change before moving forward.

07
Monitoring & Results
Measured against the same reports that identified the problem

The numbers that confirmed it worked.

I defined the success metrics at the start and measured against them at the end using the same 18 custom reports I used to baseline the problem. Same data, same definitions — so there was no ambiguity about whether the team's work had delivered the intended outcome.

Context

Revenue increased by approximately 22% in 2022 compared to the 2019 pre-Covid baseline, during the post-Covid reopening period, supported by booking and payment workflow improvements.

Operating context
SME consumer services business
Two physical salon locations
Offshore development team
Sole Product Owner reporting to business owner
0%
No-show rate
↓ from 15% monthly
22%
Revenue growth
↑ vs 2019 baseline
~10
Google reviews per week
↑ from near zero
Three findings from the data that shaped decisions
① Observation
9.6% avg no-show rate across 11 months
The reports surfaced 3,674 missed appointments. The distribution wasn't even across the year.
② Question
"Is the pattern random, or is there a seasonal signal?"
If it was seasonal, a uniform reminder strategy was wrong and we were leaving money on the table in predictable windows.
③ Finding
Jan, Jul, Oct spiked well above average every year
Post-holiday, mid-summer, post-summer. Predictable cycles — not random client behaviour.
④ Decision
Seasonal reminder escalation added to the backlog
Extra 72h reminder triggered automatically in the three peak windows. Targeted, not blanket.
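A hypothetical sketch of that escalation rule (the peak months come from the finding; everything else is illustrative):

```python
from datetime import datetime, timedelta

# Peak no-show windows from the analysis: post-holiday (Jan),
# mid-summer (Jul) and post-summer (Oct).
PEAK_MONTHS = {1, 7, 10}

def reminder_offsets(appointment_start: datetime) -> list[timedelta]:
    """Blanket 48h/24h reminders, plus an extra 72h nudge in peak months."""
    offsets = [timedelta(hours=48), timedelta(hours=24)]
    if appointment_start.month in PEAK_MONTHS:
        offsets.insert(0, timedelta(hours=72))
    return offsets
```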
① Observation
Most frequent visitor had 4x more appointments than the top spender.
The data separated two segments the business had always treated as one.
② Question
"Are we treating loyal and valuable clients the same way?"
If frequent ≠ high-value, then a blanket promotion email to all clients was optimising for the wrong metric.
③ Finding
High spenders were price-insensitive. Promos were eroding their perceived value.
Top spenders bought premium services. Sending them discount emails was actively working against retention.
④ Decision
Spec'd a client_tier field into the data model from day one
Built segmentation into the architecture so future campaigns could target each group differently. A data model decision, not just a marketing one.
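A hypothetical sketch of the split behind that client_tier field (the thresholds and tier names are illustrative assumptions, not the shipped model):

```python
def client_tier(visits_per_year: int, annual_spend_gbp: float) -> str:
    """Separate frequency from value: the two segments the business
    had always treated as one."""
    high_value = annual_spend_gbp >= 1_000  # illustrative threshold
    frequent = visits_per_year >= 12        # illustrative threshold
    if high_value and frequent:
        return "vip"
    if high_value:
        return "premium"  # price-insensitive: exclude from discount promos
    if frequent:
        return "loyal"    # frequency-driven: reminders and perks work here
    return "standard"
```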
① Observation
New client volumes held in Q4. Spend per new client dropped.
One month brought the highest new client volume of the quarter. But average spend per head was the lowest.
② Question
"Are we acquiring the right kind of new clients?"
More bookings that generate less revenue and a lower return rate isn't growth — it's dilution.
③ Finding
Oct promotions were pulling price-sensitive clients unlikely to return at full price
The volume number looked healthy. The spend trend showed the opposite — a channel quality problem, not a volume one.
④ Decision
Two actions: product upsell for first-timers + marketing channel review
Added a first-visit upsell screen to the booking confirmation. Flagged to marketing: review Oct channel ROI, not just volume numbers.
What I'd do differently

I'd build the reporting layer earlier. The reports I spec'd in month three should have been in place from day one — even in a lightweight form. The data was always there. I just didn't instrument it fast enough.

I'd also formalise the client segmentation sooner. The frequency vs value insight came from manual analysis. It should have been an automated report from the start.

The principle behind everything

"Every report started with a business question, not a data request. If the answer wouldn't change a decision, the report didn't get built."

This applied to every feature too. If it couldn't be tied back to a diagnosed problem, it didn't make the backlog.
Tariq Iftikhar
Product Owner · Business Analyst · Manchester
PSPO I · PSM I Certified
9 years · Specialist hair removal sector
Availability
Open to Product Owner and Business Analyst roles in Manchester and remote. Available immediately.
Get in touch
TARIQ IFTIKHAR — PRODUCT OWNER CASE STUDY
Manchester · 2025