Pricing Experiments

Test pricing strategies with A/B experiments before rolling them out to all customers.

What Are Pricing Experiments?

Experiments let you test different prices with a portion of your traffic to find what works best:

flowchart LR
subgraph Traffic[Incoming Bookings]
A[All Visitors]
end

subgraph Split[Random Assignment]
A --> B[50% Control]
A --> C[50% Variant]
end

subgraph Pricing[Different Prices]
B --> D[Current Price: R400]
C --> E[Test Price: R450]
end

subgraph Results[Measure]
D --> F[Compare Conversions & Revenue]
E --> F
end
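The random assignment in the diagram is typically done deterministically, so a returning visitor always sees the same price. Below is a minimal sketch of one common approach (hashing the visitor and experiment IDs into a bucket); the function name and IDs are illustrative, not part of the product's API.

```python
import hashlib

def assign_bucket(visitor_id: str, experiment_id: str, traffic_split: float = 0.5) -> str:
    """Deterministically assign a visitor to 'control' or 'variant'.

    Hashing visitor and experiment IDs together keeps each visitor in the
    same bucket for the whole experiment, while different experiments get
    independent splits.
    """
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a fraction in [0, 1).
    fraction = int(digest[:8], 16) / 0x100000000
    return "control" if fraction < traffic_split else "variant"

# A visitor always lands in the same bucket for a given experiment.
bucket = assign_bucket("visitor-123", "weekend-premium-test")
assert bucket == assign_bucket("visitor-123", "weekend-premium-test")
```

Because the split is driven by a hash rather than a coin flip per page view, roughly 50% of visitors land in each arm at the default split, without storing any per-visitor state.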

Creating an Experiment

Step 1: Define Your Hypothesis

Before starting, be clear about what you're testing:

  • "A 10% price increase will not significantly reduce bookings"
  • "A twilight discount will increase afternoon utilization"
  • "Weather-based pricing will improve wet-day revenue"

Step 2: Create the Experiment

  1. Go to Pricing > Experiments
  2. Click New Experiment
  3. Configure:
Field          Description
Name           Descriptive name (e.g., "Weekend Premium Test")
Control Rule   Your current pricing rule
Variant Rule   The new pricing to test
Traffic Split  Percentage for each (default 50/50)
Duration       How long to run (recommended: 2-4 weeks)
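The configuration fields above map naturally onto a simple record. This sketch uses a hypothetical dataclass (not the product's API) to show how the pieces fit together; the dates are illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PricingExperiment:
    # Field names mirror the form above; this class is illustrative only.
    name: str
    control_rule: str
    variant_rule: str
    traffic_split: float  # fraction of traffic sent to the variant
    start_date: date
    end_date: date

exp = PricingExperiment(
    name="Weekend Premium Test",
    control_rule="Weekend Premium +15%",
    variant_rule="Weekend Premium +25%",
    traffic_split=0.5,
    start_date=date(2025, 6, 6),
    end_date=date(2025, 7, 4),  # 4 weeks, the recommended upper bound
)
assert (exp.end_date - exp.start_date).days == 28
```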

Step 2A: Example Experiment (plain language)

Name: Weekend Premium Increase
Status: Running
Scope: Course
Traffic %: 50
Control rule: Weekend Premium +15%
Variant rule: Weekend Premium +25%
Start date: This Friday
End date: 4 weeks from start

Step 3: Set Success Metrics

Choose what to measure:

Metric               What It Tells You
Conversion Rate      % of viewers who book
Revenue per Slot     Total revenue / available slots
Booking Volume       Number of bookings
Average Order Value  Revenue per booking
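All four metrics derive from the same raw counts. A minimal sketch, using a hypothetical helper and the control-arm numbers from the results example later on this page (the slot count of 500 is an assumption, inferred from the revenue-per-slot figures in the 4-week example):

```python
def experiment_metrics(visitors: int, bookings: int, revenue: float, slots: int) -> dict:
    """Compute the four success metrics from raw counts (illustrative helper)."""
    return {
        "conversion_rate": bookings / visitors,        # % of viewers who book
        "revenue_per_slot": revenue / slots,           # total revenue / available slots
        "booking_volume": bookings,                    # number of bookings
        "average_order_value": revenue / bookings,     # revenue per booking
    }

# Control arm from the 14-day results example below (slots assumed to be 500).
m = experiment_metrics(visitors=1240, bookings=186, revenue=74400, slots=500)
assert round(m["conversion_rate"], 3) == 0.150   # 15.0%
assert m["average_order_value"] == 400.0         # R400 per booking
```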

Experiment Lifecycle

stateDiagram-v2
[*] --> Draft: Create experiment
Draft --> Running: Start experiment
Running --> Paused: Pause if needed
Paused --> Running: Resume
Running --> Completed: End date reached
Running --> Cancelled: Stop early
Completed --> [*]: Apply winner or discard

Status Descriptions

Status     Meaning
Draft      Configured but not active
Running    Live and collecting data
Paused     Temporarily stopped
Completed  Finished, review results
Cancelled  Stopped without conclusion
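The lifecycle above can be encoded as a small transition table. This is an illustrative sketch, not the product's implementation; the Paused-to-Cancelled transition reflects the pausing procedure in "Managing Experiments" (Resume or Cancel), which the diagram does not draw explicitly.

```python
# Allowed status transitions from the state diagram above.
TRANSITIONS = {
    "Draft":     {"Running"},
    "Running":   {"Paused", "Completed", "Cancelled"},
    "Paused":    {"Running", "Cancelled"},  # resume, or cancel after investigating
    "Completed": set(),                     # terminal: apply winner or discard
    "Cancelled": set(),                     # terminal: stopped without conclusion
}

def can_transition(current: str, target: str) -> bool:
    """Return True if an experiment may move from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())

assert can_transition("Running", "Paused")
assert not can_transition("Completed", "Running")  # finished experiments stay finished
```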

Reading Results

Key Metrics

After your experiment runs, review:

Experiment: Weekend Premium Test
Duration: 14 days
---------------------------------
              Control    Variant
Visitors:     1,240      1,235
Bookings:     186        172
Conv. Rate:   15.0%      13.9%
Revenue:      R74,400    R77,400
Rev/Visitor:  R60.00     R62.67

Statistical Significance

Results include a confidence indicator:

Confidence  Meaning
< 80%       Not enough data, keep running
80-95%      Trending, but not conclusive
> 95%       Statistically significant

Wait for Significance

Don't end experiments early. Let them reach 95% confidence for reliable results.
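For conversion rate, the confidence figure can be computed with a standard two-proportion z-test. This is a textbook method shown for intuition; the product may use a different calculation internally. Running it on the 14-day results above shows why "keep running" is the right call there:

```python
import math

def conversion_confidence(control_visitors: int, control_bookings: int,
                          variant_visitors: int, variant_bookings: int) -> float:
    """Two-sided confidence that the conversion rates truly differ,
    via a standard two-proportion z-test (illustrative, not the
    product's internal formula)."""
    p1 = control_bookings / control_visitors
    p2 = variant_bookings / variant_visitors
    pooled = (control_bookings + variant_bookings) / (control_visitors + variant_visitors)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
    z = abs(p1 - p2) / se
    # Normal CDF via the error function; confidence = 1 - two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value

# The 14-day results above: 15.0% vs 13.9% conversion.
conf = conversion_confidence(1240, 186, 1235, 172)
assert conf < 0.80  # still in the "not enough data, keep running" band
```

A 1.1-point conversion gap on ~1,200 visitors per arm lands well below 80% confidence, which is exactly why the table above says to keep collecting data.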

Quick Start Checklist

  1. Pick one change to test (price only).
  2. Set Traffic % to 10-20% for the first run.
  3. Run for at least 2 weeks to avoid short-term noise.
  4. Review results and only apply when confidence is high.

Best Practices

Experiment Design

  1. Test one thing at a time - Don't change price AND conditions
  2. Use meaningful differences - Test 10-20% changes, not 1-2%
  3. Run long enough - At least 2 weeks, ideally 4
  4. Pick representative periods - Avoid holidays or unusual weeks

What to Test

Good experiment ideas:

Test                Control         Variant
Price sensitivity   R400            R450 (+12.5%)
Twilight timing     -20% after 4pm  -20% after 3pm
Weekend premium     +15%            +25%
Lead time discount  -5% at 14 days  -10% at 14 days

What Not to Test

Avoid experiments that:

  • Could upset regular members (test with visitors first)
  • Run during peak tournament season
  • Have too small a price difference to measure

Managing Experiments

Pausing an Experiment

If something goes wrong:

  1. Go to Pricing > Experiments
  2. Find the active experiment
  3. Click Pause
  4. Investigate the issue
  5. Resume or Cancel as appropriate

Ending Early

Sometimes you need to stop early:

  • Clear winner: One variant significantly outperforms (>95% confidence)
  • Clear loser: Variant is hurting revenue badly
  • External factors: Event or situation makes data invalid

Applying Results

After a successful experiment:

  1. Review the Results tab
  2. If variant won, click Apply Winner
  3. This promotes the variant rule to your standard pricing
  4. The experiment is archived

Conflict Prevention

The system prevents conflicting experiments:

  • Only one experiment can run per price rule
  • Overlapping date ranges are blocked
  • You'll see warnings if experiments might interfere
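The overlap check behind those rules comes down to comparing date ranges per price rule. A minimal sketch under assumed names (the functions and the tuple shape are illustrative, not the product's API):

```python
from datetime import date

def ranges_overlap(start_a: date, end_a: date, start_b: date, end_b: date) -> bool:
    """Two inclusive date ranges overlap unless one ends before the other starts."""
    return start_a <= end_b and start_b <= end_a

def conflicts(rule: str, start: date, end: date, running: list) -> bool:
    """`running` is a list of (rule, start, end) for active experiments (illustrative)."""
    return any(r == rule and ranges_overlap(start, end, s, e) for r, s, e in running)

active = [("Weekend Premium", date(2025, 6, 6), date(2025, 7, 4))]
# Same rule with overlapping dates is blocked...
assert conflicts("Weekend Premium", date(2025, 6, 20), date(2025, 7, 10), active)
# ...but a different rule can run in parallel.
assert not conflicts("Twilight Discount", date(2025, 6, 20), date(2025, 7, 10), active)
```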

Example: Testing a Weekend Premium

Setup

Name: Weekend Premium Increase
Control: Current weekend rate (+15%)
Variant: Higher weekend rate (+25%)
Split: 50/50
Duration: 4 weeks
Success: Revenue per available slot

Results After 4 Weeks

             Control (+15%)  Variant (+25%)
Bookings:    324             298
Revenue:     R149,040        R156,450
Rev/Slot:    R298            R313
Confidence:  97.2%

Winner: Variant (+25%)
Insight: 8% fewer bookings but 5% more revenue
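The headline trade-off is easy to verify from the raw numbers above:

```python
# Verify the insight from the 4-week results: fewer bookings, more revenue.
control_bookings, variant_bookings = 324, 298
control_rev, variant_rev = 149_040, 156_450

booking_drop = 1 - variant_bookings / control_bookings   # fraction of bookings lost
revenue_lift = variant_rev / control_rev - 1             # fractional revenue gain

assert round(booking_drop * 100) == 8   # ~8% fewer bookings
assert round(revenue_lift * 100) == 5   # ~5% more revenue
```

The higher price more than compensates for the lost volume, which is the pattern that makes a price increase a winner.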

Decision

With 97% confidence that the variant generates more revenue, apply the +25% weekend premium as the new standard.

Troubleshooting

Results are not changing

  1. Confirm the experiment status is Running.
  2. Make sure the Traffic % is greater than 0.
  3. Ensure the selected Control and Variant rules are active.

Results show 0 exposure or 0 bookings

  1. Verify the experiment date range includes current dates.
  2. Confirm the experiment scope matches the course you are testing.
  3. Increase Traffic % temporarily to confirm data is flowing.

A variant looks worse but confidence is low

  1. Let the test run longer.
  2. Avoid changing rules mid-experiment.
  3. If the variant is clearly harming revenue, pause and cancel.