# Pricing Experiments
Test pricing strategies with A/B experiments before rolling them out to all customers.
## What Are Pricing Experiments?
Experiments let you test different prices with a portion of your traffic to find what works best:
```mermaid
flowchart LR
    subgraph Traffic[Incoming Bookings]
        A[All Visitors]
    end
    subgraph Split[Random Assignment]
        A --> B[50% Control]
        A --> C[50% Variant]
    end
    subgraph Pricing[Different Prices]
        B --> D[Current Price: R400]
        C --> E[Test Price: R450]
    end
    subgraph Results[Measure]
        D --> F[Compare Conversions & Revenue]
        E --> F
    end
```
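The random-assignment step is typically deterministic per visitor, so a returning visitor always sees the same price. A minimal sketch of hash-based bucketing (the function name, IDs, and 50/50 default are illustrative assumptions, not the product's actual implementation):

```python
import hashlib

def assign_variant(visitor_id: str, experiment_id: str, control_share: float = 0.5) -> str:
    """Deterministically bucket a visitor into control or variant.

    Hashing visitor_id together with experiment_id yields a stable,
    roughly uniform value in [0, 1), so the same visitor always lands
    in the same arm for a given experiment.
    """
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # 8 hex digits -> [0, 1)
    return "control" if bucket < control_share else "variant"

# Assignment is stable across repeat visits:
assert assign_variant("visitor-42", "weekend-premium") == assign_variant("visitor-42", "weekend-premium")
```

Deterministic bucketing also keeps the two arms independent: changing the traffic split only moves the threshold, not which hash a visitor gets.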
## Creating an Experiment
### Step 1: Define Your Hypothesis
Before starting, be clear about what you're testing:
- "A 10% price increase will not significantly reduce bookings"
- "A twilight discount will increase afternoon utilization"
- "Weather-based pricing will improve wet-day revenue"
### Step 2: Create the Experiment
1. Go to **Pricing > Experiments**.
2. Click **New Experiment**.
3. Configure:
| Field | Description |
|---|---|
| Name | Descriptive name (e.g., "Weekend Premium Test") |
| Control Rule | Your current pricing rule |
| Variant Rule | The new pricing to test |
| Traffic Split | Percentage for each (default 50/50) |
| Duration | How long to run (recommended: 2-4 weeks) |
### Step 2A: Example Experiment (plain language)

```text
Name:         Weekend Premium Increase
Status:       Running
Scope:        Course
Traffic %:    50
Control rule: Weekend Premium +15%
Variant rule: Weekend Premium +25%
Start date:   This Friday
End date:     4 weeks from start
```
### Step 3: Set Success Metrics
Choose what to measure:
| Metric | What It Tells You |
|---|---|
| Conversion Rate | % of viewers who book |
| Revenue per Slot | Total revenue / available slots |
| Booking Volume | Number of bookings |
| Average Order Value | Revenue per booking |
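All four metrics derive directly from raw counts. A quick sketch using the control-arm numbers from the example later on this page (the 500-slot figure is an illustrative assumption):

```python
def experiment_metrics(visitors: int, bookings: int, revenue: float, slots: int) -> dict:
    """Compute the four success metrics from raw experiment counts."""
    return {
        "conversion_rate": bookings / visitors,     # % of viewers who book
        "revenue_per_slot": revenue / slots,        # total revenue / available slots
        "booking_volume": bookings,                 # number of bookings
        "average_order_value": revenue / bookings,  # revenue per booking
    }

m = experiment_metrics(visitors=1240, bookings=186, revenue=74400, slots=500)
print(f"{m['conversion_rate']:.1%}")  # 15.0%
```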
## Experiment Lifecycle
```mermaid
stateDiagram-v2
    [*] --> Draft: Create experiment
    Draft --> Running: Start experiment
    Running --> Paused: Pause if needed
    Paused --> Running: Resume
    Running --> Completed: End date reached
    Running --> Cancelled: Stop early
    Completed --> [*]: Apply winner or discard
```
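The lifecycle reduces to an allowed-transition table. A sketch (not the product's actual code) that mirrors the diagram, plus the Paused → Cancelled path implied by the pause procedure later on this page:

```python
# Allowed status transitions, mirroring the lifecycle diagram.
TRANSITIONS = {
    "Draft": {"Running"},
    "Running": {"Paused", "Completed", "Cancelled"},
    "Paused": {"Running", "Cancelled"},  # resume, or cancel after investigating
    "Completed": set(),   # terminal: apply winner or discard
    "Cancelled": set(),   # terminal: stopped without conclusion
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the status change is allowed by the lifecycle."""
    return target in TRANSITIONS.get(current, set())
```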
### Status Descriptions
| Status | Meaning |
|---|---|
| Draft | Configured but not active |
| Running | Live and collecting data |
| Paused | Temporarily stopped |
| Completed | Finished, review results |
| Cancelled | Stopped without conclusion |
## Reading Results
### Key Metrics
After your experiment runs, review:
```text
Experiment: Weekend Premium Test
Duration:   14 days
---------------------------------
              Control    Variant
Visitors:     1,240      1,235
Bookings:     186        172
Conv. Rate:   15.0%      13.9%
Revenue:      R74,400    R77,400
Rev/Visitor:  R60.00     R62.67
```
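The derived rows follow directly from the raw counts; an illustrative check:

```python
arms = {
    "Control": {"visitors": 1240, "bookings": 186, "revenue": 74400},
    "Variant": {"visitors": 1235, "bookings": 172, "revenue": 77400},
}

for name, a in arms.items():
    conv = a["bookings"] / a["visitors"]  # conversion rate
    rpv = a["revenue"] / a["visitors"]    # revenue per visitor
    print(f"{name}: conv={conv:.1%}, rev/visitor=R{rpv:.2f}")
# Control: conv=15.0%, rev/visitor=R60.00
# Variant: conv=13.9%, rev/visitor=R62.67
```

Note the pattern this example is built to show: the variant converts slightly worse but earns more per visitor, so the two metrics can point in different directions.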
### Statistical Significance
Results include a confidence indicator:
| Confidence | Meaning |
|---|---|
| < 80% | Not enough data, keep running |
| 80-95% | Trending, but not conclusive |
| > 95% | Statistically significant |
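For conversion-rate experiments, a confidence figure like this typically comes from a two-proportion z-test. A self-contained sketch of that standard test (the product may compute confidence differently; this is an assumption):

```python
from math import erf, sqrt

def confidence(conversions_a: int, n_a: int, conversions_b: int, n_b: int) -> float:
    """Confidence (1 minus the two-sided p-value) that two conversion
    rates differ, using a pooled two-proportion z-test."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Standard normal CDF via erf gives the two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return 1 - p_value

# Using the Key Metrics example above: the conversion gap alone is
# well below the 80% band, which is why revenue per visitor matters too.
c = confidence(186, 1240, 172, 1235)
```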
> **Wait for significance:** Don't end experiments early. Let them reach 95% confidence for reliable results.
## Quick Start Checklist
- Pick one change to test (price only).
- Set Traffic % to 10-20% for the first run.
- Run for at least 2 weeks to avoid short-term noise.
- Review results and only apply when confidence is high.
## Best Practices
### Experiment Design
- **Test one thing at a time**: don't change price and conditions together.
- **Use meaningful differences**: test 10-20% changes, not 1-2%.
- **Run long enough**: at least 2 weeks, ideally 4.
- **Pick representative periods**: avoid holidays or unusual weeks.
### What to Test
Good experiment ideas:
| Test | Control | Variant |
|---|---|---|
| Price sensitivity | R400 | R450 (+12.5%) |
| Twilight timing | -20% after 4pm | -20% after 3pm |
| Weekend premium | +15% | +25% |
| Lead time discount | -5% at 14 days | -10% at 14 days |
### What Not to Test
Avoid experiments that:
- Could upset regular members (test with visitors first)
- Run during peak tournament season
- Have too small a price difference to measure
## Managing Experiments
### Pausing an Experiment
If something goes wrong:
1. Go to **Pricing > Experiments**.
2. Find the active experiment.
3. Click **Pause**.
4. Investigate the issue.
5. **Resume** or **Cancel** as appropriate.
### Ending Early
Sometimes you need to stop early:
- Clear winner: One variant significantly outperforms (>95% confidence)
- Clear loser: Variant is hurting revenue badly
- External factors: Event or situation makes data invalid
### Applying Results
After a successful experiment:
1. Review the **Results** tab.
2. If the variant won, click **Apply Winner**.
3. This promotes the variant rule to your standard pricing.
4. The experiment is archived.
### Conflict Prevention
The system prevents conflicting experiments:
- Only one experiment can run per price rule
- Overlapping date ranges are blocked
- You'll see warnings if experiments might interfere
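The overlap check reduces to comparing date ranges per price rule. A sketch of the idea (the field names here are illustrative, not the product's schema):

```python
from datetime import date

def ranges_overlap(start_a: date, end_a: date, start_b: date, end_b: date) -> bool:
    """Two inclusive date ranges overlap iff each starts on or before the other ends."""
    return start_a <= end_b and start_b <= end_a

def conflicting(new_exp: dict, running: list[dict]) -> list[dict]:
    """Running experiments that target the same price rule and overlap in time."""
    return [
        e for e in running
        if e["rule_id"] == new_exp["rule_id"]
        and ranges_overlap(new_exp["start"], new_exp["end"], e["start"], e["end"])
    ]
```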
## Example: Testing a Weekend Premium
### Setup

```text
Name:     Weekend Premium Increase
Control:  Current weekend rate (+15%)
Variant:  Higher weekend rate (+25%)
Split:    50/50
Duration: 4 weeks
Success:  Revenue per available slot
```
### Results After 4 Weeks

```text
             Control (+15%)   Variant (+25%)
Bookings:    324              298
Revenue:     R149,040         R156,450
Rev/Slot:    R298             R313

Confidence:  97.2%
Winner:      Variant (+25%)
Insight:     8% fewer bookings, but 5% more revenue
```
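The insight line follows from the raw rows; an illustrative check of the arithmetic:

```python
control_bookings, variant_bookings = 324, 298
control_revenue, variant_revenue = 149_040, 156_450

booking_change = (variant_bookings - control_bookings) / control_bookings
revenue_change = (variant_revenue - control_revenue) / control_revenue
print(f"bookings {booking_change:+.0%}, revenue {revenue_change:+.0%}")
# bookings -8%, revenue +5%
```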
### Decision
With 97% confidence that the variant generates more revenue, apply the +25% weekend premium as the new standard.
## Troubleshooting
### Results are not changing
- Confirm the experiment status is Running.
- Make sure the Traffic % is greater than 0.
- Ensure the selected Control and Variant rules are active.
### Results show 0 exposure or 0 bookings
- Verify the experiment date range includes current dates.
- Confirm the experiment scope matches the course you are testing.
- Increase Traffic % temporarily to confirm data is flowing.
### A variant looks worse but confidence is low
- Let the test run longer.
- Avoid changing rules mid-experiment.
- If the variant is clearly harming revenue, pause and cancel.