CRO · Research · Experimentation · 2022–2024

Turning hypotheses into measurable outcomes.

Hypothesis writing, experiment design, and end-to-end QA — from concept through to post-launch validation. A practice built on the belief that good design should be provable.

Role
UX Design Lead
Experiments
8+
Win rate
68%
Platform
Banking
CRO program · Hypothesis framework · 2023

THE ROLE

Hypothesis to handoff — and everything in between.

My work in the CRO program spanned two modes. For simpler preference tests — layout variants, copy changes, visual hierarchy — I owned hypothesis writing and design end-to-end, with builds handled internally for speed. For more complex concepts, I prepared fully documented design proposals for handoff to our external development partner.

Across both tracks, I also held QA responsibilities: reviewing proposed design concepts for feasibility and consistency, then running pre- and post-go-live checks to ensure experiments launched and resolved cleanly.

8+ experiments designed and 30+ experiments QA'd. 68% win rate. Every test began with a written hypothesis — and ended with a documented result.

WHAT I DID

Building an experimentation practice from scratch.

  • Wrote hypotheses for every test — belief, rationale, metric, and falsifiability criteria
  • Designed internal-build tests: fast preference variants, copy and layout experiments
  • Prepared handoff-ready design concepts for complex tests going to external dev
  • QA'd proposed design concepts across all experiments before build commenced
  • Ran pre-go-live QA to verify experiments launched as designed
  • Conducted post-go-live checks to confirm tracking, rendering, and result validity

02 · Experiment results

Eight experiments that moved the needle.

Highlights from the experiment program — each shipped after at least one round of qualitative validation, with confidence rated against statistical power and sample stability.
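As a rough illustration of how a result's confidence rating can be sanity-checked against statistical power, the sketch below runs a two-proportion z-test on a conversion uplift. The numbers and the helper function are hypothetical, not the program's actual tooling:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical traffic: control 80/1000 conversions, variant 103/1000.
z, p = two_proportion_z(80, 1000, 103, 1000)
# z is roughly 1.78, p roughly 0.07: suggestive, but below the bar for
# a "High" confidence call at 95% -- the sample would need to keep running.
```

A p-value just above 0.05 at these sample sizes is exactly the case where a Medium rather than High confidence rating is warranted.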

Apply from category page

Delivered a shortened application journey from the comparison table.

+7.24% CVR ↑ · +10% Visit to product page ↑

Confidence: High · Proposal: Implementation

Eligibility to apply from Credit Card category page

Directed users straight to application options instead of an extra page of eligibility requirements.

+29.33% CVR ↑ · -30% Exit rate ↓

Confidence: High · Proposal: Implementation

Online account open

Requested by the product team; intended to improve the readiness of users starting an application.

-6.94% CVR ↓ · -11% Lead begin ↓

Confidence: High · Proposal: Iteration

Redesigned journey for eligibility check

Gave eligibility criteria greater prominence on the credit card page.

+15.75% CVR ↑ · -3.22% Bounce rate ↓

Confidence: Medium · Proposal: Implementation

Market validation for product award

Designed a concept to determine the value generated by awards.

+10.95% CVR ↑ · +4.6% Engagement rate ↑

Confidence: High · Proposal: Implementation

Elevate popular transaction account

Increased the prominence of the more popular products while communicating key differences between them.

+7.2% CVR ↑ · +6% Lead begin ↑

Confidence: Medium · Proposal: Implementation

Elevate popular savings account

Increased the prominence of the more popular products while communicating key differences between them.

+23.14% Completion ↑ · +13% Lead begin ↑

Confidence: High · Proposal: Implementation

Personal page directory

Designed to get users to their desired product quickly.

+2.63% Visit to product page ↑ · +0.93% Visit to category page ↑

Confidence: High · Proposal: Implementation

03 · Principles

How we ran experiments.

1

Hypothesis first

No experiment runs without a written hypothesis: what we believe, why we believe it, what we'll measure, and what result would change our mind. Written, shared, and challenged before a single pixel moves.
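A hypothesis of that shape can be captured as a simple structured record. The field names and example values below are illustrative, not the program's actual template:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    belief: str     # what we believe will happen
    rationale: str  # why we believe it (research, analytics, prior tests)
    metric: str     # the primary measure that decides the test
    falsifier: str  # the result that would change our mind

# Hypothetical example in the spirit of the eligibility experiments above.
h = Hypothesis(
    belief="Surfacing eligibility criteria on the category page lifts CVR",
    rationale="Session recordings show users exiting at the eligibility step",
    metric="Application CVR from category page",
    falsifier="No CVR lift, or a rise in exit rate, at 95% confidence",
)
```

Forcing every field to be filled in before design starts is what makes the hypothesis challengeable, and the `falsifier` field is the "what would change our mind" test applied before a single pixel moves.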

2

Qual + Quant together

Numbers tell you what happened. Interviews tell you why. Every major experiment included at least 5 moderated sessions alongside the quantitative data — because a 12% uplift means nothing if you don't understand the mechanism.

3

Learning over winning

A 32% loss rate isn't failure — it's the cost of learning. We treated losing experiments with the same rigour as wins: post-mortem, published insights, design implications documented and shared.

8+
Experiments run
68%
Win rate
3
Major conversion lifts
12wk
Experiment cadence

04 · Outcomes

Results that compounded.

  • Faster experiment velocity achieved by building simple preference tests internally using the existing CSS component library — reducing time from hypothesis to live by up to 60% compared to external dev handoffs
  • Complex navigation restructure handed off to external dev partner with full annotated design documentation — shipped cleanly on first go-live check
  • Zero experiment invalidations due to QA failures across all 40+ tests in the program
  • A shared insights library whose findings are actively referenced in quarterly planning

"The most valuable outcome wasn't any single experiment. It was teaching a team to ask 'what would change our mind?' before every decision."

05 · Why it matters

Experimentation as a design practice.

CRO isn't a discipline separate from UX. It's how design proves it works.

Evidence replaces opinion

  • Design gets a seat at the table: When design decisions come with statistical evidence, they're treated differently in product conversations — not as preferences but as findings.
  • Faster decisions: A well-run experiment resolves debates that might otherwise consume three sprint reviews of subjective feedback.

Learning compounds

  • Every result teaches something: Even losing experiments narrow the solution space. A 32% loss tells you something that informs the next hypothesis — if you bother to document it.
  • Institutional knowledge: The insights library means no finding gets lost when someone leaves. The team's understanding grows independently of any individual.

"A well-formed hypothesis is already half the design work done. It forces you to be honest about what you actually believe — and what would change your mind." — Esha Patel

Evidence-led design, every time.

The program changed more than metrics. It changed how the team thinks about certainty, learning, and what it means to be right.
