Research + the scientific method + data analysis = optimization
Week 3 of CXL Institute’s Growth Marketing Minidegree covered running growth experiments, conversion research, A/B testing, and statistics. This week’s summary introduces the first layer of growth jargon and processes. We’ve arrived at the (even more) fun stuff.
The growth function
The growth business function optimizes the online experience for increased profits. Growth professionals leverage experiments and testing to discover what matters. One concept from the course that illustrates the idea is the buyer journey as a sequence of funnels. Although not every customer takes the same journey on the site from initial visit to repeat purchase, the sequence can be generalized into stages.
An e-commerce example could look like: home page, “about” page, product page, filtered search, add to cart, checkout page, shipping page, order confirmation page. At each of these stages, visitors will either fall off or funnel into the next page. The major, although simplified, goal of growth is to increase the number of users who funnel all the way down the buyer journey (conversions).
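To make the funnel idea concrete, here is a minimal sketch of computing drop-off from per-stage visitor counts. The stage names and numbers are invented for illustration, not from the course:

```python
# Hypothetical per-stage visitor counts for one month (illustrative only).
funnel = [
    ("home page", 10000),
    ("product page", 4000),
    ("add to cart", 1200),
    ("checkout page", 600),
    ("order confirmation", 300),
]

def stage_conversion(funnel):
    """Yield (stage, conversion vs. previous stage, conversion vs. entry)."""
    first = funnel[0][1]
    prev = first
    for name, count in funnel:
        yield name, count / prev, count / first
        prev = count

for name, step_rate, overall in stage_conversion(funnel):
    print(f"{name:20s} step: {step_rate:6.1%}  overall: {overall:6.1%}")
```

With these numbers, only 3% of home-page visitors reach the order confirmation, and the biggest single drop (product page to add to cart) is where optimization effort would likely start.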
The growth process
Growth marketers optimize the buyer journey through experiments. The process of identifying, designing, and prioritizing tests looks a lot like the scientific method. My inner science major found the comparison fascinating. The process looks something like:
- Conduct conversion research
- Rank issues in order of priority
- Determine which issues need A/B tests
- Outline and prioritize a testing program
- Set a hypothesis
- Execute the test
- Analyze the results
- Implement and learn
- Repeat the cycle
An important note: growth marketing and conversion optimization only exist with sufficient traffic and customers. Until then, all you can do is talk to customers and implement changes based on feedback. The course mentions two benchmarks: 1,000 unique site visitors (per page) per month to start optimization and 1,000 conversions per month to begin A/B testing.
1. Conduct conversion research
Research is the foundation of all testing (and marketing at large). The digital world holds plenty of data, and the research process finds the right data to answer targeted questions. What behaviors indicate problems in the customer journey? What motivations explain the behavior? What can we do differently based on the selected data? To answer, growth marketers look at the following:
Heuristic analysis: An experience-based assessment for problem solving, learning, and discovery. Heuristic analysis involves going through your company’s website in a structured manner to notice and document the clarity, friction, anxiety, and distraction of each page.
Technical analysis: Does the site perform well on all devices, browsers, browser versions, and operating systems? How fast does the website load?
Web analytics analysis: A deep-dive into Google Analytics and any other supporting web analytics tools. Are your analytics configured properly? Who’s dropping off and where?
Mouse tracking: Heat-maps, session replays, and engagement depth per page. Where do people look? Where do they lose interest and click away?
Qualitative surveys: Why did your recent customers buy? What is someone thinking when on a page? Why do website visitors not buy? Surveys help identify undiscovered sources of friction by simply asking. Other supporting qualitative data can include phone interviews, live chat transcripts, and customer support insights.
User testing: Gather people who understand the product but have never seen your website before. Can they complete specific tasks on the site? How do they go about the customer journey start to finish? Watching can uncover bottlenecks and reveal how people who don’t spend all day looking at your website navigate around.
2. Rank issues in order of priority
CXL recommends a 5-star system: 5 = a severe issue affecting many users, 1 = a minor usability issue affecting a few people.
3. Determine which issues need A/B tests
An A/B test is an experiment where 50% of users see the existing “default” webpage and 50% see a modified “challenger” version. After 2–4 weeks, the behaviors of both groups are compared and evaluated for a significant difference in conversion rates (desired behavior).
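One common way to implement the 50/50 split is deterministic hash-based bucketing, so each user sees the same variant on every visit. This is a generic sketch (the function and experiment names are my own, not from the course or any particular testing tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into 'default' or 'challenger'.

    Hashing (experiment name + user id) keeps each user in the same
    variant across visits, while different experiments get
    independent splits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number from 0 to 99
    return "default" if bucket < 50 else "challenger"

print(assign_variant("user42", "checkout-test"))
```

Because the assignment depends only on the inputs, there is no per-user state to store, and the split converges to roughly 50/50 as traffic grows.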
Only one type of issue needs tests: when the research indicates a problem, but the best solution isn’t clear. Non-test issues include:
- Instrumentation: not measuring the data that needs to be measured
- Incorrect data
- Investigation: need more research to determine the true problem
- Just-do-it: no-brainer fixes (straightforward usability issues such as broken buttons)
4. Outline and prioritize a testing program
An average website with average traffic can run about one A/B test every four weeks, so you can’t run an unlimited number of experiments. In an effective testing program, each proposed test should be scored and prioritized based on: potential for success, impact, power, and ease of testing and implementation.
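Prioritization can be as simple as rating each proposed test on a few criteria and ranking by the average. The sketch below scores three of the listed criteria on a 1–5 scale with equal weights; the weights, scale, and example test names are my assumptions, not CXL’s actual scoring model:

```python
def priority_score(potential: int, impact: int, ease: int) -> float:
    """Average a proposed test's 1-5 ratings into a single score.

    Equal weights are an assumption; a real program would tune them.
    """
    for rating in (potential, impact, ease):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return (potential + impact + ease) / 3

# Hypothetical backlog, ranked highest score first.
backlog = [
    ("simplify checkout form", priority_score(5, 4, 3)),
    ("new hero image", priority_score(2, 2, 5)),
]
backlog.sort(key=lambda item: item[1], reverse=True)
print(backlog)
```

The point is less the arithmetic than the discipline: every idea gets the same scoring treatment before it competes for a testing slot.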
5. Set a hypothesis
A hypothesis aligns stakeholders and summarizes the problem, proposed solution, and predicted outcome. CXL recommends the format:
If I apply this, then this behavioral change will happen (among this group) because of this reason.
6. Execute the test
Design, develop, and quality-assure prioritized A/B tests. Configure the A/B test in your testing tool, calculate the time needed (based on lift and sample size), and monitor performance throughout the run (typically 2–4 weeks).
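The “time needed” calculation can be sketched with the standard two-proportion sample-size approximation. This is a generic statistics formula, not CXL’s calculator; the default z-values assume 95% confidence and 80% power, and the example numbers are invented:

```python
import math

def sample_size_per_variant(baseline: float, lift: float,
                            alpha_z: float = 1.96,
                            power_z: float = 0.8416) -> int:
    """Approximate visitors needed per variant to detect a lift.

    baseline: current conversion rate (e.g. 0.03 for 3%)
    lift: relative lift to detect (e.g. 0.10 for +10%)
    Defaults assume 95% confidence and 80% power (z-values hardcoded).
    """
    p1 = baseline
    p2 = baseline * (1 + lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((alpha_z + power_z) ** 2 * variance / (p2 - p1) ** 2)

def weeks_to_run(n_per_variant: int, weekly_visitors: int,
                 variants: int = 2) -> float:
    """Rough test duration given total eligible weekly traffic."""
    return variants * n_per_variant / weekly_visitors

n = sample_size_per_variant(baseline=0.03, lift=0.10)
print(n, weeks_to_run(n, weekly_visitors=20000))
```

Small baselines and small lifts demand surprisingly large samples, which is why low-traffic sites are told to hold off on A/B testing.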
7. Analyze the results
Are the results statistically significant? What are the business implications? What else can you test after an inconclusive outcome?
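A standard way to answer the significance question for conversion rates is a two-proportion z-test. This is a textbook method sketched with the standard library, not the course’s specific analysis tool, and the example counts are made up:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for a difference in
    conversion rates, using the pooled-proportion normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: default 300/10000 vs. challenger 360/10000.
z, p = two_proportion_z(300, 10000, 360, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the chosen threshold (commonly 0.05) suggests the difference is unlikely to be noise; an inconclusive result still feeds the next round of hypotheses.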
8. Implement and learn
Implement conclusive “winners” right away and learn from “losers” and inconclusive results.
9. Repeat the cycle
The more tests a team does, the more they’ll be able to optimize their optimization process to:
- Test (or make) more effective changes (AKA: test things that matter and make an impact)
- Reduce the cost of optimization
- Improve the speed of experimentation
Week 3 experience
Ironically, the material makes the typos and usability issues on the course’s website more evident. I’ll take it as proof of learning, but I do wonder why the principles haven’t been applied to the CXL site…perhaps because the content lies in post-conversion territory?
A parting suggestion
The value of summary posts, in my experience, lies in the follow-up research. Google every term you don’t understand, using the outline of how the concepts relate to each other as a guide; if you’re new to the subject, voilà: a DIY “course.”