
Anna Holopainen

SaaS reads this week: Figma's data-driven experiments, The experimentation gap, Kahoot business model

Published about 2 years ago • 6 min read

Hey friend πŸ‘‹

Here are some of my best reads from the past couple of weeks. Let's get to it πŸ‘‡

  1. How data shaped a new comments experience at Figma
  2. The Experimentation Gap: How statistical decision making has evolved, yet most companies haven't kept up
  3. How does Kahoot make money


#1: How data shaped a new comments experience at Figma

A few years ago, Figma found that engaging the entire teamβ€”editors and viewers alikeβ€”is key to helping a team grow with Figma. While the data validated that comments are a strong indicator of team growth and engagement (teams that collaborate in their first month are 1.75x more likely to retain and 6.5x more likely to become a customer), comments weren't widely used.

Knowing that giving and receiving feedback is central to design work, they decided to run a series of experiments to understand why the insights from the data didn't match the user behavior they observed.

Experiment 1: Validating that discoverability is key

  • Hypothesis (backed by a series of research sessions): Comments provided value, but users struggled to find them within the product (the entry point was an icon in the upper-left corner of the editor). Was low discoverability inhibiting comments usage?
  • Experiment: They ran an experiment that prompted users to leave a comment, targeting developers with view-only access to a file. These users rely on close collaboration with designers in Figma but were historically the least likely to interact with comments.
  • Results: After two weeks of experimentation at a 50/50 split, comment creation in the test group was 45% higherβ€”without affecting the rate at which users returned to comments the following week. Simply making comments easier to discover had a substantial impact on comments usage.
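As a side note, checking whether a lift like this is statistically meaningful usually comes down to a two-proportion z-test. A minimal sketch with entirely made-up numbers (the article doesn't disclose Figma's sample sizes or baseline rates):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two proportions,
    e.g. the share of users creating a comment in control vs. test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 10% of 5,000 control users vs. 14.5% of 5,000 test users
# (a 45% relative lift, like the one reported)
z, p = two_proportion_ztest(500, 5000, 725, 5000)
```

With samples that size, a lift that large would be far beyond the usual p < 0.05 threshold.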

Experiment 2: Making comments even more visible to encourage users

  • After validating that comments were valuable but hard to find, they set out to help the product team find a more discoverable solution.
  • Hypothesis: Moving the entry point for comments from the left to the right side of the menu bar would make it easier to find. While tools on the left offer core creation and canvas functionality for designers, they hypothesized that cross-functional partners might find themselves more frequently using the right side of the editor, where many of the collaboration and viewing features live.
  • Experiment: They ran the experiment with a 50/50 split on new users and tracked the percent of these users discovering comments within seven days of signing up.
  • Results: A 20% drop in comments discoverability across the board. This experiment was a helpful reminder that small changes can have a meaningful impact on the user experience β€” and in fact, we should expect most of our product hypotheses to be wrong.
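For context, deciding how long to run an experiment like this usually starts with a sample-size calculation: given a baseline rate and the smallest change you care about detecting, how many users does each group need? A rough sketch using the standard normal approximation, with a purely illustrative baseline discoverability rate:

```python
import math

def sample_size_per_group(p_baseline, relative_change):
    """Approximate users needed per group for a two-proportion test
    at alpha = 0.05 (two-sided) and 80% power."""
    z_alpha, z_beta = 1.96, 0.84  # standard normal quantiles
    p1 = p_baseline
    p2 = p_baseline * (1 + relative_change)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 20% relative drop from an assumed 10% baseline discovery rate
n = sample_size_per_group(0.10, -0.20)
```

The smaller the effect you want to detect, the more users you need, which is why low-traffic teams struggle to run many experiments.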

After continued experimentation, they finally launched changes that improved discoverability and how users manage, sort, and interact with comments.

Read article here


#2: The Experimentation Gap: How statistical decision making has evolved, yet most companies haven't kept up

Over the past two decades, controlled experiments have become a core part of how many of the most successful tech companies, from Airbnb to Netflix, develop their products. In reality, most other companies cannot realistically achieve an experimentation program anywhere close to the best ones. Most companies are stuck with manual effort, bottlenecked processes, old-school statistical techniques, poor tooling, cultural challenges – and a general inability to run more than a few experiments a month.

The evolution of experimenting: From marketing tests to company-level core processes

  • In the early days, most experiments were simple, client-side optimizations and A/B tests on color, font, and copy on websites – such as Google's most famous early experiment: testing 40 shades of blue on a link in the search results, with an outcome that famously led to $200M more a year in ad revenue. Most experimentation tools such as Optimizely were architected to suit this world.
  • Since then, a lot has changed: 1) Data infrastructure has evolved, 2) Experimentation needs have matured and are more likely to be tied to core product experiences than static landing pages, and 3) As a result, experimentation is no longer just a marketing thing: it requires data scientists, data engineers, and analytics engineers, and a systematic approach deeply embedded in higher-level product development processes.
  • A few problems arise from this change: tooling hasn't kept pace, experimentation culture is not mature enough, and statistical techniques are sometimes inadequate.

Problem 1: Tooling hasn't kept pace

  • Unfortunately, tooling in this space has not kept pace at all. Teams end up paying fortunes for experimentation tools that come with minimal monitoring, logging, and debuggability. Data teams become a bottleneck, since all experimental design (e.g., calculating runtime and power, metrics modeling) and experiment execution (rollout, monitoring, analysis, reporting) must go through them. As a result, most companies end up running no more than 1–5 experiments a month, compared to the hundreds per day at the leading companies.
  • A modern experimentation platform should look something like this:

  1. A self-service UI that allows individuals to define, configure, launch, and track experiments
  2. Statistical assignment: assigning experimental groups against the right set of dimensions
  3. Metrics modeling
  4. Data pipelining and orchestration: enriching and automated computation of both metadata and statistical analysis
  5. Monitoring and debugging: monitors experimental metadata and identifies anomalies or unexpected outcomes
  6. Scheduling, rollout, and experiment management (e.g., incremental rollouts, rollbacks, restarts due to bugs, conflicting experiments)
  7. Experiment review and guardrails to ensure experiments consistently adhere to certain standards
  8. Investigation engine: root-cause investigation of why an experiment succeeded, failed, or had inconsistent results
  9. Knowledge capture and reporting: a linkable live repository of the full experimental structure, history, and results
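To make the "statistical assignment" piece concrete: most platforms bucket users deterministically by hashing the experiment ID together with the unit ID, so a user's assignment is sticky across sessions and independent across experiments. A minimal sketch (the function and experiment names are made up):

```python
import hashlib

def assign_variant(experiment_id: str, unit_id: str,
                   variants=("control", "treatment")):
    """Deterministically bucket a unit (user, team, ...) into a variant.
    Hashing experiment_id together with unit_id makes assignment sticky
    for a user and uncorrelated between different experiments."""
    digest = hashlib.sha256(f"{experiment_id}:{unit_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 1000  # 1,000 fine-grained buckets
    # 50/50 split: buckets 0-499 -> control, 500-999 -> treatment
    return variants[0] if bucket < 500 else variants[1]

# The same user always lands in the same group for a given experiment
same = assign_variant("comments_prompt", "user_42") == assign_variant("comments_prompt", "user_42")
```

Because assignment is a pure function of the IDs, no lookup table is needed, and any service can compute a user's variant locally.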

Problem 2: Statistical techniques are inadequate

  • While you can get quite far with basic testing, some startups are seeing immense value generated from more advanced approaches to experimentation – like quasi-experiments, variance reduction with outlier capping, or advanced assignment strategies.
  • For example, to achieve perfect randomization of users (i.e., the only aggregate difference between treatment groups is the experimental condition), Netflix's experimentation platform natively segments new vs. existing users. They run almost all experiments across these cohorts independently. Some other companies run experiments randomized not against all users but only against the active users of the feature being tested.
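"Variance reduction with outlier capping" is easier to see in code: heavy-tailed metrics like revenue per user inflate variance and force longer experiments, so a common trick is to winsorize (cap) values at a high percentile before analysis. A quick illustration on simulated data:

```python
import random
import statistics

def cap_outliers(values, pct=0.99):
    """Winsorize: cap values above the given percentile instead of dropping them."""
    cutoff = sorted(values)[int(pct * len(values)) - 1]
    return [min(v, cutoff) for v in values]

random.seed(0)
# Simulated heavy-tailed metric, e.g. revenue per user:
# mostly small values with a few enormous outliers
raw = [random.paretovariate(1.5) for _ in range(10_000)]
capped = cap_outliers(raw)
# Capping shrinks the variance, which tightens confidence intervals
# and lets the same experiment reach significance with fewer users
```

Capping (rather than deleting) outliers keeps every user in the analysis while limiting how much any single extreme value can swing the result.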

Problem 3: Experimentation culture is not mature enough

  • Cultural problems tied to experimentation can surface in many ways: leaders and executives may be used to an environment where they can make gut-based decisions, and employees without a statistics background may not understand the utility of experimentation or may feel overwhelmed by unfamiliar statistical concepts.
  • Tooling built on the right principles can provide the foundation for a culture of experimentation: being able to launch experiments (learning by doing), see experiment history (including hypothesis, experimental design, and results), leave comments, share reports in collaboration tools like Slack – all these reinforce experimental literacy and allow peer-to-peer education. Simple self-serve UI and guardrails reduce the perceived barrier to entry and make it easier for people to engage. People will slowly become familiar and comfortable with speaking about features and launches in the context of statistical testing.
  • You'll also want to celebrate the process of rigorous hypothesis testing and the learnings that come from it, regardless of the outcome of any specific test (a win can also be learning that a feature is not a good idea to launch). In addition, top-down support from leaders is critical to cultivating an experimental culture.

Read article here


#3: How does Kahoot make money

Kahoot is a game-based social learning platform, founded in 2012, used in educational and other institutions. When playing, students gather around a common screen with their devices and connect to a game with a specific PIN. They then use their devices to answer questions, with higher points awarded to students who give a correct answer quickly.

The game is used in over 200 countries and in 87% of the world's top 500 universities. Revenue for 2021 was forecast to be in the order of $90–100 million.

Like many EdTech startups, Kahoot has likely found it hard to sell directly to its core users – teachers and students, who often start using the product independently and without a top-down sales process – so the platform is mainly free for players. But how do they make money, then?

Monetization: Free for the players, paid for commercial and educational organizations with particular use cases

The company does charge educational institutions and corporate clients for access to the game via multiple subscription plans:

  • Kahoot! 360 allows businesses to make training, presentations, events, and employee collaboration more fun and engaging. Value features include word clouds, customizable branding, and the ability to track player progress and participation.
  • Kahoot! for schools: In an educational setting, Kahoot! empowers teachers to motivate their students, increase class participation, and assess learning. Value features include unlimited teacher groups, lesson plans, apps, up to 2,000 players per game, and advanced tools for teachers and administrators, including school branding, brainstorming, and learning progress reports.
  • Licensing: The company also makes money by licensing its content to third parties as part of its Kahoot! Publisher program. This allows publishers, brands, and other content creators to incorporate Kahoot games into their respective platforms and gamify their content.

Read article here


That's it for this week. I'd love to know what your favorite read was!


Cheers, Anna


P.S. Did someone forward this email to you? You can subscribe to weekly updates here.

P.P.S. Did you know you can now find all previous summaries here?


P.P.P.S. Connect with me on LinkedIn and Twitter! (Warning: I mainly post memes and rants about marketing, but if that's your jam, let's be friends. I'm also Head of Growth at an EdTech startup and a freelance consultant for B2B SaaS startups, but honestly, I'm more into shitposting than personal branding.)

Anna Holopainen

Spending way too much time browsing through mediocre business articles? I got you. Subscribe to get 1-5 curated SaaS, growth, and marketing resources directly to your inbox every Friday. I'm Head of Growth at an EdTech platform startup, freelance consultant for B2B SaaS businesses β€” and a serious learning junkie. See the full resource library at saasreads.com.
