
Success Comes After the Launch - The Manifold Approach to Iteration and Analysis

Sean Johnson
Founding Partner

Software is not a static thing, or at least it shouldn't be. The beauty of web and mobile applications is that you can rapidly iterate and improve them to increase user satisfaction and accomplish business goals.

Sadly, most companies don't take advantage of this. The first version is the only version. The opportunities for gathering robust, actionable user data go unleveraged. The product doesn't accomplish its intended goals, and often the project is abandoned.

Embracing an iterative, data-oriented process dramatically increases the odds of success with software projects. The following explains how Manifold leverages these ideas once a product goes live.

Walk Before You Run - Avoiding the Big Launch

While Manifold uses customer development prior to launch to uncover user needs and test early iterations of the solution, the first version of a product in production will still have areas for improvement - areas that are frankly impossible to uncover beforehand. These opportunities take several forms:

  • Usability issues that prevent users from accomplishing critical tasks.
  • Features that sound good on paper but rarely get used, due to lack of motivation or lack of awareness.
  • Features that were previously tabled or never identified, but that users think would increase the product's utility.

Because of this, Manifold advocates a series of releases to progressively wider circles rather than "the big launch." Ideally, a product gets the breathing room to identify and implement simple improvements. Similarly, any serious barriers to adoption are best identified at small scale.

As you solve these problems and make product improvements, you expand your user population - by admitting users who signed up for beta access, increasing paid acquisition efforts, or adding internal stakeholders.

Start With Early Evangelists

The staged launch process usually begins with a small launch to your most excited prospective users - people who indicated the highest level of interest during customer development.

While it might seem better to solve problems for the most skeptical users, in reality they're considerably less likely to use the app for any meaningful length of time and less likely to provide genuinely useful feedback. Listening to them can distort product priorities and sap the product team's motivation.

Early evangelists are told up front that their job is to use the software on an ongoing basis, provide feedback on what works and what doesn't, and advise on how to make it better.

Manifold works with the client during this initial phase to capture qualitative feedback from these users in the form of micro surveys baked into the experience, more formal surveys, and face-to-face usability and customer development interviews.

At the same time, Manifold leverages quantitative data about actual user behavior. Often what people say and what they actually do are different. Having analytics tools that can track actions on a per-user basis lets us see how individuals are actually using the software - not just an average of what the typical user is doing.
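
To make this concrete, here is a minimal sketch of what per-user event tracking can look like. The AnalyticsClient class and its track method are illustrative stand-ins, not any specific vendor's API; in practice you'd use your analytics provider's SDK.

```python
# Illustrative sketch of per-user event tracking (hypothetical client,
# not a specific vendor's API). The point: every event carries a stable
# user ID so behavior can be inspected per individual, not just in aggregate.
import time

class AnalyticsClient:
    """Toy in-memory stand-in for a real analytics service."""
    def __init__(self):
        self.events = []

    def track(self, user_id, event, properties=None):
        self.events.append({
            "user_id": user_id,
            "event": event,
            "properties": properties or {},
            "timestamp": time.time(),
        })

    def events_for(self, user_id):
        """Drill down to a single user's activity."""
        return [e for e in self.events if e["user_id"] == user_id]

analytics = AnalyticsClient()
analytics.track("user_42", "signed_up", {"source": "beta_invite"})
analytics.track("user_42", "created_project", {"template": "blank"})

# See exactly what one individual did, not what the "average" user did.
for e in analytics.events_for("user_42"):
    print(e["event"], e["properties"])
```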

Collect The Right Data

It's imperative that the product team is armed with the right data to make informed product decisions. Avoiding "vanity metrics" like signups and page views is immensely important - these numbers might make the team feel good and sound good to internal stakeholders, but they don't give you actionable information for improving the product.

The right data depends on the type of product, but tends to have the following characteristics:

Not Averaged

Expressing data in averages is dangerous, because people tend to interpret such data as if it followed a normal distribution or bell curve. The reality is usually some form of power-law curve: many people will use the product once and never again, while some users' level of adoption is off the charts.

Real-time views allow you to see how specific users are engaging with your product. The goal is to identify who those people are, what makes them tick, and what they did or thought differently that triggered the "aha moment" where they internalized the product's benefits and made it a habit. This information is immensely valuable when architecting the signup, onboarding, and retention experiences to maximize the likelihood that more users reach that moment.
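
To make the contrast concrete, here's a minimal sketch using made-up per-user session counts: the mean is pulled upward by a few heavy users while the median reflects the typical user, and sorting by usage surfaces the outliers worth interviewing. All names and numbers are hypothetical.

```python
# Sketch: why averages mislead with power-law usage. Session counts per
# user are illustrative only.
from statistics import mean, median

sessions_per_user = [1, 1, 1, 1, 2, 1, 1, 3, 1, 58, 1, 2, 44, 1, 1]

print(f"mean:   {mean(sessions_per_user):.1f}")   # pulled up by heavy users
print(f"median: {median(sessions_per_user)}")     # what the typical user does

# Surface the outliers worth interviewing: who are they, and what did
# they do differently before their "aha moment"?
users = {f"user_{i}": n for i, n in enumerate(sessions_per_user)}
power_users = sorted(users.items(), key=lambda kv: kv[1], reverse=True)[:3]
print("power users:", power_users)
```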

Action-Based

As previously discussed, data should be tracked at the individual action level, particularly early on. While aggregate numbers are useful, being able to drill down and see what Dave, specifically, is doing on your site is extremely helpful.

Action-based metrics also show you how people are using specific features of your site, rather than simply tracking page views. You want to see what percentage of users accomplished each task within an interface, not simply whether they visited it.
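
A rough sketch of that arithmetic: given action-level events (the event names here are hypothetical), compute what percentage of users who reached an interface actually completed each task on it.

```python
# Sketch of action-based funnel math: what percentage of users who viewed
# a screen actually completed each task there. Event names are made up.
from collections import defaultdict

events = [  # (user_id, event)
    ("a", "viewed_editor"), ("a", "created_doc"), ("a", "shared_doc"),
    ("b", "viewed_editor"), ("b", "created_doc"),
    ("c", "viewed_editor"),
    ("d", "viewed_editor"), ("d", "created_doc"),
]

users_by_event = defaultdict(set)
for user, event in events:
    users_by_event[event].add(user)

visited = users_by_event["viewed_editor"]
for task in ("created_doc", "shared_doc"):
    done = users_by_event[task] & visited
    print(f"{task}: {len(done) / len(visited):.0%} of visitors")
# created_doc: 75% of visitors, shared_doc: 25% of visitors
```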

Focused on Retention

Retention is ultimately what matters in determining product adoption. Good product teams make retention reports the north star for driving product choices. Retention means different things to different organizations, but the focus usually revolves around the percentage of users who continue to use the product one day later, one week later, one month later, and so on.

Tracking retention in aggregate is helpful, but tracking by cohort is even more useful. If you're iterating on your product, the product users experience one month from now will be fundamentally different from the one they use today. As such, retention patterns should differ for new users of the revised product. Breaking user populations out by week, month, or release tells you whether your product enhancements are working.
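
Here's a small sketch of the cohort math, with purely illustrative data: group users by signup week, then measure what share of each cohort was still active in each subsequent week.

```python
# Sketch of weekly cohort retention: group users by signup week, then ask
# what fraction of each cohort was still active N weeks later. Data shapes
# are illustrative.
from collections import defaultdict

signup_week = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 1}
active_weeks = {  # weeks in which each user did something meaningful
    "a": {0, 1, 2}, "b": {0},
    "c": {1, 2, 3}, "d": {1}, "e": {1, 2},
}

cohorts = defaultdict(set)
for user, week in signup_week.items():
    cohorts[week].add(user)

for week, members in sorted(cohorts.items()):
    # retention at offset n = share of the cohort active n weeks after signup
    row = []
    for offset in range(3):
        retained = [u for u in members if week + offset in active_weeks[u]]
        row.append(f"{len(retained) / len(members):.0%}")
    print(f"cohort week {week}: {row}")
# Improving rows for newer cohorts suggest product changes are working.
```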

Organize Product Optimization Sprints

The purpose of capturing this data is, of course, to identify areas for improvement. While some share of engineering effort can go to building new features, it's rarely the case that adding features is the difference between success and failure.

If the core experience remains flawed, bolting on additional features won't drive adoption. This is particularly true for mobile applications. Manifold advocates a 70/30 split: 70% of the effort goes toward improving the core experience, and 30% goes to new product efforts.

Optimization sprints are usually organized around improving a single critical metric, most often in the activation or retention phases of the user lifecycle. Organizing the team around a single metric increases the speed of iteration, and ideas for improvement are more rapidly surfaced, connected, and implemented.

Optimization sprints (as opposed to new-feature sprints) can run anywhere from one to four weeks depending on complexity. iOS apps are subject to a mandatory one-week review process with the App Store. This can be mitigated through mobile A/B testing (though the options are limited) or through web views, which take a performance hit and have limited access to the phone's functionality, but go live for users instantly without being subject to review.
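
For the A/B testing piece, one common pattern is deterministic, hash-based variant assignment, so each user consistently sees the same variant across sessions without a server round trip at decision time. This is a generic sketch, not any specific mobile testing SDK, and the experiment name is made up.

```python
# Sketch of deterministic A/B assignment: hash the user ID together with
# the experiment name so each user always lands in the same variant.
import hashlib

def variant(user_id: str, experiment: str, variants=("control", "treatment")):
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(variant("user_42", "onboarding_copy_v2"))  # stable across sessions
```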

Commit to a Cadence of Rapid Iteration

What happens post-launch is arguably more important than what happens beforehand. By gathering data and using it to inform your development process, you maximize the likelihood of getting to a place where your product is successful. Whether you have a product already or you're looking to build a new one, Manifold's digital product development team can help you turn it into something amazing.