Natural Experiments in Product-Market Fit: How to know you don't have it.
I attended the most recent Startup2Startup event and after the presentation, the discussion turned to how one might define Product-Market Fit and what might serve as a proxy for Product-Market Fit, given various types of business models.
The Sean Ellis 40% rule-of-thumb was quickly invoked as were other ideas. However, I thought it worthwhile to share one insight that came from an experienced start-up entrepreneur at the table. While we were talking about triangulating on the various signals available to an entrepreneur as to what constitutes Product-Market Fit, he recounted a story -- really an accidental natural experiment -- on how he unequivocally learned his start-up hadn't achieved Product-Market Fit.
To wit, his site had gone down for a few hours, and he hadn't known about it. In the interim, there had been nothing but silence. None of his users had squawked or made it publicly known that the site was down and that they were angry/frustrated/furious/going-to-switch-providers/fed-up-with-this, etc., etc.
This lack of frustration/noise is a data point. In this case, it meant his start-up still had a ways to go in iterating toward Product-Market Fit.
As a contrast, we might choose to look at what happens when Twitter goes down.
So, for the more intrepid of you out there, perhaps try "accidentally unplugging" your servers and see what happens. (Clearly, this has significant risks, such as alienating users, but it may be a useful signal for knowing when you don't have Product-Market Fit, if you were wondering.)
BTW, I believe Dave McClure has advocated a very similar idea with regard to features. If I recall correctly, he suggests removing features from a web app and waiting to hear if users complain loudly. The intensity of the complaint is likely correlated with the usefulness of the feature.