March 2023
In 2014, my longtime best friend and I started Clara Labs. Clara was a simple product: cc clara@yourcompany.com on your emails like you would a human assistant, sales rep, or recruiting coordinator -- and Clara would handle the email back-and-forth to schedule a meeting on your behalf, according to the intent and preferences in your message.
Our product was novel. People found it useful, paid a lot for it, talked about it, tweeted about it, and had a lot of fun using it. There's something universally enticing about interacting with something that seems intelligent. The early engagement and positive feedback were encouraging, but the sea of enthusiasm made it challenging to filter signal from noise in the company's most critical early days. We were overconfident in the degree to which we had found product-market fit and made many longer-term investments based on that conviction. In reality, there was still a lot more customer development and product refinement needed to reach product-market fit; we would have benefited from identifying this earlier.
This is not a new phenomenon; it's been happening in AI since ELIZA came out in 1966. ELIZA simulated a psychotherapist of the Rogerian school (in which the therapist often reflects the patient's words back to them) and used rules, dictated in a script, to respond to user inputs with non-directive questions. ELIZA generated so much fanfare and popular-culture response at the time that its inventor, Joseph Weizenbaum, was motivated to write a book on the response: "Computer Power and Human Reason" (a great read, btw).
Why does this matter in 2023 to founders building AI-enabled products on much more powerful underlying models? Because, independent of the quality of the intelligence and the utility of your product, humans are excited, charmed, and engaged by anything vaguely anthropomorphic or intelligent. They'll go out of their way to play with the product and use it outside the context of actually evaluating it as a solution for a job to be done. In the earliest days, this can make it challenging to solicit useful feedback and insights from users, because their experience likely doesn't reflect the underlying need your product fulfills.
This can be extremely frustrating as a founder or an investor trying to evaluate whether an early-stage AI product is on to something important. A founder's incentive is to leverage early momentum to fundraise... but it's often too early to truly understand the durability of early engagement and longer-term retention data.
No heuristic like this can be treated dogmatically. New and unimagined use cases for a technology can stem from pure novelty; a germ of substance can originate in play.
Ultimately, there's no shortcut to building a great product: you need to solve a real, enduring problem for your customers. Don't let sheer enthusiasm for the novelty of what you've created distract you from finding that.