I am not used to writing short articles, and perhaps you are not expecting one either. Let’s both pretend that (a) I can write succinctly, and (b) I can make it worth the read.
So AI, huh. Insanely scary, but also as dumb as my cat sometimes. Like everything nuanced, its abilities lie on a spectrum (at least for now). One end is where the skeptics lay eggs, and the far end is where the hypeists believe AI will soon answer “Why is 42 the answer to life, the universe, and everything?”
As a common man, I stand somewhere in the middle of that spectrum: truly skeptical about some parts of it, truly mind-boggled by many others. This is a brief story of one use case that keeps on giving for me, day in and day out.
Building software now, I’ve gained a weird superpower, one that seemed obvious on day one of GPT’s release but was not so easily proven when moving from theory to practice. The superpower is that I went from carefully picking one path at a time to exploring five paths at once. It feels like going from high-stakes betting to low-stakes experimentation. AI gave me what I’d call “cheap optionality” (the term exists somewhere, but I forget where I first read it). It is the ability to explore the problem and solution spaces without committing to any single path, and to do so cheaply. Let me explain what I mean.
Before, making a technical decision or solving a problem felt like standing at the base of a massive decision tree, where every fork from root to leaf required careful pondering (and this kind of pondering hurt me the most, because I tend to bias for action). I’d spend enormous mental energy trying to predict the optimal path: which branch, which sub-branch, all the way down through diminishing tributaries to a single solution. It was exhausting, because choosing wrong often meant backtracking through the entire flow chart, losing hours or days retracing my steps. Or worse, I’d commit to the wrong path because it was too late to turn back, telling myself I could make it work now and fix it later.
When I worked at Intercom, I learned to counter this via brutal scoping, but even that was best-effort and often ended badly.
Now, with LLMs, I can send scouts down multiple branches simultaneously. I can explore five different paths to the edge leaves, compare their fruit, and either commit to the best one or synthesize insights from several. The tree hasn’t changed at all. It’s just as vast and branching, but I’m no longer paralyzed at each fork, agonizing over irreversible choices or crying at how my bias for action and impatience hurt me. Basically, what happened is that the cost of exploration has collapsed, and with it, the anxiety of having to ‘get it right’ on the first try. Finally, I can traverse as much of the tree as I can manage, and as much of it in parallel as I can handle. The bottleneck now shows up at the end of the day, in the form of my brain actually being overloaded.
Turns out there is an actual concept in computer science that describes this: design space exploration.
Take the “contact matching” feature I built recently: matching HubSpot contacts against a third-party system that only stores phone numbers. The goal was to reliably match each HubSpot contact with its third-party equivalent, and the only reliable identifier the two systems share is the phone number.
At the root, I faced the fundamental question (which I know from experience is a pain in the ass): what’s the contact matching strategy? Do I normalize phone numbers first or match raw? Each choice branched into sub-questions. If I normalize, which format? E.164? National? What about extensions and country codes? If I match raw, how do I handle the inevitable mismatches from different formatting conventions?
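To make that branch concrete, here is a minimal sketch of the normalize-first option, assuming Python and the phonenumbers library (a port of Google’s libphonenumber); the helper name is mine, not code from the actual feature:

```python
import phonenumbers  # pip install phonenumbers (port of Google's libphonenumber)

def to_e164(raw: str, default_region: str = "US") -> str | None:
    """Normalize a raw phone string to E.164, or return None if it won't parse."""
    try:
        parsed = phonenumbers.parse(raw, default_region)
    except phonenumbers.NumberParseException:
        return None
    if not phonenumbers.is_valid_number(parsed):
        return None
    # E.164 strips formatting quirks: "(415) 555-2671" -> "+14155552671"
    return phonenumbers.format_number(parsed, phonenumbers.PhoneNumberFormat.E164)

print(to_e164("(415) 555-2671"))    # +14155552671
print(to_e164("+44 20 7946 0958"))  # +442079460958
print(to_e164("not a phone"))       # None
```

Note that E.164 silently drops extensions, which is exactly the kind of edge case those sub-questions were about.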
Then came the next layer: collision handling. What happens when one phone number maps to multiple HubSpot contacts? Do I match on most recent activity? Primary vs. additional phone fields? Create a manual review queue? Each approach spawned its own complexity tree: confidence scoring systems, fallback hierarchies, data quality checks.
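And here is what one leaf of that collision tree might look like: a most-recent-activity tiebreak with a manual-review fallback. Pure Python, with a contact shape and threshold that are illustrative assumptions rather than the real data model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical contact shape; a real HubSpot record has far more fields.
@dataclass
class Contact:
    contact_id: str
    last_activity: datetime | None  # None when there is no activity data

def resolve_collision(candidates: list[Contact]) -> Contact | None:
    """Pick a winner, or return None to signal 'send to the manual review queue'."""
    if len(candidates) == 1:
        return candidates[0]
    dated = sorted(
        (c for c in candidates if c.last_activity is not None),
        key=lambda c: c.last_activity,
        reverse=True,
    )
    if not dated:
        return None  # nothing to rank on -> manual review
    # If the top two are within a day of each other, don't guess automatically.
    if len(dated) > 1 and dated[0].last_activity - dated[1].last_activity < timedelta(days=1):
        return None
    return dated[0]
```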
Before LLMs, I’d spend hours paralyzed at each fork, trying to predict which path would dead-end three branches deep. Choosing ‘normalize first’ felt irreversible. If I later discovered I needed raw matching for certain edge cases, I’d have to backtrack through the entire decision canopy, rewriting logic and tests and revisiting product decisions I’d made along the way.
It is different now. I send the LLM scouts down multiple branches simultaneously (I like to make them work; it is not cruel). I’ll start three LLMs doing deep research on the contact matching literature: all the algorithms, all the ways people have made it work, and every complaint users have posted online about such systems. Then I’d have one conversation exploring phone normalization libraries and their edge cases, another prototyping collision resolution with a fuzzy matching approach, and a third investigating HubSpot’s API limitations around phone field queries. Within a couple of hours, I’m comparing actual implementations at the leaf nodes rather than anxiously theorizing at the trunk. And that’s before I take those findings, let Cursor go wild implementing solutions, and review the results.
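For flavor, the fuzzy branch one of those scouts prototyped could be as simple as comparing trailing digits when strict E.164 equality fails; the 7-digit cutoff below is an illustrative assumption, not a recommendation:

```python
import re

def fuzzy_phone_match(a: str, b: str, min_overlap: int = 7) -> bool:
    """Match when the last min_overlap digits agree; this survives most
    country-code and formatting noise."""
    da, db = re.sub(r"\D", "", a), re.sub(r"\D", "", b)
    if len(da) < min_overlap or len(db) < min_overlap:
        return False
    return da[-min_overlap:] == db[-min_overlap:]

print(fuzzy_phone_match("415-555-2671", "+1 (415) 555 2671"))  # True
print(fuzzy_phone_match("415-555-2671", "415-555-9999"))       # False
```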
It feels as if I’ve gained the superpower of time travel: I keep jumping between those tree branches and sub-sub-sub-branches at extreme speed, seeing all the possibilities, and picking the right one.

This truly fascinates me because the tree itself never changed. What changed is that I can now walk five paths, see which ones look most likely to work, and synthesize the best hybrid approach instead of betting everything on one early decision.