# The hard thing about easy things

5/10/26

My brother visited me in San Francisco recently. He and his friends – first-year medical students – were invited by the [American College of Physicians](https://www.acponline.org/) to talk about a study they'd conducted. They'd gone deep on AI use in med school, the absence of clear standards, and the anxiety the combination produces. I was just happy to host them: I'd just moved to SF, having started a new (SWE) role that same Monday.

![[bcm-in-sf.png|Baylor College of Medicine takes Mission Bay, SF.|700]]

Having _just_ come off of my time at [[Ramp]] – pioneer not only of best-in-class finance software but also of AI-age engineering via [[Ramp#^9f3029|Inspect]], [Labs](https://labs.ramp.com/), etc. – I didn't think too hard about their premise. Sure, it made sense. Medicine lags on technology, and is still "figuring out" what to do with the latest. Software, on the other hand, has found its *groove*, and is chugging forward. Tokenmaxxing and the like.

That's certainly how it felt at Ramp. Claude Code "at home" has also been quite fun: it has enabled me to work with my closest non-technical friends and help them build their most ambitious ideas. And in these cases – solo, friend-fractional-CTO as I am – I can really just rip, rip, rip, and if it works, it works! Importantly: no *social* dimension to my code.

![[Pasted image 20260510152858.png|]]

**It's different at a new job!** I took for _granted_ how fluently and deftly I could surf the tide with these tools at Ramp: I'd either written so many of the legos by hand, or had spent time in incidents and projects, manually, _slowly_ tracing through the code to figure out how it all works. And so an assistant that could take my crisp, informed expression of intent – and produce the precise output I was looking for – genuinely felt like magic. And it was, and is, that!

At a new job – especially one at an early-stage company where foundations are still being laid – there are all sorts of differences to the experience. I'm envious that my brother and his peers get to think more formally about it.

## AI-forward cos are "big cos" on day one

Alice is a micro-manager, Bob is a slop cannon, and I'm the perfect middle ground. Or so the confident among us would say. Personally, I go from feeling like I'm gripping the steering wheel of a self-driving Tesla too tightly, to feeling like I'm hands-free in a Prius.

%% ![[steve-and-krish.png|]] %%

The social dimension is crucial: through no fault of any individual, the environment is now more adversarial.

1. There aren't strong idioms or standards re: workflow. You think I'm a slop cannon; I think you're a micro-manager. We both produce results, so it's not easy to figure out who's right.
2. You know my stuff was co-written with an agent; I know yours was. Which parts, neither of us knows. How much of the overarching design? Who even remembers.

This is a naturally "low-trust" environment in the narrow scope of the software development lifecycle (everyone *gets along* great and is having a ton of fun, of course). But we're all intern managers, tasked with the combination of:

* Getting results from our respective teams
* Defending these teams when they've earned our defense
* Absorbing the blame for these teams when they screw up

Management is more unpleasant when you're not defending humans, and when the blame you're absorbing is not on *behalf* of humans. Leadership is more fun.
You have a vision, and you're comfortable going angry-Steve-Jobs on your team until they produce the Apple iMac. The Apple iMac shows up, and it's all worth it. But *management* – which is a distinct discipline, underestimated at your own peril – is no fun when you take out the *people*.

![[angry-steve.png|]]

Management – in _contrast_ to leadership – is a job that emerges past a certain degree of organization-building. Just as [[AI code is legacy code from day one]]: AI companies are "big companies" from day one. I suspect that this will come to entail the sort of primal dynamics that make "big-cos" political. Lots of defense to play, lots of blame to absorb. Lots of triage, orchestration, tradeoffs, prioritization. You now become a big-tech engineering manager the moment your engineering org scales from one to two.

So that's one challenge in our brave new world.

## Resisting (or accepting) the temptation to design with AI

You manage a team of interns (or staff engineers, whatever you believe re: their abilities). If you're working in a codebase that has only ever existed post-Claude Code (etc.), there is a risk that these agents have more tenure than you. They wrote a lot of this code – more than you – and you're expected to measure its quality against outcomes & tests rather than by the letter. You're also expected – with a team as powerful & well-staffed as the one you have – to move everything along faster. Even if not explicitly: everyone *else* is moving that fast, so you have to keep up.

The agents are (really, *[[AI code is legacy code from day one#^b32d89|were]]*) the SMEs, and you're continuously using them as a resource to *catch up*. You simultaneously want to *design* independently of them – having them be executors of your design – but also *consult* with them, given that they know the machinery or can internalize it very quickly. There's a tension between those things: while *consulting* with Claude, is mere *exposure* to Claude's initial opinion enough to hijack your design?

![[dario-and-krish.png|]]

How much do you really trust that consultation?

1. If you give it high-level context re: the business problem, do you risk being filled with "empty calories"$^1$ re: a satisfactory solution?
2. If you limit it to only the factual sub-components of the problem, do you stunt its ability to give you the best – right – advice?

$^1$ i.e. enough to "fill you up" and prevent you from thinking more deeply about the problem (you've "delegated" it), but not enough to actually, durably *solve* the problem the way you would have.

It's hard to avoid this ambient feeling that you're the maintainer of [[AI code is legacy code from day one#^ab2d9a|someone else's]] code, all of the time.

---

You'll notice I've volunteered ~zero real answers or solutions. I am quite optimistic, though, that the answers will emerge, tooling will form, and we'll be just fine! I just happen to think that – at industrial scale – this actually gets "harder" in subtler ways before it ends up becoming truly, reliably easier: and that it's sometimes tricky to admit that anything *got* harder.
*Special gratitude to my engineering colleagues at Natural ([Eric W](https://www.linkedin.com/in/ericww2/), [Walt](https://www.natural.co/blog/natural-moguls), [Klaire](https://www.natural.co/blog/why-klaire-joined-natural), [Eric J](https://www.natural.co/blog/why-eric-joined-natural), [Bhargav](https://www.natural.co/blog/why-bhargav-joined-natural), [Kendall](https://www.natural.co/blog/why-kendall-joined-natural)) and at Ramp (Zack, Matt, Alex, Gian, Arnab, others) for the many conversations whose breadcrumbs have landed in this piece.*

%%
There is an [[Time & space arbitrages#^d7fd0b|arbitrage]] in being religious about writing your own tech specs, for example, to the furthest extent. *Ensuring* that your inexperienced interns are not simultaneously *tenured* with the work product, over you. You don't want them to explain *their* code to you; you want them to write *your* code. The slop cannons would tell me "for now," and they very well may be right.
%%