Three Things I Had to Learn the Slow Way

I want to talk about three mistakes I keep making in different forms, and what I’ve slowly figured out about each one.
Asking for Help Is Not a Weakness
Early in my career, I would sit with a problem for hours, sometimes days, before asking anyone for help. I was afraid of looking dumb. I thought senior engineers would judge me for not figuring it out myself. So I’d spiral. Read source code, dig through GitHub issues, open tabs until my browser slowed down, construct elaborate mental models of systems I barely understood.
The thing is, nobody thought I was dumb. That was entirely in my head.
I remember one situation where I was overthinking a system that needed to process some data. I was sketching out architectures, considering edge cases, researching approaches. After days of this, I finally talked to a teammate who looked at the problem and said something like: “Why don’t you just ingest the events?” That was it. A simple pipeline. Ten minutes to explain, an afternoon to build. I’d spent days avoiding it because it felt too simple, like the answer couldn’t possibly be that straightforward.
That pattern repeated. The cost was my time, obviously, but also the anxiety of sitting with a problem alone, the pressure of feeling behind, and missing out on context a teammate already had. The senior engineers I respect most ask questions constantly. Not because they can’t figure things out, but because they know someone else’s five-minute explanation beats their own two-day investigation.
I’ve gotten better at this. Not great. Better. My rule now: if I’ve been stuck for more than an hour without making concrete progress (not just reading, but actually moving forward), I ask someone. The worst that happens is they point me to a doc I missed. The best is they reframe the problem entirely.
Simple Is Not the Same as Easy
Rich Hickey gave a talk called “Simple Made Easy” that I think about a lot. His core point: simple means “not interleaved.” Easy means “familiar” or “close at hand.” They’re not the same thing. Code that looks easy (short, clever, concise) is often complex because it tangles multiple concerns together. And simple code often looks verbose or boring because it keeps things separate.
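To make the distinction concrete, here’s a hypothetical sketch (the function and field names are mine, not from any real project): a “clever” version that interleaves validation, normalization, and persistence in one expression, next to a simple version that keeps each concern separate.

```javascript
// "Easy" to write, but complex: validation, normalization, and
// persistence are interleaved in a single clever expression.
const saveUsersClever = (users, db) =>
  users.filter(u => u.email && u.email.includes("@"))
       .map(u => db.insert({ ...u, email: u.email.toLowerCase() }));

// Simple: each concern is its own named, independently testable step.
function isValidUser(user) {
  return Boolean(user.email) && user.email.includes("@");
}

function normalizeUser(user) {
  return { ...user, email: user.email.toLowerCase() };
}

function saveUsersSimple(users, db) {
  const valid = users.filter(isValidUser);
  return valid.map(normalizeUser).map(u => db.insert(u));
}
```

The simple version is longer, but you can test `isValidUser` without a database, and a future reader never has to untangle three concerns from one chain.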
I used to equate fewer lines with better code. More abstraction, more cleverness, more “elegant” solutions. Steve McConnell’s Code Complete has data on this: defect density increases with code volume, but it also increases with unnecessary complexity. The goal isn’t less code. It’s less interleaving.
I got burned on this. I was working on a project and got deep into building a component that was well-crafted. Clean separation of concerns, good test coverage, nice API. The problem was it wasn’t solving the actual problem we needed to solve. It was adjacent to the solution. Related, but not the thing. I’d built a polished piece of infrastructure that the project didn’t need, and now we had more surface area to maintain.
You can write good code that shouldn’t exist. Every module you add is a module someone has to understand, update, and debug later. The simplest solution isn’t always the shortest one. It’s the one with the fewest concepts a future reader needs to hold in their head.
Sandi Metz has practical rules I find useful: classes under 100 lines, methods under 5 lines, no more than 4 parameters. You can break them, but you should feel the resistance when you do. They’re guardrails that make you ask “is this getting complicated?” before it’s too late.
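The parameter rule is the one I hit most often. A hypothetical sketch of what feeling that resistance looks like in practice (the report functions here are invented for illustration):

```javascript
// Feeling the resistance: this signature has drifted past the
// four-parameter guideline, a hint that related ideas are unnamed.
function createReportBad(title, author, startDate, endDate, format, includeCharts) {
  /* ... */
}

// Grouping related parameters names the concepts (a date range, a set
// of render options) instead of spraying positional arguments.
function createReport({ title, author, range, options = {} }) {
  const { format = "pdf", includeCharts = false } = options;
  return { title, author, range, format, includeCharts };
}
```

The refactor isn’t about the number four. It’s that the guideline forced me to notice `startDate` and `endDate` were really one thing.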
The hard part is that simplicity has a cost. Sometimes the simple solution means more files, more explicit code, slower initial progress. It means telling a teammate “I think we should do the boring thing” when they’re excited about the clever thing. It means deleting code you spent time writing because you realized the system doesn’t need it. None of that feels productive in the moment. It pays off over months.
Build the Thing That Matters
This connects to the story above about building the wrong thing well.
I used to get caught up in refactoring and optimization. Both matter, but they’re also comfortable. It’s easier to improve code you’ve already written than to sit with the question: “Am I even working on the right problem?” Refactoring feels like progress. Clean diffs, passing tests. But if you’re polishing a feature nobody uses, the diffs don’t matter.
What I do now, and it’s simple enough to sound trivial: before I start writing code, I write one sentence at the top of my PR description explaining what this change does for the user or the system. Not what files it touches. Not what pattern it uses. What it accomplishes. If I can’t write that sentence, I’m not ready to code yet. I need to go back to the ticket, or talk to someone, or admit I don’t understand the goal.
When I catch myself building a generalized solution when a specific one would work, or optimizing something that isn’t slow, that sentence pulls me back.
What I’m Doing with AI Right Now
This part is less about lessons learned and more about something I’m actively figuring out.
GitHub Copilot is in technical preview right now, and I’ve been using it. It’s good at boilerplate. Test setup, repetitive CRUD operations, the kind of code where the pattern is obvious and the typing is the bottleneck.
I’ve also been doing something more experimental with OpenAI’s GPT-3 API directly. Building workflows that go beyond autocomplete. I’d describe it as retrieval-augmented generation: feed relevant context from a codebase or documentation into a prompt, use system-level instructions to constrain the output, and chain multiple calls together to handle tasks that a single completion can’t. A pipeline of specialized prompts, each handling a different step, with context flowing between them.
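Here’s a minimal sketch of that chaining idea, with the actual GPT-3 call stubbed out. Every name in it (`callModel`, `summarizeContext`, `generateCode`) is hypothetical, not part of any real SDK; in the real pipeline `callModel` would POST to OpenAI’s completions endpoint.

```javascript
// Stub for the model call so the pipeline structure is visible.
// A real implementation would hit OpenAI's completions API here.
async function callModel(prompt) {
  return `MODEL(${prompt.slice(0, 40)}...)`;
}

// Step 1: compress retrieved codebase context into a short summary.
async function summarizeContext(files) {
  const context = files
    .map(f => `// ${f.path}\n${f.source}`)
    .join("\n\n");
  return callModel(`Summarize the conventions in this code:\n${context}`);
}

// Step 2: generate code constrained by that summary and the task.
async function generateCode(task, summary) {
  return callModel(
    `You must follow these project conventions:\n${summary}\n\nTask: ${task}`
  );
}

// The pipeline: context flows from one specialized prompt to the next.
async function pipeline(files, task) {
  const summary = await summarizeContext(files);
  return generateCode(task, summary);
}
```

The point is the shape, not the prompts: each step has one job, and the output of one becomes the constraint on the next.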
It’s early and most of what I’ve built is rough. But instead of using AI as a line-by-line autocomplete, you can use it as something that understands your project’s specific context. Your naming conventions, your API patterns, your domain language. You give it the relevant files, tell it what you’re trying to accomplish, and it generates code that actually fits.
The risks are real though. I’ve seen GPT-3 generate code that looks correct on first read but has subtle issues. A useEffect without a cleanup function that causes a memory leak, an Express endpoint with a hardcoded secret, cookie settings that expose tokens to XSS. The code compiles. It might even pass a quick review. But it wasn’t written with an understanding of security or lifecycle management. It was written to be statistically plausible.
```jsx
// GPT-3 generated this for a user profile component.
// It works, but the useEffect has no cleanup (memory leak if userId
// changes rapidly), and dangerouslySetInnerHTML with unsanitized
// user input is an XSS vulnerability waiting to happen.
import { useState, useEffect } from "react";

function UserProfile({ userId }) {
  const [user, setUser] = useState(null);

  useEffect(() => {
    fetch(`/api/users/${userId}`)
      .then(response => response.json())
      .then(data => setUser(data));
  }, [userId]);

  if (!user) return <div>Loading...</div>;
  return (
    <div>
      <h1>{user.name}</h1>
      <div dangerouslySetInnerHTML={{ __html: user.bio }} />
    </div>
  );
}
```

I review AI-generated code the same way I review my own code on a tired Friday afternoon. Assume there are bugs I haven’t seen yet. Don’t let it generate abstractions I can’t explain. And use it for the actual problem, not because the technology is fun to play with.
I think this is going to change how code gets written. The developers who figure out how to direct AI with good context and clear constraints will move faster than everyone else. But you still need to know what you’re building and why. AI just makes it faster to build the wrong thing if you’re not paying attention.