Nick Perkins

Platform Engineer. Volunteer Motorsport Official. ADHD Brain. Bit of a nerd.

AI and the Human Factor - My Observations

Over the last year, I’ve been watching how people use AI coding tools. GitHub Copilot, ChatGPT, Claude - everyone’s got access to them now. But the results? Completely different depending on who’s using them.

Here’s what I’ve noticed: the engineers who were already good are using AI to get even better. The ones who struggled before? They’re producing questionable code at an incredible rate now.

AI just amplifies what you already are

AI tools can be amazing productivity boosters. I’ve seen junior developers build features in hours that used to take days. Senior people are prototyping stuff at lightning speed.

But here’s the catch - AI doesn’t make you better at engineering. It just makes you faster at being the engineer you already are.

If you’re good, AI makes you faster. If you’re not… well, you’re just bad faster now.

What the good engineers do differently

I’ve been watching our best people work with these tools, and there are definite patterns.

They treat AI output like a rough draft. They’ll get ChatGPT to write a function, then they’ll fix it up, add error handling, write tests, make sure it fits with the rest of the code.
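To make that concrete, here’s a made-up illustration (the function and its name are invented for this post, not from any real codebase): the kind of happy-path draft an AI tool might produce, next to what it looks like after someone actually reviews it.

```python
# Raw AI draft: works on the happy path, no validation at all.
def parse_port_draft(value):
    return int(value)

# After review: input validated, range checked, failure modes explicit.
def parse_port(value: str) -> int:
    """Parse a TCP port number, rejecting anything outside 1-65535."""
    try:
        port = int(value)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {value!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```

The draft and the reviewed version do the same thing on good input. The difference only shows up when someone passes `"banana"` or `70000` - which is exactly the stuff AI tends not to think about unless you do.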

They don’t drop their standards just because AI wrote it quickly. They still care about clean code and following our patterns.

They know what AI doesn’t understand - like our specific codebase, our team conventions, all the unwritten requirements that live in people’s heads.

They’re smart about how they ask for help. Instead of “write me a function that does X”, they give context about the existing code and explain what they need.

The dangerous side

I’ve also watched engineers who already had code quality issues become incredibly productive at creating technical debt.

They copy-paste AI code without understanding it. They don’t think about edge cases. They skip testing because “AI wrote it, so it must be right”.

These aren’t bad people - some of them have years of experience. But they’re missing the foundational skills that make AI helpful instead of harmful.

The scary part is they look super productive in the short term. Features ship fast, tickets get closed quickly. But try maintaining that code in six months.

Skills that matter more now

Working well with AI requires you to get better at some very human things.

You need to think critically about what AI suggests. Can you spot problems? Can you decide what to keep and what to throw away?

You need to understand code quickly. If you can’t read AI-generated code and figure out what it does, you can’t review it or debug it properly.
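As an example of what “spotting problems” means in practice, here’s a contrived snippet (invented for this post) of the kind of code that looks fine at a glance but has a subtle bug a careful reviewer should catch - a mutable default argument that’s shared across calls.

```python
# Looks reasonable, but the default list is created once and shared
# across every call, so "history" silently grows forever.
def log_event_buggy(event, history=[]):
    history.append(event)
    return history

# The fix a careful reviewer would make: create a fresh list per call.
def log_event(event, history=None):
    if history is None:
        history = []
    history.append(event)
    return history
```

If you can’t read code like this and explain why the first version misbehaves on the second call, you can’t meaningfully review what AI hands you.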

You need to think about architecture and design. AI can write functions, but it can’t design your whole system or understand how everything fits together.

You need domain knowledge. The more you understand your problem, the better you can guide AI and evaluate its answers.

How I use AI now

I treat AI like a really smart intern. Good at the grunt work, needs supervision for the important stuff.

I use it for boilerplate code, repetitive patterns, getting started on something new. Then I apply my judgement to clean it up and make it fit properly.

I ask for explanations along with code. “What could go wrong with this?” “What are the trade-offs?”

I use it as a learning opportunity. Sometimes AI shows me patterns or libraries I hadn’t considered.

What I’ve learned

AI isn’t replacing engineers anytime soon. But it’s changing what makes a good engineer.

If you’re new to this, don’t let AI stop you from learning the fundamentals. Learn to write good code without AI first. Then use AI to write it faster.

If you’ve been doing this for a while, embrace the tools but keep your standards. Your job isn’t to rubber-stamp AI output - it’s to guide AI toward building good software.

The question isn’t whether AI will change engineering. It already has. The question is whether you’ll use it to become better or just become faster at being average.