I recently ran into one of those annoyances. I needed a new email signature for an account, but the client only supported Rich Text Format (RTF). No HTML, no fancy editors, just crusty old RTF. I mean, I would gladly have accepted a WYSIWYG editor. My first thought was, “Ugh, isn’t there a tool that could do RTF for me?” Guess what? Nope.

But then I had a different idea. I opened up Gemini and started a conversation. I couldn’t just say, “Make me a signature.” I treated it like a junior dev. I gave it a detailed prompt (also generated by Gemini, then tuned up with my “knowledge”), outlined the constraints, and specified the stack I wanted for the small web tool that would generate it: Tailwind and basic JS. It generated the code. I read every line, ran my own tests, made a few tweaks, and in under an hour I had a fully working tool, SignEdit, deployed on GitHub Pages.
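For the curious, the heart of a tool like that is surprisingly small. Here’s a minimal, TypeScript-flavored sketch of the idea: take a few fields and glue together an RTF string. (This is a simplified illustration, not the actual SignEdit code; the field names and escaping here are deliberately bare-bones.)

    // Sketch of an RTF signature generator. Hypothetical fields, simplified
    // escaping; a real tool also needs Unicode escapes and more formatting.
    interface Signature {
      name: string;
      title: string;
      email: string;
    }

    // Escape the characters RTF treats specially.
    function escapeRtf(text: string): string {
      return text.replace(/[\\{}]/g, (c) => "\\" + c);
    }

    function buildRtfSignature(sig: Signature): string {
      const name = escapeRtf(sig.name);
      const title = escapeRtf(sig.title);
      const email = escapeRtf(sig.email);
      return [
        "{\\rtf1\\ansi\\deff0",
        "{\\fonttbl{\\f0 Helvetica;}}",
        "\\f0\\fs20",
        `{\\b ${name}}\\line`,
        `${title}\\line`,
        `{\\field{\\*\\fldinst{HYPERLINK "mailto:${email}"}}{\\fldrslt ${email}}}`,
        "}",
      ].join("\n");
    }

    console.log(buildRtfSignature({
      name: "Jane Doe",
      title: "Software Engineer",
      email: "jane@example.com",
    }));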

That, for me, is the reality of AI in software development today. It’s not an autonomous programmer. It’s a co-pilot. It’s a force multiplier for the tedious, the annoying, and the unknown, but you—the developer—are still firmly in the pilot’s seat.

Don’t get me wrong: as a rule, you shouldn’t let AI write your code for you. But in this case, hand-writing such a trivial app myself would have been a waste of time.

The Sweet Spot: Where AI Shines in a Developer’s Workflow

Forget the sci-fi hype: where do these tools actually fit into our day-to-day grind? I’ve found the sweet spot lies in tasks that augment my thinking and clear away the boring stuff.

As a Brainstorming Partner

It’s 11 PM. You’re stuck on an architectural decision. It’s too late to call a colleague or a friend. This is the perfect time to talk to an LLM. It’s great for bouncing ideas around, exploring trade-offs, and getting a second opinion, even if that opinion is just a well-structured summary of common knowledge. It can emulate lots of things, too; it will even play the part of a fully fledged MySQL server if you ask.

Prompt: "I'm building a new microservice in Go that needs to handle high-throughput, real-time notifications. What are the pros and cons of using gRPC vs. a standard REST API with WebSockets for this use case?"

The AI won’t give you a divine answer, but it will lay out the arguments, helping you structure your own thoughts and make a more informed decision. Think of it as support rather than a replacement.

As a Perpetual Tutor

Want to learn a new language or framework? AI is like having a patient, somewhat senior developer on call 24/7. It never gets tired of your “mediocre” questions, though it does hallucinate now and then. The key is to frame your questions from your current context. And try not to lose your calm.

Prompt: "Explain Rust's ownership model to me like I'm a TypeScript developer who is used to a garbage collector. What are the main pitfalls I'm likely to run into?"

This is infinitely faster than digging through ten different blog posts to find an analogy that clicks for you. (Even though I do like reading posts about new approaches or languages.)
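To make the contrast in that prompt concrete, here is the kind of aliasing a garbage-collected language happily allows and Rust’s ownership rules are built to prevent. (My own toy TypeScript example, not the model’s answer.)

    // Two names for the same heap object, mutated through either one, cleaned
    // up whenever the GC gets around to it. In Rust, the assignment to `b`
    // would move ownership (or require an explicit borrow), and the compiler
    // would reject the later use of `a`.
    interface Config {
      retries: number;
    }

    const a: Config = { retries: 3 };
    const b = a;            // Rust: ownership moves here, or you borrow with &
    b.retries = 5;          // fine in TypeScript; both names alias one object
    console.log(a.retries); // 5, exactly the aliased mutation the borrow checker forbids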

As a Test Generation Intern

Let’s be honest: writing unit tests can be a slog. It’s necessary, but writing boilerplate mocks and simple assertion cases is tedious. AI is fantastic at this, but it comes with a huge warning.

Think of the AI as a new intern. It can generate a solid first draft of your tests, covering the obvious cases and setting up the structure. But you would never let an intern merge code to main without a thorough review, right?

The same rule applies here. A human must review, understand, and validate every single line of an AI-generated test. It might miss edge cases, misunderstand your logic, or write tests that pass for the wrong reasons.
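To make that concrete, here’s the shape of a first draft I’d expect from the intern: a hypothetical parsePrice helper plus the obvious cases, written for a Vitest-style runner (all names here are made up for illustration). The happy paths come for free; the commented questions at the bottom are the part only a human reviewer tends to ask.

    import { describe, expect, it } from "vitest";

    // Hypothetical helper under test.
    export function parsePrice(input: string): number {
      const stripped = input.replace(/[^0-9.\-]/g, "");
      const value = stripped === "" ? NaN : Number(stripped);
      if (Number.isNaN(value)) {
        throw new Error(`Not a price: ${input}`);
      }
      return value;
    }

    // The kind of first draft an AI produces: obvious cases, tidy structure.
    describe("parsePrice", () => {
      it("parses a plain number", () => {
        expect(parsePrice("42")).toBe(42);
      });

      it("strips currency symbols", () => {
        expect(parsePrice("$19.99")).toBe(19.99);
      });

      it("throws on garbage input", () => {
        expect(() => parsePrice("abc")).toThrow();
      });

      // Questions the intern rarely asks on its own:
      // - What about "1,299.00"? "-5"? An empty string? "1.2.3"?
      // - Should negative prices even be allowed by the business rules?
    });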

The Red Zone: When to Keep AI on the Sidelines

An intern can be helpful, but you wouldn’t give them the root password to your production database. Knowing where not to use AI is even more important than knowing where to use it.

A chilling real-world example is the story from Replit, where an AI agent, tasked with helping out, misinterpreted its goal and ended up wiping out a production database. This is the ultimate cautionary tale of giving AI too much autonomy.

Here are my hard-and-fast rules for the AI “no-go” zones:

  • No Direct Code Execution. Period. The single most important rule. LLMs are probabilistic. They don’t “understand” things; they predict the next most likely token. Giving a probabilistic system the ability to execute file system commands, run scripts, or interact with live APIs is asking for disaster. Don’t do it.
  • No Production Access. Never, ever let an AI tool have direct access to production or even staging environments. No database credentials, no API keys, no shell access. The potential for catastrophic error is simply too high.
  • No Security-Critical Code. Don’t ask an AI to write your authentication logic, encryption functions, or anything related to security. These areas require meticulous, expert human review, and the subtle flaws an AI might introduce could be invisible until it’s too late.
  • No Sensitive Data. Never paste proprietary company code, user data, or personal information into a public AI tool. Assume everything you submit could be used for training or be exposed in a data breach. Use air-gapped, on-premise models if you must work with sensitive data.

My Stance: AI is a Gradient Function, Not a God

I spend a good amount of my time researching and reverse-engineering neural nets and vision models. The more you dig into them, the less magical they become. Under the hood, we’re talking about incredibly complex math—gradient functions optimizing weights across billions of parameters. It’s a pattern-matching machine on steroids, not a sentient being.
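If “gradient functions optimizing weights” sounds abstract, here’s the whole idea shrunk down to a single weight: a toy TypeScript loop that fits y = 2x by repeatedly nudging one number downhill. Real models do this across billions of parameters with far fancier plumbing, but nothing in the loop is magic.

    // One weight, one squared-error loss, repeated steps against the gradient.
    const inputs = [1, 2, 3, 4];
    const targets = [2, 4, 6, 8]; // the "true" relationship is y = 2x

    let w = 0;        // the single weight we're learning
    const lr = 0.01;  // learning rate

    for (let step = 0; step < 200; step++) {
      // dLoss/dw for mean squared error: average of 2 * (w*x - y) * x
      let grad = 0;
      for (let i = 0; i < inputs.length; i++) {
        grad += 2 * (w * inputs[i] - targets[i]) * inputs[i];
      }
      grad /= inputs.length;

      w -= lr * grad; // the entire "learning" step
    }

    console.log(w.toFixed(3)); // ≈ 2.000: pattern matching, not understanding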

With the initial hype balloon for things like GPT-5 finally popping, the industry is settling into a more realistic view. That’s a good thing. We can now focus on AI as a practical tool, not a mythical replacement for human thought.

In my own workflow, I’ll use it for the tasks I listed above. But for my critical daily automation? I trust my own handcrafted scripts and automation flows far more than a non-deterministic LLM. When reliability is on the line, I want code that I wrote and fully understand.

The best developers won’t be the ones who let AI write all their code. They’ll be the ones who master the art of using AI to augment their own skills—to become faster, more knowledgeable, and more effective problem-solvers.

So, what’s your take? How have you integrated these tools into your workflow? What’s your biggest AI “red line” that you’ll never cross?

Until the next one

