AI Didn't Teach Me to Code, But It Changed How I Build
A data scientist's honest take on what AI actually changes about the work, and what it doesn't.
I’ve been working with data for over 13 years. I started as an analyst: Excel sheets, pivot tables, the works. Three years later I moved into Python, did a Master’s in Data Science, and over time built up a way of working that felt natural to me. I had a comfortable, established process. I knew how to get from a problem to a solution, but it was a manual, linear path.
Then AI coding assistants became actually useful. And something shifted, not in what I could do, but in how fast I could do it and what the end result looked like. I want to be honest about what that shift actually means in practice, because the current conversation tends to swing between “magic” and “useless,” missing the nuance of the middle ground.
The part nobody talks about
AI is genuinely useful to me when I already understand the problem. When I know what I’m trying to build, what the constraints are, what the right approach looks like, AI helps me get there faster. It fills in the gaps, handles the parts I’d otherwise spend an hour Googling, and helps me move from idea to working code without losing momentum.
But when I don’t fully understand the problem, AI makes things worse, not better. It gives me answers: confident, well-formatted, plausible-sounding answers that go in the wrong direction. And if you don’t have the background to recognize that, you can spend a lot of time going down the wrong path very efficiently.
This is the part that doesn’t get said enough: AI is a speed multiplier, but only on top of existing knowledge. It doesn’t replace understanding the problem. It doesn’t replace knowing your options. It doesn’t replace the judgment that comes from years of doing the work.
What it actually changed for me
I’ll give you a concrete example. I recently built KI-Tax, a free tax estimator for self-employed Canadians. It started as an Excel sheet, which is honestly where a lot of my projects start. I knew Python, I knew Streamlit, I knew enough HTML and CSS to build a decent UI. What AI helped me do was glue those things together faster, catch errors in logic I might have missed, and move from a rough December prototype to a properly built tool covering all 13 Canadian provinces in a fraction of the time it would have taken me before.
But here’s the thing: the hardest part of that project had nothing to do with code. It was verifying the tax logic. Canadian tax rules are genuinely complicated, and I used both Claude and Gemini to cross-check calculations, then verified everything against official CRA and Revenu Québec sources anyway. Because AI got things wrong, sometimes confidently wrong. Knowing enough about the subject to catch those errors was the difference between a tool that works and one that gives people bad numbers about their taxes. That’s not a small distinction.
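To make the verification point concrete: one classic error in tax code, whether written by a human or an AI, is applying a bracket's rate to the whole income instead of only the slice inside that bracket. Here's a minimal sketch of marginal bracket logic, using made-up brackets and rates for illustration only, not actual CRA or provincial figures:

```python
# Hypothetical brackets: (lower bound, marginal rate). NOT real tax rates.
BRACKETS = [(0, 0.15), (50_000, 0.20), (100_000, 0.25)]

def progressive_tax(income: float) -> float:
    """Tax owed under marginal (progressive) brackets.

    Each rate applies only to the portion of income that falls
    inside its bracket, not to the whole amount.
    """
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        # Upper edge of this bracket is the next bracket's lower bound
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
    return tax
```

At an income of 60,000 under these toy brackets, the correct answer taxes the first 50,000 at 15% and only the remaining 10,000 at 20%. The flat-rate mistake (60,000 × 20%) gives a very different, very plausible-looking number, which is exactly the kind of error that only checking against the official sources catches.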
I’m currently working on an RFM analysis project: customer segmentation work that I’ve done variations of many times over the years. Before, I’d have a working prototype or an MVP with a rough interface. Now I’m building a proper UI for it, something that actually looks like a finished product rather than an analyst’s internal tool. AI is helping me get there. But the analysis itself, the decisions about how to structure it, what to surface, what matters, still comes from a decade-plus of seeing how these models actually perform in production.
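For readers unfamiliar with RFM: it scores each customer on Recency (days since last purchase), Frequency (number of purchases), and Monetary value (total spend), then bins each metric into tiers. A minimal pandas sketch, with hypothetical transaction data and an arbitrary three-tier scoring scheme of my own choosing, not the structure of my actual project:

```python
import pandas as pd

# Hypothetical transaction log: one row per purchase
tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "order_date": pd.to_datetime([
        "2024-01-05", "2024-03-01", "2023-11-20",
        "2024-02-10", "2024-02-25", "2024-03-05",
    ]),
    "amount": [50.0, 75.0, 20.0, 10.0, 35.0, 60.0],
})

# Recency is measured from the day after the last observed order
snapshot = tx["order_date"].max() + pd.Timedelta(days=1)

rfm = tx.groupby("customer_id").agg(
    recency=("order_date", lambda d: (snapshot - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)

# Three-tier scores; ranking first avoids duplicate bin edges on small data.
# Low recency is good (score 3); high frequency/monetary are good (score 3).
rfm["r"] = pd.qcut(rfm["recency"].rank(method="first"), 3, labels=[3, 2, 1]).astype(int)
rfm["f"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["m"] = pd.qcut(rfm["monetary"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["segment"] = rfm["r"].astype(str) + rfm["f"].astype(str) + rfm["m"].astype(str)
```

The mechanics are a few lines; the judgment calls (the snapshot date, the number of tiers, whether raw quantiles even make sense for a given customer base) are the part that experience supplies and AI doesn't.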
What I’d actually say to someone earlier in their career
Learn the fundamentals first. Seriously. Not because AI won’t help you, of course it will, but because without that foundation, you have no way of knowing when it’s actually helping or just leading you into a hallucination. The people getting the most out of these tools are the ones who could have solved the problem without them; it just would have taken longer. AI didn’t make me a better data scientist. It made a competent data scientist faster. That’s a massive shift, but we should be honest about which one it actually is.