On Gen-AI
Published 29 August, 2025

It's getting increasingly difficult to go a single day without thinking about, talking about, or interacting with Gen-AI. I look on at major corporations publishing suspected AI-generated quotes, see memes about new models failing to count how many Ts are in Tennessee, or talk with a family friend about how AI might revolutionise an industry.
Martin Fowler recently posted a conversation he had with Unmesh Joshi on how programmers work with LLMs. It's a bit wordy (in multiple senses: it's fairly long and uses a lot of software engineering terminology, which seems like a good thing if you're from Thoughtworks), but here are the key points for me on using LLMs:
- The concept of "upfront design" being problematic and caused by people "driving everything through the LLM". I'll admit it feels bad to have it generate multiple functions and/or classes and then attempt to review what was done - the "deep thinking" you would otherwise have done to arrive at similar code is missing entirely.
- I liked the comparison of pair programming and code reviews, and how there is a "higher frequency of interaction" in pair programming. I much prefer the back-and-forth of pair programming and taking a similar approach with LLMs, even if the async nature of code reviews is "more efficient" or feels like a better use of time.
Gaining more competency in programming over the years has (logically) made it more interesting to me. LLMs are turbocharging this: I feel like I can simply pick up a new language/framework/paradigm and have a tutor personalise the content for me, making learning even more fun. At university this year I've been exposed to a lot of new concepts in geography and statistics, and AI has been able to help by filling gaps in my knowledge or catching me up on fundamental concepts that were taught in undergrad courses.

In terms of coursework, most classes I'm in require explicit statements on how you used generative AI in your assignments, which sometimes feels like a formality when the lecturer loads up Gemini during class to ask it to debug something. As a result I haven't worked up the courage to outright use generative AI during labs or class, but the guy next to me clearly didn't have that issue and reached for ChatGPT immediately when starting the lab exercise, while I had several tabs of Google Search open (which, to be fair, now puts AI responses right at the top, sending less traffic to the websites that would otherwise have ranked there - but that's a whole other kettle of fish).
Continuing with university, a lecturer mentioned that many more people are achieving A+ grades compared to last year, which she finds a little suspicious. Lincoln University students are being retested because it was suspected that AI was used to cheat on exams. Having students reason about code they've written (or code placed in front of them) does seem a better form of testing, but its practicality diminishes as classes grow larger.
Across various LinkedIn posts and YouTube videos/comments I see a similar sentiment: that AI is like having a junior engineer in your pocket, and that vibe coding leads to unmaintainable slop. It's hard to disagree, but I guess time will tell - whether it takes my (future) job or not, targeted and effective use of these LLMs has been a boon to my programming skill and enjoyment.
These are just some quick thoughts I wanted to jot down, and I may refine them later. I feel like I had more ideas for this note but never got around to installing the whiteboard in the shower.