I have been a curmudgeon since an early age. There’s just something about a fad that makes me want to run for the hills. I didn’t watch The Simpsons until years after it premiered. I held my nose during the .com boom, Web 2.0, the era of blockchain, and the crypto gambling age. For the last two years, I have been equally unimpressed with the AI bubble.
However, each of these giant hype bubbles had a core of something genuinely good. The Simpsons was a great show and about much more than Bart. The .com bust had spectacular fallout, but it was also the genesis of eBay, Google, and Amazon. So, after several years of plugging my ears, I have begun to give AI technology a fair shake.
This first started at work, where my employer purchased a Claude subscription and encouraged us all to use it. I was transitioning from one tech stack to another and encountered some significant issues with the new vendor’s Python SDK. Claude was able to provide an example of how to make the same calls using a lower level of the SDK that didn’t have these issues.
As we progressed down that path, I continued to ask it many questions about the new platform. The vendor’s documentation is fragmented and often incomplete, with many of the pages auto-generated from code comments (sometimes in broken English). An LLM effectively indexes all of this information along with material from other sources, such as forum posts.
You might think that I’m lazy and should just “look all of this up” in the docs. To that I would answer, “Have you seen what passes for documentation these days?” When I started programming professionally in the early 90s, you only needed two books – Charles Petzold’s Programming Windows and Kernighan and Ritchie’s The C Programming Language. We live in a totally different world now. There are dozens of languages in common use. There are at least three different OSes to consider (Windows, macOS, and Linux). But more importantly, there are thousands upon thousands of pieces of open-source middleware that are available and in widespread use.
Every language, OS, and piece of middleware has its own documentation, on its own website, written in its own style, and organized in its own way. Some of it is done well, but some of it is in the horrible-to-non-existent category (jsonpath_ng – I’m looking at you!).
This is nothing new – the tech world has been this way since at least 2010. We solved this in the past by supplementing vendor-provided information with forum posts, blog posts, and GitHub gists. How did we find these posts? By using search engines such as Google. But that itself was a challenge, and one’s ability to effectively leverage a search engine became known as “Google Fu”.
An LLM is simply a refinement of the search engine. It has crawled the same sites and can help you find the same answers, but it gives you a much easier interface: just ask a question in plain English.
The only catch (and this is a big catch) is that the answers are sometimes wrong. AIs will “hallucinate” and make up answers. This is bad, but not altogether new. It’s also possible to get inaccurate information from a forum post or misinformed blogger. The challenge is in detecting it.
For coding questions, it’s simple: if the code doesn’t work, it’s wrong. A good LLM (such as Gemini) will cite sources that you can click through to verify what it’s telling you. And when it’s right, it saves you so much time that it can afford to be wrong a few times.
Though I am an experienced programmer (30+ years), I still have to learn new things because the field changes so quickly. I wish the world didn’t work that way, but it does. Using an LLM helps me stay caught up and be more productive. I can spend more of my time designing and crafting than poring through broken documentation.
Obviously, a less experienced programmer can use these tools to be more productive as well. However, they still need a basic understanding of software fundamentals to be able to ask the right questions and judge the answers. And these tools should not be used to cheat your way through college – you are only cheating yourself out of the learning that you paid big bucks to get. We learn through struggles and mistakes.
So, am I lazy for relying on AIs for this information? Possibly. But I look at it as technology solving a problem that technology created. In our desire to move “forward” at warp speed, we have half-assed a lot of the things that would be done properly in a more mature field. There’s no time to write coherent books on these subjects, and even if we did, no one “has time” to read them.
Is an AI going to replace my job? Not really. Sure, it can “write” code, but it only follows the specifications given, and that’s the hard part. If I were to write an English description of a program to the specificity required for production software, it would probably be as long as the code itself. English is not a particularly great programming language.
You might be able to “vibe code” an app to do some specialized thing, and that app might work in a narrow use case in a friendly environment. But, there are simply too many things you need to know prior to putting an app on the big bad internet. As Low Level Coder pointed out, this is likely what happened with the Tea Dating App which leaked photo IDs and location data on over 13,000 women who had been promised a “safe” environment.
I do occasionally ask an AI to write code for me. However, I either use it as an example to help write my own, or as a “skeleton” that I heavily modify myself. There is no way I am releasing anything unless I understand every line of it as if I had written it myself.
Unfortunately, the AI craze is being driven by MBAs making promises of reducing workforces. Yes, an LLM could make a customer service representative’s job easier, but I don’t think you should force your customers to directly interact with an LLM and refuse to help them otherwise.
This brings me to another point. Nearly every tech vendor is offering “AI assistants” with their products. The idea is to “help” you use them. Every one of these I have tried is terrible and does little more than clutter the UI. Make your products easy to use by having well-designed user experiences, clear behavioral principles, and reliable execution. The real power of LLMs isn’t in these “vertical” applications (tell me how to use widget X) but in “horizontal” ones (tell me how to integrate widget X from vendor A with widget Y from vendor B). That is best left to a general-purpose LLM.
I also have major objections to using AI in the creative space. I have no desire to listen to AI-composed music or view AI-generated art. It has no meaning and is a waste of time. Furthermore, you are taking opportunities away from up-and-coming creators whose ranks are already decimated by corporate consolidation of media outlets.
Make no mistake, the AI craze is definitely a bubble. It will come crashing down and a few survivors will remain. But, I think LLMs are here to stay. So, go ahead and use one but don’t turn your brain off in the process. Take time to think through things and speculate prior to having the answer fed to you.
Obligatory Disclaimer – AI did not write this post. I wrote it myself, using Gemini occasionally to look up references.