Though most of my vibe coding — a term I’m not in love with — has been around hobbies, a fair amount is in support of work-related projects.
In fact, my colleague Tyler Shields recently wrote about his experience with vibe coding and how he used it to build a tool that helps with his day-to-day workflows. While I didn’t know the term at the time, it describes a lot of what I’ve been doing since generative AI (GenAI) became mainstream.
I wrote a bit about this in the early days of GenAI, and again when I switched from Mac to Windows for a few months, but I’ve gained a lot more experience since then.
What do organizations need to know about vibe coding?
I thought I’d share a few thoughts on vibe coding topics that keep coming up:
Vibe coding can be done by literally anyone.
AI-written code might be functional, but it’s not necessarily efficient.
It’s likely not secure either.
Oversight from management is needed.
These stand out to me because recent research shows that at least 92% of organizations have deployed or plan to deploy AI chatbots, such as Microsoft Copilot; AI-assisted code generation tools; AI-integrated software, such as Office 365 or Canva; or AI-enhanced customer service platforms. With these high adoption numbers, I think we’re at the beginning of this discussion, not the end.
You don’t need an AI-assisted coding tool to write code. Copilot and other AI chatbots can write it, too, and will even suggest it without the user actually seeking it out. Often, the chatbot will respond with something like: “You can’t do that out of the box, but I can write something for you and show you how to run it.”
Vibe coding can be done by literally anyone
When you really sit back and look at what vibe coding is — besides being a term that I’m already learning to hate — it’s more than just AI-assisted programming. Anyone can conceptualize something and produce a proof of concept in the span of minutes. Take, for example, my experience with Teddy Ruxpin.
The tale of Teddy Ruxpin
Teddy Ruxpin is a robot bear that would read books to you using a cassette tape you inserted in its back. One of the tape’s stereo channels carried the audio you heard, and the other contained digital commands, modulated onto an analog audio signal, that told the motors in the bear how to move.
I wanted to make Teddy respond to my own voice, so I spent weeks and weeks learning the ins and outs not only of pulse-position modulation, the digital command structure, but also of how to write that code in Python. It’s now called T-Rux, and it’s on GitHub.
The very first thing I asked ChatGPT was: “How can I use Python to use my own voice to control a Teddy Ruxpin?” In 15 seconds, I had my answer, and it was scarily close to what took me weeks to design.
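For a sense of what that kind of command stream involves, here is a minimal Python sketch of the general idea behind pulse-position modulation: the data lives in the spacing between pulses. The pulse widths, gap lengths and sample command bits below are invented purely for illustration; they are not the real Teddy Ruxpin timing or the T-Rux code.

# Illustrative only: encode bits as pulse positions on one audio channel.
# All timing values and the sample command are made up; the real protocol
# details live in the T-Rux project on GitHub.

SAMPLE_RATE = 44_100   # audio samples per second
PULSE_LEN = 20         # samples of "high" signal per pulse
GAP_ZERO = 60          # samples of silence after a pulse for a 0 bit
GAP_ONE = 120          # samples of silence after a pulse for a 1 bit

def encode_ppm(bits):
    """Return audio samples (0.0 or 1.0) encoding bits as pulse positions."""
    samples = []
    for bit in bits:
        samples.extend([1.0] * PULSE_LEN)                       # the pulse itself
        samples.extend([0.0] * (GAP_ONE if bit else GAP_ZERO))  # the gap carries the bit
    return samples

# A made-up eight-bit command, just to show the shape of the output
command_bits = [1, 0, 1, 1, 0, 0, 1, 0]
audio = encode_ppm(command_bits)
print(f"{len(command_bits)} bits -> {len(audio)} samples "
      f"({len(audio) / SAMPLE_RATE * 1000:.1f} ms of audio)")

All of the information is in the gap lengths, which is why decoding the real signal comes down to measuring timing accurately.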
I’ve since used AI to help with a number of projects (by which I mean it performed 99.99% of the coding), such as the following:
A time-shifting FM radio called RadioSHIFT.
AutoHotKey “apps” to replicate the behavior of the Mac app Alfred on Windows.
Obsidian plugins to make my experience similar to Evernote and add functionality I’ve always wanted to have but that doesn’t exist.
An uncountable number of scripts to perform individual tasks, such as adding a frame with the video’s file name to the beginning of each video in a folder of imported VHS tapes, or converting Word doc headings into PowerPoint slides to better visualize the contents (a rough sketch of that one follows below).
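To make that last item concrete, here is roughly what such a script can look like. This is a sketch using the python-docx and python-pptx libraries with placeholder file names; it is not the exact script the AI produced for me.

# Sketch: turn Word headings into a PowerPoint outline.
# Requires python-docx and python-pptx; file names are placeholders.
from docx import Document
from pptx import Presentation

doc = Document("meeting_notes.docx")
prs = Presentation()
layout = prs.slide_layouts[1]  # "Title and Content" layout in the default template

slide = None
for para in doc.paragraphs:
    if para.style.name == "Heading 1":
        # Each top-level heading becomes a new slide title
        slide = prs.slides.add_slide(layout)
        slide.shapes.title.text = para.text
    elif para.style.name == "Heading 2" and slide is not None:
        # Second-level headings become bullets on the current slide
        bullet = slide.placeholders[1].text_frame.add_paragraph()
        bullet.text = para.text

prs.save("meeting_notes_outline.pptx")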
In fact, the Obsidian plugins were the catalyst for this opinion piece.
I didn’t set out to make a plugin; I simply asked Claude if it knew of any plugins that had the functionality I wanted. It suggested a few, and when I said those wouldn’t work, it offered to make a plugin itself. Moments later, I had my first Obsidian plugin.
It took a few tries to get it right, but in a few hours I had something that was perfect. Well, almost perfect, which leads me to my next point.
AI-written code can be functional, but it’s not necessarily efficient
The Obsidian plugin in question was simple. I wanted a way to use shorthand to call out action items from notes. I use “//” for this in my notes, but I have to scan them afterwards to find the action items. I wanted Obsidian to automatically recognize lines that start with // and, if any exist, create an Action Items section at the top of the page with a bullet list of those lines.
What was ultimately written was functional, but while chatting with Claude and ChatGPT, I learned that the plugin implemented a throttling mechanism. When I asked why it was using throttling, I was told something along the lines of: “Because it checks the entire doc to see if // exists, and that can be CPU-intensive, so throttling means this only happens every 150 ms.”
Gulp.
The code that was written to determine if a line starts with “//” was scanning the entire document every 150 ms, looking for instances of that character sequence. How inefficient is that? Given the 1.8 million milliseconds in a half-hour meeting, that means my little plugin scanned that note 12,000 times!
Had I not pressed the AI on this, that would’ve continued. I ended up asking it why it wouldn’t just focus on the first two characters in a line and ignore the whitespace. It analyzed this change, agreed, rewrote that module, and now I have something more efficient.
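To show the difference, here is the gist of the two approaches, sketched in Python rather than the JavaScript an actual Obsidian plugin uses; the function names are mine, not the plugin’s.

# Conceptual sketch only; not the plugin's real code.

# Before: on a 150 ms timer, rescan every line of the note for action items.
def scan_whole_note(note_text):
    return [line for line in note_text.splitlines()
            if line.lstrip().startswith("//")]

# After: when a line changes, look only at that line's first non-whitespace characters.
def line_is_action_item(line):
    return line.lstrip().startswith("//")

# Either way, matching lines become bullets in an Action Items section at the top of the note.
def action_items_section(matching_lines):
    bullets = "\n".join("- " + line.lstrip()[2:].strip() for line in matching_lines)
    return "Action Items\n" + bullets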
The thing is, had I not known to ask that question, or known some very rudimentary coding concepts, I would be stuck with a very inefficient plugin. One inefficient process might not be a problem, but multiple inefficient processes can, and will, add up. These things happen in all the AI-assisted coding projects I’ve done, which are relatively small things, not commercial or in-house enterprise applications. The problems also seem to get worse with longer chats and larger projects, which is another thing end users might not be aware of.
It would seem that AI-assisted coding is still not plug-and-play, and treating it that way will cost resources. Perhaps even worse, it could hinder organization-wide security as well.
AI-assisted code might not be secure
Given that the AI’s general goal is to provide the functionality you asked for and nothing more, security is also a paramount concern with this type of coding. This isn’t an area I cover, but it’s easy to see that AI-generated code isn’t working too hard to prevent problems such as race conditions. In practice, it just slaps band-aids on them, and it probably isn’t going to take steps to write with security in mind, either.
This could be a matter of prompting, or of using domain-specific tools designed as virtual coding assistants. However, having used both GitHub Copilot and Cursor, I can honestly say that these inefficiencies still exist. Also, we’re talking about end users here, not developers, though I suspect some of this applies to developers, too.
At the risk of spreading fear needlessly, just search your phone’s app store for ChatGPT and you’ll see dozens of AI apps that aren’t from OpenAI. Those apps might use ChatGPT on the backend, but they’re also middlemen doing something with your inputs. An IT person or developer might know to be cautious of this, and a corporate AI policy might warn people against using things like this, but would a regular user know if they were “writing” code that included malicious content?
And what about code that ships data between different sources — can users verify that it’s being done securely?
For the moment, I see AI-generated code as something that still requires a developer. More than that, it requires one who’s skilled in prompting to ensure the code is written in a secure, efficient and functional way.
Vibe coding requires oversight, at least for now
Given the state of vibe coding and how easy it is for anyone to do, I can’t help but wonder what this means for end-user management and security. Most of what I’ve mentioned here considers the potential ramifications as if lots of users were already doing this. It’s extremely unlikely that this is happening at scale right now, but the possibility will only grow as organizations deploy generative AI and end users learn to use it.
The alarming thing is that much of this can happen under IT’s radar. While I generally trust the big-name large language models to not do anything with malicious intent, end users represent a bit of a wildcard in terms of what tools they use. The recent research showed that more than half of knowledge workers said they used AI tools that were not officially authorized or supported by their organization for work-related purposes.
Situations more benign than the security concerns above can also have an effect.
Take, for example, my Obsidian plugin. If I had left it alone, running inefficiently, and deployed it to a bunch of virtual desktop users, the collective effect of that inefficiency could reduce the capacity of my infrastructure. Yes, this is a lightweight text file thing, so it might not be noticeable. But that’s just one example.
So there’s a lot to think about regarding vibe coding and the power that our end users have. How can IT enable responsible usage and even experimentation without adding unnecessary risk? How do we even identify user-driven AI coding? And when do we decide that we care enough to do something about it?
Whether you’re in IT, security or just curious about what your users are really up to, it’s time to start asking these questions.
Gabe Knuth is the principal analyst covering end-user computing for Enterprise Strategy Group, now part of Omdia.
Enterprise Strategy Group is part of Omdia. Its analysts have business relationships with technology vendors.