AI slop?

It has recently become a passionate subject, and I've seen quite a few people (colleagues, community members) dismiss anything with "AI" in it as "slop". Personally, I could perhaps have understood that argument a year or so ago, but from more recent trials, the agents have progressed far enough to become very good (although they still seem to struggle a little if you ask them to use new APIs, library versions etc). In my work environment, we've been using Copilot (and friends) heavily to undertake code review and to scaffold new projects or features, with success. It's been very good at picking up error types that are often hard for a human to spot, and it has improved the quality of our code. Unfortunately, this hasn't necessarily made us quicker overall, as there's now a bottleneck in review, testing and deployment (and I suspect there will soon be more of a bottleneck in requirements gathering, where we'll have to start asking "Should we do this?" rather than "Do we have time to do this?").

For an open-source project I sometimes contribute to (postfixadmin), I used Copilot a few months ago to migrate from Bootstrap 3 to 5, which it did pretty well (and better than I would have managed without spending a few hours reading the Bootstrap upgrade docs). This led to a short discussion here, where there was a knee-jerk "slop" reaction. I was initially put off a little, and wondered if I'd made a mistake in using an LLM for the task. Thankfully, someone else soon contributed a comment along the lines of "No one else contributed/bothered to create a PR…".

More recently, a new contributor has stepped up on postfixadmin (I think because the barrier to involvement is now somewhat lower) and started to submit various PRs generated via Claude.

Again, the bottleneck is on the human side (“Do we actually want to do this?”, “Should we do it like this instead?”).

While this post has been sitting in a draft state, I saw this morning that the Linux kernel has taken a fairly friendly stance towards LLMs ( https://github.com/torvalds/linux/blob/master/Documentation/process/coding-assistants.rst ), which leaves the human responsible for whatever they generate with their tools. That seems fair.

Will there be any humans left programming in a few years' time, though? Would I suggest it as a career for my children? Will LLMs result in an explosion of software (faster and cheaper to produce, and possibly easy to clone what you would otherwise have paid for as a service), with software graduates just becoming LLM herders? Will there still be a need for a developer to spec, design and manage the LLM, and test the code?

In 10 years' time, will there just be a few greybeards sharing a pint somewhere, saying things like "I remember coding before there were LLMs… those were the days!"?
