ChatGPT and the rise of the idiots.

The first thing I need to say here is that this isn’t going to be an AI or ChatGPT-bashing session. I don’t have any particular beef with AI, and I do use it from time to time. I use it occasionally for text generation, mostly for subjects I don’t have a strong opinion about (which isn’t much!). For example, I was recently writing content for a page on the website that describes the UK’s Cycle to Work Scheme. Asking ChatGPT to generate a paragraph describing what C2W is and how it benefits the consumer gets me a bunch of words that I can do a quick edit on and boom! Job done. Move on to something more interesting. I also use it in Photoshop to edit images that don’t quite fit what I need. The image below was originally taken in portrait format, but I needed something landscape. Photoshop has an excellent ‘generative fill’ feature that fills out empty space on a canvas with context-aware imagery. If you look closely, you can probably make out some of the glitches and inconsistencies, but for this use case, it works great.

Only the bike is real; the background is entirely AI-generated.

A couple of things over the past week or so prompted me to write this rant. Each one was just a minor annoyance, but they came together in my brain and irritated me enough to start writing. The first is that I’m becoming more and more aware of people with little or no experience of a subject spouting off as if they’re experts. I don’t know if this has always been the case; maybe I’m just noticing it more? Part of me is a little bit in awe of the confidence it takes to stick your head up and offer an opinion or advice on something you have very limited knowledge of. Most of my exposure to this is in the bike world. I’m on a few different forums, mostly framebuilding, and I like to try and offer help and advice to people new to the game or to anyone looking for a pointer or two.

There’s an example of this on one of the forums right now: a newish framebuilder has been tasked with doing quite a complicated repair on a frame they didn’t build. They’ve posted a couple of pictures and asked how they should proceed. A few seasoned framebuilders with many years of experience chime in, offering their viewpoint and some practical advice on how to proceed. Then, from nowhere, a user I’ve never heard of (that’s not to say I have to know everyone in order to validate their worth!) pipes up with a really stupid (but quite detailed) suggestion and caveats it with a statement along the lines of ‘I’ve never actually done anything like that, but that’s what I would do’. I dig a little deeper, and this forum user is very green. They seem pretty bright, and I’m guessing they’re young. The problem is that others who come across that particular post will have difficulty knowing which advice comes from people with real-world experience and which is just plain wrong. At this point, it’s just a bunch of opinions, with nothing to sort the good from the crap. The bad advice is polluting the pool, but in a way that’s difficult to see.

In a previous life, I was a software developer, and every software developer knows the Stack Overflow website. It’s a lifesaver. If you ever have a tech question, no matter how obscure, it’s likely to have been asked and answered on Stack Overflow, so you can look it up, get the info you need and move on. If it hasn’t been asked before and you post the question yourself, you’ll likely have the correct answer within about a minute. You’ll also have about 20 answers that are wrong, 20 answers to a different question you didn’t post and about 20 suggestions that you didn’t even ask the right question in the first place. But the really useful thing is that users of the site get to vote on the answers and responses, so the ‘good stuff’ gets promoted to the top and the crazy nonsense gets buried. There’s a really good ranking mechanism in place that helps to qualify the responses, something like the sketch below. In most regular forum conversations and threads, none of that exists, so the crazy is just mixed in with the sane, and someone stumbling across a subject has no way of grading that info for quality.
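
To give a rough flavour of the idea (this is a toy sketch, not Stack Overflow’s actual algorithm, and the answers and vote counts are made up), vote-based ranking boils down to something like this:

```python
# Toy sketch of vote-based ranking: good answers float up, nonsense sinks.
# Not Stack Overflow's real algorithm, just the general principle.

answers = [
    {"text": "Correct, detailed answer", "upvotes": 142, "downvotes": 3},
    {"text": "Answer to a question you didn't ask", "upvotes": 2, "downvotes": 18},
    {"text": "'You're asking the wrong question'", "upvotes": 5, "downvotes": 9},
    {"text": "Confidently wrong answer", "upvotes": 1, "downvotes": 27},
]

# Rank by net score (upvotes minus downvotes), best first.
ranked = sorted(answers, key=lambda a: a["upvotes"] - a["downvotes"], reverse=True)

for a in ranked:
    print(a["upvotes"] - a["downvotes"], "\t", a["text"])
```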

I was watching a YouTube video covering a bike show. The host was interviewing a few framebuilders and talking through the bikes they’d built and brought to the show. I try to be open-minded (really I do!), and I’m practising being more tolerant, but I was staggered by the amount of absolute shite being delivered. Here are some examples (these might not be obviously crap if you’re not a bike geek, so just believe me):

  • ‘So, it’s got roadbike handling but with fatbike wheels.’

  • ‘I use steel from Italy, I feel it’s much stronger than the equivalent UK steel.’

  • ‘It’s got a 71º headtube so it’s a very comfy ride.’

I could go on, but I don’t think I need to. It feels a little unfair to pick on these specific examples because the people are putting a lot of work into what they’re doing and what they’re presenting. It’s bikes, and it’s all good really. But I feel we need to be less tolerant of the bullshit. That particular video was produced by a very knowledgeable bike guy; he’s done tons of videos, and I really like his content. Maybe he doesn’t feel it’s his job, but I wish he’d be a bit more challenging. I wish he’d push his guests to take a little more ownership of the subjects they talk about. A 71º headtube doesn’t make a bike more comfy, but now there’s published content out there saying it does, with nothing challenging it, so you’d be forgiven for stumbling onto it and believing it.

But what does all this have to do with ChatGPT and AI? The other thing that prompted this post was an interaction I had with ChatGPT earlier in the week. I posted a picture of a part on Instagram, asking for help in learning what it was called. A few people messaged me with answers, exactly what I was looking for. But then my son (who is very into AI) piped up: ‘That’s exactly what ChatGPT is for. Why don’t you ask it?’ So I did. I gave ChatGPT a picture of the part, and within seconds, it came back with an answer and a very compelling description of what it was and how it was used. I was impressed. Except it was totally wrong. Well, not totally wrong, it was quite close, but wrong. If it had been totally wrong, it would have been obvious, and I could have spotted the mistake; that would actually have been better. But in this case, it was close, and it sort of made sense. In fact, if I hadn’t already been messaged the correct answer, I’d have gone with the ChatGPT suggestion and would have been tearing my hair out within a few minutes, I think.

To follow along with me on this rambling journey and have an idea of where I’m headed, you need to know how ChatGPT (and other AI platforms) get to be so clever. It’s machine-learning technology: it gets fed a shitload of data and learns to recognise patterns and predict what comes next. The shitload of data it gets fed is basically the internet. All of it. You’re probably ahead of me here, but I’ll lay it out anyway. ChatGPT doesn’t know that the doofus answering a forum post without a clue in the world is talking out of their arse. It still gets sucked up and added to the training data. Earlier on, I said I use ChatGPT occasionally to generate some text for the website. There are various estimates bandied around suggesting that anywhere from 40% to 60% of the content on the internet right now is generated by AI in some way or another. So it doesn’t take a genius to recognise the snake eating its tail. If these AI platforms are sucking in data that was itself artificially generated (with little to no fact-checking, I’ll add) just to output some derivative text that in turn gets hoovered up, it doesn’t take long for the ‘actual’ knowledge on the internet to get bullied out of the way. It’s a self-perpetuating mess where unchecked and unvalidated content becomes pervasive. I think most of us are used to jumping online to check things, details we’ve forgotten, methods of getting things done, and just assuming that what we’re reading is correct and factual. I don’t think we can do that anymore.
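
If you like, you can watch the rot happen in miniature. This is a crude toy simulation (nothing like how a real model is actually trained, and the numbers are plucked out of the air), but it shows what happens to a pool of knowledge when unchecked copies keep getting fed back in:

```python
import random

# Crude toy simulation of the 'snake eating its tail'. Start with a pool of
# mostly-correct 'facts'. Each generation, derivative content is produced by
# copying from the existing pool, with a small chance of garbling each copied
# fact and no fact-checking. The copies get hoovered straight back in.

random.seed(42)

pool = [True] * 95 + [False] * 5   # True = correct fact, False = nonsense
ERROR_RATE = 0.05                  # chance a copied fact gets garbled

for generation in range(1, 11):
    # Generate derivative content by sampling the current pool...
    derivative = [fact and (random.random() > ERROR_RATE)
                  for fact in random.choices(pool, k=len(pool))]
    # ...and feed it straight back in, unchecked.
    pool.extend(derivative)
    accuracy = sum(pool) / len(pool)
    print(f"generation {generation}: {accuracy:.0%} of the pool is still correct")
```

Every generation, the derivative content dilutes the pool a little more, and nothing ever pulls the nonsense back out. That’s the snake eating its tail.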

So what’s my point? I’m not sure; I’ve kinda forgotten at this point. Well actually, I think it’s that we should all start being a bit more challenging in our interactions online. With so many platforms and outlets to effectively self-publish whatever’s in our heads, it’s no surprise that the quality of the data and content AI platforms are ingesting is hugely varied, and it’s probably safe to say it’s on a downward trajectory. This post is a perfect example: there are no peer reviews, I don’t have an editor, and there’s no mechanism for criticism. I’m just expecting you all to believe me. Since this is largely an opinion piece, that’s not such a big deal. But it could be littered with made-up facts.

So the next time you’re interacting with something online and something smells fishy, call it out. Challenge it. Do your own research.

Don’t let the idiots drag you down.

Peace out. ✌️
