Tons of future consequences. AI is great, don’t get me wrong. We use it every day and it’s one of the best tools out there to speed up processes, test new angles, and push work further.
The problem comes when people lean on AI like it’s the ultimate truth. These models are built to sound confident and agreeable, even when they’re wrong. If you don’t question the output with some real skepticism, you’re basically setting yourself up for bad decisions.
And beyond leaders cutting digital/marketing roles in favor of AI, the bigger headache is how clients are already over-relying on it. Half the time we hear, “ChatGPT said this about SEO/Digital Marketing,” and it’s blatantly wrong. Managing that ignorance day-to-day is frustrating, because it gives way more credibility to a machine that *sounds* smart than to the people who actually know what they’re talking about. That “computer is smart, people are dumb” mindset is dangerous, and it’s exactly why leaning too hard on AI has long-term consequences.
Don’t get me wrong, AI has huge strengths. But people are, by nature, lazy. And that’s okay. Honestly, some of the best programmers *are* the laziest ones, because laziness pushes efficiency. The issue is when that laziness turns into blind reliance and the critical-thinking step gets skipped entirely. That’s where AI stops being a tool and starts being a crutch.
Small rant, but hopefully it lines up with the kind of discussion you’re aiming for. Would love to hear your thoughts too!