I realised ChatGPT is quietly acting like a recommendation engine
I used to think of ChatGPT and similar models as “fancy autocomplete for text”. Then a weird pattern started showing up in my own projects.
New users replied to onboarding emails with things like:
“I found you through ChatGPT.”
“I asked an AI assistant what to use and your site came up.”

They weren’t searching in the classic sense. They opened an AI, typed something like “best [tool/service] for [niche] in [country/city]”, and just trusted the answer. No ad clicks, no listicles, just whatever the model decided to name first.
From that moment I stopped looking at these models as just Q&A and started seeing them as a kind of recommendation engine. If that’s true, then there’s a new ranking problem: when does the model decide to mention you, when does it ignore you, and why does the answer change when you tweak the wording or the location?
That question pushed me into a rabbit hole and I ended up building a small project called aioscop. The idea is to track when assistants like ChatGPT or Gemini actually recommend a brand versus its competitors, how that changes by prompt and by GEO, and then use that as a starting point to adjust content so the model is more likely to include you.
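To make the tracking idea concrete, here's a minimal sketch of the measurement step: given a batch of assistant responses to the same prompt, count how often each brand is actually named. This assumes you've already collected the response texts somehow (API calls, exports, whatever); the brand names and responses below are made-up placeholders, and `share_of_voice` is just a name I picked, not part of any real tool.

```python
import re
from collections import Counter

def brand_mentions(response: str, brands: list[str]) -> Counter:
    """Count case-insensitive whole-word mentions of each brand in one response."""
    counts = Counter()
    for brand in brands:
        pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
        counts[brand] = len(pattern.findall(response))
    return counts

def share_of_voice(responses: list[str], brands: list[str]) -> dict[str, float]:
    """Fraction of responses in which each brand appears at least once."""
    appearances = Counter()
    for r in responses:
        mentions = brand_mentions(r, brands)
        for b in brands:
            if mentions[b] > 0:
                appearances[b] += 1
    return {b: appearances[b] / len(responses) for b in brands}

# Fictional responses to one prompt, e.g. "best invoicing tool for freelancers"
responses = [
    "For freelancers, AcmeBooks and LedgerLite are popular choices.",
    "You could try LedgerLite; it handles recurring invoices well.",
    "Many freelancers use LedgerLite or InvoiceHut.",
]
print(share_of_voice(responses, ["AcmeBooks", "LedgerLite", "InvoiceHut"]))
```

Run the same calculation per prompt variant and per location, and you get a rough picture of when a brand shows up versus disappears, which is the signal worth digging into.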
I’m less interested in “how do I game this” and more in “what signals are these models picking up that make one brand show up and another disappear”. If users are treating AI answers as trusted recommendations, that feels like an important behaviour to understand.
Curious how people here think about that layer. Are we heading toward a world where “AI recommendation optimisation” becomes a thing, or is this just an emergent side effect we should mostly ignore?
Project, if anyone’s curious: aioscop com