Generative AI is reshaping the entire process of online search and discovery, creating an experience that experts are calling generative search. Where conventional search engines present users with lists of hyperlinks, generative systems offer synthesized, dialogue-like responses, changing how customers find and interact with products and brands.
New tools such as ChatGPT are gradually taking over the role of the traditional online shopping entry point. Instead of typing queries into Google or browsing Amazon, customers now turn to AI assistants for recommendations. This puts mounting pressure on brands: visibility, discoverability, and ultimately sales no longer depend on search rankings alone. A product or service’s path to the customer now depends on how cleanly and reliably a brand’s data integrates with AI systems.
The Rise of Agentic Commerce
Agentic commerce is no longer theoretical; it is already showing up in everyday buying behavior. As Saleor Commerce CXO Dmytri Kleiner explains, “Everybody uses ChatGPT to shop now. You go into ChatGPT and start having a conversation about whatever products you’re looking at. This shift away from browsing pages and toward conversational product discovery is why Agentic Commerce Protocol (ACP) matters.” Kleiner adds that brands must ensure AI agents can access accurate, structured, up-to-date information directly from their commerce systems.
ACP serves as the data bridge between merchants and AI systems, and it allows models like ChatGPT to pull product information directly from a brand’s commerce engine rather than piecing together partial or outdated details from the public web. For customers, that means accurate comparisons, real-time availability, and fewer hallucinations. For brands, it marks the first real standard for making their products “AI-discoverable.”
Saleor Commerce was built with this future in mind. The platform emphasizes structured, relational product data, including ingredient lists, variant relationships, and richer product attributes. This depth matters because an AI assistant can only evaluate two products correctly if those relationships exist inside the brand’s system. Without that structure, the model relies on what Kleiner describes as guesswork based on whatever partial information it can find, which increases the risk of hallucinations.
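As a rough illustration of what that structure buys an assistant, consider the sketch below. The field names and values are assumptions for this example, not Saleor’s actual schema or the ACP specification; the point is that variant relationships and attributes live in the data itself rather than being inferred from marketing copy.

```python
# Illustrative sketch of structured, relational product data.
# Field names are assumptions, not Saleor's schema or the ACP spec.
product_a = {
    "id": "sku-1042",
    "name": "Daily Moisturizer",
    "ingredients": ["aqua", "glycerin", "niacinamide"],
    "attributes": {"skin_type": "sensitive", "size_ml": 50},
    "variants": [
        {"id": "sku-1042-s", "size_ml": 50, "price_usd": 18.0, "in_stock": True},
        {"id": "sku-1042-l", "size_ml": 100, "price_usd": 29.0, "in_stock": False},
    ],
}

product_b = {
    "id": "sku-2077",
    "name": "Hydra Cream",
    "ingredients": ["aqua", "shea butter", "ceramides"],
    "attributes": {"skin_type": "dry", "size_ml": 50},
    "variants": [
        {"id": "sku-2077-s", "size_ml": 50, "price_usd": 22.0, "in_stock": True},
    ],
}

def compare(a: dict, b: dict, attr: str) -> tuple:
    """An assistant can only compare two products on an attribute if both
    records actually carry it; a missing value forces the model to guess."""
    return a["attributes"].get(attr), b["attributes"].get(attr)

print(compare(product_a, product_b, "skin_type"))  # ('sensitive', 'dry')
```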
This is why ACP is quickly becoming the new version of SEO. In the Google era, visibility depended on optimizing what search engines could crawl. In the generative era, visibility depends on the quality and clarity of a brand’s underlying commerce data. Consumers now begin their shopping conversations inside ChatGPT rather than on a webpage, which means brands must show up where the agent operates rather than where the search bar once sent them.
Generative Engine Optimization: A New Kind of Visibility
The rise of AI-driven discovery has created a new discipline that many teams are only beginning to understand. Generative Engine Optimization, sometimes called Answer Engine Optimization, demands a different approach from traditional SEO because large language models do not display links, lists, or page rankings. They absorb content, summarize it, and present only the essentials. The real challenge is keeping a brand visible inside that compressed output.
Fabi AI learned this early. Co-founder Marc Dupuis explains that their articles were being cited by AI models, but their brand was disappearing inside the generated answer. “Our content was being used by the AI chat, but our name Fabi was not appearing,” he notes. The model would summarize the article’s information without attributing the source. That realization changed how his team writes. “You need to be more forward about your brand because the AI is going to extract from the top of the content,” he says. “If you want to be mentioned, you need to make it unmistakably clear from the first lines.”
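One low-tech way to act on that advice is to lint drafts before publishing. The check below is a simple heuristic of our own, not anything Fabi has described: it asks whether the brand name appears in the opening sentences an AI summarizer is most likely to extract.

```python
import re

def brand_in_opening(text: str, brand: str, first_n_sentences: int = 3) -> bool:
    """Heuristic check: does the brand name appear in the opening sentences
    a summarizing model is most likely to lift from?"""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    opening = " ".join(sentences[:first_n_sentences])
    return brand.lower() in opening.lower()

draft = "Fabi is an AI-native BI platform. This guide shows how to evaluate one."
print(brand_in_opening(draft, "Fabi"))  # True
```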
Dupuis also emphasizes that teams must now decide the intent of each piece of content. Bottom-of-funnel content often performs better when written with the AI assistant in mind rather than for human skimming. “Middle and bottom funnel content is increasingly being consumed inside the LLM,” he explains. “If someone is searching for the best AI-native BI solution, they already know what they want. The AI is doing the explaining that a sales rep used to do.” This creates a sharper divide between general thought leadership, which readers consume for interest, and practical instructional content, which AI systems surface for problem solving.
He also stresses the importance of long-tail specificity. LLMs rely on their training data unless they encounter a novel or highly specific query. “If you are solving a niche problem that did not exist in the model’s training window, you have a real advantage,” Dupuis says. “The AI has to search for new information, and if you rank, you become the answer.” In other words, the more practical and specific the content, the more likely it is to be picked up and cited by AI systems rather than overshadowed by older, generalized information already baked into model weights.
To remain visible, brands must prioritize clear brand placement, structured metadata, and especially the clarity of the content’s opening sections. AI assistants rely heavily on headers, first paragraphs, and concise explanations. As Dupuis puts it, the modern challenge is not only to provide the right answer, but to make sure the AI understands who provided it.
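In practice, “structured metadata” typically means schema.org markup embedded in the page as JSON-LD. The sketch below builds a minimal Article object with the brand named in both the author and publisher fields; the values are illustrative, and a real page would embed the serialized JSON in a script tag.

```python
import json

# Minimal JSON-LD sketch using schema.org's Article type; values are illustrative.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Choosing an AI-Native BI Solution",
    "author": {"@type": "Organization", "name": "Fabi"},
    "publisher": {"@type": "Organization", "name": "Fabi"},
    "description": "A practical guide, with the brand named up front.",
}

# Embedded in a page as: <script type="application/ld+json"> ... </script>
print(json.dumps(article_metadata, indent=2))
```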
Data Quality and Scraping in AI Discovery
As AI-driven discovery accelerates, data quality has become a defining factor in visibility and accuracy. Generative models can only reason from the information they receive, and when that information is outdated, inconsistent, or poorly structured, the result is often hallucination or misclassification. This is especially risky in commercial contexts where a single incorrect detail can derail an entire workflow.
This challenge has opened the door for more rigorous approaches to data extraction and validation. franconAI’s Private Discovery Engine blends traditional scraping with tightly controlled AI analysis in order to produce verified, high-intent leads. Founder Benedikt Herlt notes that performance improves most when teams increase both volume and reliability. “The combination of quantity increase and quality increase has led to way more opportunities and also to new deal closures,” he explains. His team only allows the AI to analyze data that has already been validated, because relying on the model to retrieve information introduces too much risk. “If you have ten AI-qualified leads and even two are wrong, the whole system becomes useless,” he says.
Herlt warns that many companies underestimate how easily hallucinations creep in. Even when provided with a correct URL, some models still surface older versions of the page or cached information that no longer reflects reality. He has seen models confidently generate entire sets of details based on outdated copies of a site. To avoid this, franconAI keeps direct AI involvement to a minimum in the early discovery stages. They use traditional scrapers to collect reliable data, confirm it, and only then allow the AI to perform reasoning and qualification. “We provide the AI with validated information instead of letting it search on its own,” he explains. “One hallucinated source can destroy the ROI of the entire workflow.”
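A minimal sketch of that ordering might look like the following. The helper names are hypothetical, and `qualify_with_llm` is a stand-in for whatever model call a team actually uses; what matters is that the model only ever reasons over data that conventional code has already fetched and validated.

```python
import requests

def scrape(url: str):
    """Traditional scraper: fetch the live page, not a cached or remembered copy."""
    resp = requests.get(url, timeout=10)
    return resp.text if resp.status_code == 200 else None

def validate(html) -> bool:
    """Cheap sanity checks before any model sees the data."""
    return html is not None and len(html) > 500  # illustrative threshold

def qualify_with_llm(html: str) -> dict:
    """Stand-in for the team's actual model call; the model reasons over
    validated input only, it never retrieves information on its own."""
    return {"qualified": "pricing" in html.lower()}  # placeholder heuristic

def qualify_lead(url: str):
    html = scrape(url)
    if not validate(html):
        return None  # discard rather than let the model guess
    return qualify_with_llm(html)
```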
Legal constraints also shape the process more than many new companies realize. Herlt stresses that scraping is only allowed when the content is publicly accessible without a login. “You cannot scrape LinkedIn,” he says. “A lot of new AI automation companies fall into that trap because the potential value looks enormous, but it is simply not permitted.” He also advises teams to understand the permissions they grant when using AI-native browsers and research tools, since some have access to browsing history or logged-in environments if a user enables it. In his view, treating these systems as black boxes is a major operational risk.
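Part of that permissions check can be automated. Python’s standard library can read a site’s robots.txt before any crawl; the user agent string below is a made-up example, and passing this check covers crawl etiquette, not the full legal analysis.

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def may_fetch(url: str, user_agent: str = "example-discovery-bot") -> bool:
    """Check the target site's robots.txt before scraping a public page.
    Anything behind a login wall is off limits regardless of this result."""
    parts = urlparse(url)
    robots = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    return robots.can_fetch(user_agent, url)

print(may_fetch("https://example.com/products"))
```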
For brands, the takeaway is clear. Maintaining structured and current product information is no longer optional. It is the only way to ensure that AI platforms are interpreting and representing offerings correctly. The more structured and validated the data, the more accurately the AI can perform, and the more visible the brand becomes in an environment where discovery increasingly begins with a conversational assistant.
Staying Competitive in a Generative Search World
As generative AI continues to change the way people search, shop, and decide, brands must shift their focus from optimizing for algorithms to optimizing for answers. Structured data, conversational context, and clear brand integration are now essential components of visibility in the digital marketplace.
At the same time, concerns are growing about gatekeeping by major AI platforms. With a handful of models mediating discovery, questions arise about fairness, access, and who gets visibility in an AI-curated ecosystem.
Adapting to the generative search era isn’t just a technological upgrade; it’s a strategic necessity. To remain visible and competitive, brands must ensure their data speaks fluently to both humans and machines, positioning themselves within the evolving landscape of conversational commerce.