
The End of SEO As We Know It


Come to SEO Week: https://seoweek.org

4/28 - 5/1

Michael King

April 04, 2025

Transcript

  1. 1

  2. 4

  3. 5

  4. 6 The State of the Art and Science Some highlights

    as to why SEO as we know it is over.
  5. 7 Traffic is Going Down and It’s Not Coming

    Back This is one of the most universally beloved UGC sites. Although average rankings fluctuated only minimally and have since recovered, the site sees 1.2 million fewer clicks than it previously saw at that same average position.
  6. 8 Wikipedia is Feeling It Too Wikipedia has seen

    a loss of 1.6 billion pageviews since the launch of AI Overviews.
  7. 12 AI Mode Means the Death of 10 Blue

    Links. In addition to AI Overviews, Google has more recently rolled out a version of the SERP called “AI Mode” where they are giving you all the information you need with very limited search results. This is effectively DeepResearch in the SERPs.
  8. 13 Delphic Costs Google understands the cognitive overhead of search

    and they want to eliminate it by “doing the Googling for you.” https://arxiv.org/pdf/2308.07525
  9. 14 AI Mode Means the Death of 10 Blue

    Links. Google is Going in the Direction of These More Bespoke Interfaces
  10. 22 Redistribution of Search Volume In 2023 I predicted

    the redistribution of search volume from head terms to the chunky middle and long tail.
  11. 23 “...we are seeing an increase in search usage

    among people who use the new AI overviews as well as increased user satisfaction with the results.”
  12. 24 This is a redistribution of searches that is

    creating a more qualified user when they click through to sites, but…
  13. 29 Seer’s Research Has Proven That to Be True Shout

    out to the Seer Interactive team for their excellent data-driven research into the impact of AI Overviews: https://www.seerinteractive.com/insights/how-ai-overviews-are-impacting-ctr-5-initial-takeaways
  14. 30 This Suggests Search is Sending More Qualified Users

    Less traffic, but more conversions is a potential indication that users are learning more in the SERP and only clicking through when they are ready to buy. AI Overviews are effectively making web referral traffic more efficient.
  15. 31 OpenAI attacks Google on two fronts: 1. Polluting

    the index. 2. Disrupting the modality.
  16. 32 Google Has Four Advantages Over Any Challenger to

    Search: multiple products with 1B+ users, more behavioral data, they invented the tech, and they don’t need Nvidia.
  17. 33 Government Remedies Are the Only True Threat to Google

    Search If the US government issues remedies that destroy Google’s monopoly on Search, that could change the complexion of the competition, but right now none of the competitors, including OpenAI, stands a chance. Google is positioned to chill out and watch everyone burn through all their cash while Google refines its product.
  18. 34 Remedies Could Also Include Breaking off Chrome or

    Android This would be somewhat meaningless because Chrome and Android are open source projects and Google could continue to steer them by dominating contribution.
  19. 35 Google Made Rank Tracking A Lot More Expensive

    Google finally made it so you have to render the page to get rankings data. This is likely a move to stop conversational search tools from scraping Google, but the second order effect is the brief impact it had on the SEO community.
  20. 36 We Know More About Google Than Ever Before

    In the past 18 months, we’ve gotten a deeper understanding of how Google actually works. 1. DOJ Antitrust Trial Testimony 2. Leaked API documentation 3. IR Data Exploit
  21. 38 Mark Also Trained a Classifier Based on the 90

    million queries he scraped in the Google exploit, he trained a classifier that helps you classify your keywords. https://rqpredictor.streamlit.app/
  22. 39 We simply can’t go back to being the

    301 redirect, content, and links people.
  23. 40 It’s Time to Stop Being the Janitors of

    the Internet In the video I talk about everything required to make the Her chatbot and assistant real. We have had all these things since 2016 and they have only gotten significantly better since. It’s time for SEOs to stop being the janitors of the internet.
  24. 41 Google Crowdsourced Improving the Speed of the Web

    Through Us Page speed is virtually a solved problem across the web because Google rallied us to go out there and make it happen. The chart on the left shows the average trend of Core Web Vitals across all sites. Google gave us a deadline and we collectively made the web faster, which made crawling and rendering cheaper and faster for them. We do Google’s dirty work.
  25. 42 In most cases the work we’re doing is

    “engineering” not “optimization.”
  26. 43 Search is a Brand Channel and Always Has Been

    It’s time for us to stop undervaluing our channel
  27. 44 Brand vs Performance Channels Typically, marketing channels are

    segmented into two groups: performance, which expects a user to take an action in response to experiencing content, and brand, which is about raising awareness. Performance expects near-perfect measurement of short-term ROI, while brand expects long-term impact with limited measurement.
  28. 45 There has never been a time in search

    where every user clicked through to a result. Sure, a percentage of those are unsuccessful sessions, but another percentage find the information they need directly in the SERP without clicking through and take action elsewhere. The CTR Curve Never Added Up to 100%; That Wasn’t Always a Bad Thing!
  29. 46 The User Has Always Learned from the SERP

    Itself A user’s need state can change within the same SERP because they are educated by SERP features and may never need to click through to a website.
  30. 47 Zero-Click Search is Not Necessarily a Bad Thing In

    every other channel, an impression is valuable. In Search, it’s not considered as such, hence the perceived threat of Zero-Click Searches. Users have always learned information and discovered brands from seeing them in the SERP.
  31. 48 Conversational Search Sends Limited Referral Traffic Based on data

    from Semrush only about 30k unique domains are receiving referral traffic from ChatGPT worldwide! For sites that perform well in Organic Search, this traffic is a rounding error and is certainly not offsetting what is being lost from Google.
  32. 50 The Helpful Content Update was about Brand? Source:

    https://moz.com/blog/helpful-content-update-not-what-you-think
  33. 51 Brand Authority Brand Authority™ is a score (1-100) developed

    by Moz that measures the total strength of a brand. Contrasting the BA:DA ratio of tens of thousands of sites in the Moz corpus allowed me to test a theory—that Google's Helpful Content updates heavily leveraged brand signals. It looks like they do! ~ Tom Capper
  34. 52 The Helpful Content Update was about BRAND? Based on

    Tom Capper’s findings, there is a high correlation between the ratio of Brand Authority to Domain Authority and performance in the HCU updates.
  35. 53 Action: Track Brand Visibility Track your rankings, impressions,

    and visibility in SERP features in a separate report to indicate brand health. Specifically, you should track average position, rankings, impressions, and presence in featured snippets, AI Overviews, and People Also Ask.
  36. 54 Use this Dashboard as a Start We’ve created a

    Looker Studio dashboard that combines data from Semrush and Google Search Console to track brand visibility. https://lookerstudio.google.com/u/0/reporting/create?c.reportId=3ad27fb0-dd41-4a6b-a509-7d61895d93a7&r.reportName=iPullRank%20%7C%20SERP%20Feature%20Brand%20Visibility%20Tracker&c.mode=edit
  37. 56 There Are More AIOs Shown Logged In than

    Logged Out Most rank tracking tools are only showing you partial information on AI Overviews because they track them logged out. We’ve seen as much as 60% more AI Overviews logged in.
  38. 57 ZipTie indicates AI Overviews are Showing for 19.04%

    of Queries https://dashboard.ziptie.dev/aio-monitor
  39. 64 All you can do is a before and

    after assessment until Google gives us data in GSC.
  40. 65 Things You Need To Know Find out:

    1. Is there an AI Overview? 2. What’s your position in it? 3. What’s your position in the SERP? 4. CTR and clicks before. 5. CTR and clicks after. Do your analysis before and after May 14, 2024.
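
    A minimal before/after sketch of that analysis, assuming a Google Search Console performance export; the file name and column names below are assumptions about your export, not a fixed format.

    import pandas as pd

    # Compare CTR per query before and after the AI Overviews launch date.
    CUTOFF = pd.Timestamp("2024-05-14")

    # Assumed GSC export with "date", "query", "clicks", "impressions" columns.
    df = pd.read_csv("gsc_performance_export.csv", parse_dates=["date"])
    df["period"] = df["date"].apply(lambda d: "before" if d < CUTOFF else "after")

    totals = (
        df.groupby(["query", "period"])[["clicks", "impressions"]]
          .sum()
          .assign(ctr=lambda x: x["clicks"] / x["impressions"])
    )
    ctr = totals["ctr"].unstack("period")   # columns: after, before
    ctr["delta"] = ctr["after"] - ctr["before"]
    print(ctr.sort_values("delta").head(20))  # queries with the biggest CTR drops
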
  41. 66 Track Where You Appear in Generative Search Companies

    like Profound are popping up with solutions to track how brands are showing up in ChatGPT, Perplexity, Gemini, and AI Overviews. As these platforms grow in usage, there is value in understanding how you appear. Profound is an enterprise solution that also includes bot tracking.
  42. 67 ZipTie is a solid SMB solution and has

    been doing a lot of great thought leadership
  43. 69 How Conversational Search Works Conversational Search is primarily

    built from the concepts of Retrieval Augmented Generation.
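
    To make the RAG concept concrete, here is a minimal sketch of the retrieve-then-generate loop, assuming the sentence-transformers library for embeddings; generate() is a hypothetical stand-in for whatever LLM API you use.

    # Minimal RAG sketch: embed passages, retrieve the closest ones for a query,
    # and stuff them into the prompt.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    passages = [
        "SEO tools such as Ahrefs and Semrush provide keyword research and backlink analysis.",
        "AI Overviews answer many queries directly in the SERP.",
        "Core Web Vitals measure page speed and interactivity.",
    ]
    passage_embeddings = model.encode(passages, convert_to_tensor=True)

    def retrieve(query, k=2):
        query_embedding = model.encode(query, convert_to_tensor=True)
        hits = util.semantic_search(query_embedding, passage_embeddings, top_k=k)[0]
        return [passages[hit["corpus_id"]] for hit in hits]

    def answer(query):
        context = "\n".join(retrieve(query))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
        return generate(prompt)  # hypothetical LLM call
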
  44. 70 Users Shift Back to Queries when Prompting SearchGPT
  45. 71 Different Models of Conversational Search
  46. 73 Crawlers Don’t Render
  47. 74 Systems Don’t Cache
  48. 75 There Are No Guidelines
  49. 76 Cloaking Works Just Fine Again, there are no guidelines

    for ChatGPT. So, who says you can’t cloak?
  50. 77 The Cloaking Use Case GenAI crawlers don’t render,

    so don’t show them the unique parts of your content. Render the unique passages with JS and block the file for genAI user agents.

    # robots.txt
    User-agent: GPTBot
    User-agent: ChatGPT-User
    User-agent: anthropic-ai
    User-agent: ClaudeBot
    User-agent: PerplexityBot
    Disallow: /js/bot-detector.js

    // Simple bot detection and conditional content display
    document.addEventListener("DOMContentLoaded", function() {
      const bots = [
        /ChatGPT/i, /Google-Extended/i, /GPTBot/i, /BingPreview/i, /Bard/i,
        /Anthropic/i, /ClaudeBot/i, /Gemini/i, /PerplexityBot/i, /OpenAI/i,
        /bot/i, /crawler/i, /spider/i, /robot/i, /GenerativeAI/i, /CCBot/i,
        /CommonCrawl/i
      ];
      const userAgent = navigator.userAgent;
      const isBot = bots.some(botRegex => botRegex.test(userAgent));
      if (isBot) {
        // Bot-specific content
        document.body.innerHTML = `
          <h1>Welcome, Bot!</h1>
          <p>This content is tailored specifically for bots and crawlers.</p>
        `;
      } else {
        // User-specific content
        document.body.innerHTML = `
          <h1>Welcome, Human!</h1>
          <p>This content is visible to real human visitors only.</p>
        `;
      }
    });
  51. 80 Google cannot use the detection of generative AI

    content as a signal in isolation.
  52. 81 No Correlation between GenAI Content and Losses “We

    checked that on a random 10k SERPs sample (part of 1M study that comes soon), analyzing 200k documents (top 20) to check if there is anything. It turned out to be 0.017, which is less than the HTML file size or any other factor like that. The red bell is almost perfectly random. To compare, on the same sample, there's a blue one which shows quite a strong correlation with Surfer Content Score. We used our own detector, which is better(detectability and false positives rate) in benchmarks than the one starting with the letter O.” -Michal Suski
  53. 82 How Google May Determine GenAI Leaked information has

    clarified how Google understands content’s value through user interactions.
  54. 84 This is Also a Function of Click Models

    Google Search has expectations of performance for every position. This is a function of what is called a click model. If your content falls below performance expectations for user satisfaction in the click model, it gets demoted.
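
    A toy illustration of the click-model idea, not Google’s actual system; the expected-CTR values and demotion threshold below are made up for illustration.

    # Each position carries an expected CTR; a result whose observed CTR falls
    # far enough below expectation gets flagged for demotion.
    EXPECTED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}  # illustrative only

    def satisfies_click_model(position, clicks, impressions, tolerance=0.5):
        """Return False when observed CTR is below tolerance * expected CTR."""
        observed = clicks / impressions if impressions else 0.0
        expected = EXPECTED_CTR.get(position, 0.02)
        return observed >= tolerance * expected

    # A result at position 2 with a 4% CTR underperforms a 15% expectation badly
    # enough that this toy model would flag it for demotion.
    print(satisfies_click_model(position=2, clicks=40, impressions=1000))  # False
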
  55. 86 Relevance Feedback Based on how the content, and

    content that looks like it, performs, Google will adjust its rankings or decide not to crawl or index it.
  56. 90 Google is Using Passage Indexing to Try to

    Drop the User Into the Right Spot
  57. 91 Use Logical Chunking To Get Users to the

    Information Faster https://www.nngroup.com/articles/in-page-links-content-navigation/
  58. 95 Build Pages that Are Easy to Parse Create

    semantically relevant content, build a table of contents, and drop anchor links throughout the page to help Google understand where the user is meant to go.
  59. 99 SEO Has a Terrible Reputation German researchers did a

    year-long longitudinal study of Google Search and showed that “SEO content” primarily driven by affiliate marketing has made search worse. https://downloads.webis.de/publications/papers/bevendorff_2024a.pdf
  60. 100 SEO Has No Standards
  61. 102 I Don’t Know Who This Guy

    Is, But He’s Wrong (I literally searched on LinkedIn for “it’s just SEO” and grabbed the first post.)
  62. 103 Anyone saying that is operating from fear

    of change, but the whole discussion misses the point.
  63. 106 No One Expects GenAI to Be

    Free We should not be cutting ourselves off from the spend expectations involved in engineering for generative AI.
  64. 108 What Is Relevance Engineering? Relevance Engineering is the

    intersection of information retrieval, user experience, artificial intelligence, content strategy, and digital PR to give visibility in Organic and Conversational Search.
  65. 111 Search Engines Work Based on the

    Vector Space Model Documents and queries are plotted in multidimensional vector space. The closer a document vector is to a query vector, the more relevant it is.
  66. 112 The lexical model counts the presence

    and distribution of words, whereas the semantic model captures meaning. The semantic approach was introduced through an innovation called Word2Vec. This was the huge quantum leap behind Google’s Hummingbird update, and most SEO software has been behind for over a decade. Google Shifted from Lexical to Semantic a Decade Ago
  67. 114 Vector Embeddings = Words Converted to

    Multi-dimensional Coordinates in Vector Space
  68. 115 Relevance is a Function of Cosine

    Similarity When we talk about relevance here, it is determined by how similar the document and query vectors are. This is a quantitative measure, not the qualitative idea we typically associate with relevance.
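
    A short sketch of relevance as cosine similarity between a query vector and document vectors, assuming the sentence-transformers library for the embeddings.

    # Embed a query and some documents, then rank documents by how close their
    # vectors are to the query vector.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    query = "best seo tools for keyword research"
    documents = [
        "Ahrefs and Semrush offer keyword research and backlink analysis.",
        "How to bake sourdough bread at home.",
    ]

    query_vec = model.encode(query)
    for doc, doc_vec in zip(documents, model.encode(documents)):
        print(round(cosine_similarity(query_vec, doc_vec), 3), doc)
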
  69. 117 I’m getting tired of the vagaries

    around appearing in Conversational Search.
  70. 122 Scroll to Text You can capture

    the copy used to inform the AI snapshots by scraping the Scroll to Text copy from the page.
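
    One way to approach that (a sketch, not the only method): parse the #:~:text= fragment from a citation URL and locate the quoted passage in the page copy. It assumes the requests and BeautifulSoup libraries.

    from urllib.parse import unquote, urlsplit
    import requests
    from bs4 import BeautifulSoup

    def extract_text_fragment(url):
        """Return the textStart portion of a #:~:text= fragment, if present."""
        fragment = urlsplit(url).fragment  # e.g. ":~:text=prefix-,start,end"
        if ":~:text=" not in fragment:
            return None
        directive = fragment.split(":~:text=", 1)[1]
        parts = [unquote(p) for p in directive.split(",")]
        # Drop the optional prefix- and -suffix terms; keep the start of the quote.
        core = [p for p in parts if not p.endswith("-") and not p.startswith("-")]
        return core[0] if core else None

    def passage_for(url):
        snippet = extract_text_fragment(url)
        if not snippet:
            return None
        html = requests.get(url, timeout=10).text
        text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
        idx = text.find(snippet)
        return text[idx:idx + 500] if idx >= 0 else None
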
  71. 123 There’s a Nearly Linear Relationship Between

    Fraggle Relevance and AI Overview Appearance (charts: relevance of the chunks to the keyword; relevance to the AI snapshot).
  72. Embrace Structured Data There are three models gaining popularity: 1.

    KG-enhanced LLMs - Language Model uses KG during pre-training and inference 2. LLM-augmented KGs - LLMs do reasoning and completion on KG data 3. Synergized LLMs + KGs - Multilayer system using both at the same time https://arxiv.org/pdf/2306.08302.pdf Source: Unifying Large Language Models and Knowledge Graphs: A Roadmap
  73. 127 Action: Revisit Schema.org and Incorporate Anything Relevant to

    Your Content Historically, we’ve only incorporated structured data that has yielded rich results. Now is the time to use anything that is relevant to your content because generative systems use it all.
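
    For example, a minimal sketch that emits schema.org Article markup as JSON-LD; the property values are placeholders, and the point is that any relevant type or property can be included, not only the ones that earn rich results.

    import json

    # Placeholder values; swap in the types and properties relevant to your content.
    article = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "How Semantic Triples Improve Content Retrieval",
        "author": {"@type": "Person", "name": "Jane Example"},
        "about": [{"@type": "Thing", "name": "Retrieval Augmented Generation"}],
        "datePublished": "2025-04-04",
    }

    print(f'<script type="application/ld+json">{json.dumps(article, indent=2)}</script>')
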
  74. 131 They Share Their Prompts in Their Code

    The GEO team also shared the ChatGPT prompts that help them improve their visibility. You can augment them and put them to work right away. https://github.com/GEO-optim/GEO/blob/main/src/geo_functions.py
  75. 135 Clearly Structure Your Content into Semantic

    Units Chunking in RAG is a function of Passage Indexing. Shorter punchy paragraphs work best. Break down your content into concise paragraphs or sections, each covering a clearly defined topic. This structure helps embedding models generate focused embeddings for each passage. Example (Clear Semantic Units): Instead of a long paragraph: "The best SEO tools provide actionable insights, allowing marketers to optimize content effectively. Tools like Ahrefs and Semrush offer keyword research, backlink analysis, and competitor tracking. These capabilities help websites achieve higher rankings." Use distinct semantic units: Heading: Best SEO Tools for Marketers Passage: SEO tools such as Ahrefs and Semrush provide keyword research, backlink analysis, and competitor tracking. These tools enable marketers to optimize content effectively for higher rankings.
  76. 136 Why Does This Work? Improved Embedding Clarity Shorter

    paragraphs tend to capture distinct semantic concepts or ideas more cleanly, resulting in clearer vector embeddings. Long paragraphs often blend multiple ideas, making embeddings noisier and less specific. More Precise Retrieval Shorter passages improve retrieval accuracy because they focus on a single topic or idea. When the embedding model creates a vector representation, it's easier to match precise user queries. Ideal paragraph length for passage indexing: • Optimal length: 50–150 words • Content: Single topic, tightly focused idea or concept • Structure: Use headings/subheadings to clearly separate sections
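
    A rough sketch that applies the 50–150 word guidance mechanically: split the copy into paragraph chunks and flag anything long enough to blend multiple ideas. The splitting rule and file name are assumptions.

    # Thresholds come from the slide above; the blank-line splitting rule is assumed.
    MIN_WORDS, MAX_WORDS = 50, 150

    def chunk_passages(text):
        """Split on blank lines and report which passages fit the target range."""
        passages = [p.strip() for p in text.split("\n\n") if p.strip()]
        report = []
        for p in passages:
            n = len(p.split())
            if n < MIN_WORDS:
                status = "too short"
            elif n > MAX_WORDS:
                status = "split this"
            else:
                status = "ok"
            report.append((n, status, p[:60] + "..."))
        return report

    for words, status, preview in chunk_passages(open("page_copy.txt").read()):
        print(f"{words:>4} words  {status:<10} {preview}")
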
  77. 137 Embedding models capture semantic relationships best

    when content explicitly outlines relationships. Semantic triples (subject-predicate-object) significantly boost retrieval accuracy and content relevance. Example (Semantic Triple clearly embedded): Instead of vague phrasing: "SEO helps businesses get more traffic." Use clear subject-predicate-object structure: "SEO (subject) increases (predicate) organic traffic (object) for businesses." More explicitly: • Semantic Triple Example: ◦ SEO → boosts → Organic Traffic ◦ Organic Traffic → increases → Business Revenue ◦ Keyword Research → identifies → Search Intent Use Explicit Semantic Triples or Clear Subject-Predicate-Object Patterns
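
    A small sketch that stores the slide’s example triples explicitly and renders them as unambiguous subject-predicate-object sentences.

    from typing import NamedTuple

    class Triple(NamedTuple):
        subject: str
        predicate: str
        obj: str

    triples = [
        Triple("SEO", "boosts", "organic traffic"),
        Triple("Organic traffic", "increases", "business revenue"),
        Triple("Keyword research", "identifies", "search intent"),
    ]

    # Render each triple as a clear subject-predicate-object sentence for the page copy.
    for t in triples:
        print(f"{t.subject} {t.predicate} {t.obj}.")
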
  78. 138 Incorporate Rich Contextual Keywords and Entities

    Explicitly mention closely related keywords, synonyms, or entities to enhance semantic understanding. This increases the chance of being retrieved and accurately cited. Example (Context-rich entities): Instead of: "AI tools can help with SEO." Prefer contextually rich: "AI tools like ChatGPT, Claude, and Google's Gemini can improve SEO workflows by automating keyword research, generating meta descriptions, and analyzing SERP intent."
  79. 139 Provide Unique, Highly Specific, or Exclusive

    Insights Unique content or proprietary data increases the likelihood that your page is retrieved and cited as authoritative in RAG pipelines. Example (Exclusive insight): "Our analysis of 1 million search queries showed that pages optimized using semantic triples achieved a 22% higher retrieval rate in RAG pipelines compared to unstructured text."
  80. 140 Avoid Ambiguity Clearly defined, straightforward sentences

    reduce embedding noise and retrieval errors. Example (Ambiguity avoidance): Ambiguous: "They improved performance." Clear and specific: "Our SEO strategies improved the average organic search ranking by 15% within six months."
  81. 141 Example of Content that is easy to extract

    in a RAG Pipeline How Semantic Triples Improve Content Retrieval in AI Pipelines Semantic triples explicitly represent relationships between concepts, boosting the accuracy of content retrieval systems. What are Semantic Triples? Semantic triples consist of three elements: subject, predicate, and object. For example: • Subject: "SEO" • Predicate: "increases" • Object: "Organic Traffic" Benefits of Semantic Triples in RAG Pipelines: • Clearer semantic embedding • Higher retrieval accuracy • Improved content citation likelihood Key Takeaway: Semantic triples significantly improve your content's performance within Retrieval-Augmented Generation systems by clarifying relationships and increasing citation potential.
  82. 142 The Three Things You Should Do: Topical Clustering, Content

    Pruning, and Embracing Retrieval Augmented Generation. These are the three things we should be doing as content engineers to develop and optimize content that performs.
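
    For the topical clustering piece, a sketch using embeddings and k-means, assuming sentence-transformers and scikit-learn; the page titles and cluster count are placeholders.

    # Embed page titles (or full copy) and group them into topical clusters.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    pages = [
        "How AI Overviews change click-through rates",
        "Keyword research basics for ecommerce",
        "Tracking brand visibility in SERP features",
        "Long-tail keyword strategies for niche sites",
    ]

    embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(pages)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

    for cluster, page in sorted(zip(labels, pages)):
        print(cluster, page)
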
  83. 143 Oh, and Read Up on the Model Context

    Protocol The Model Context Protocol is a framework for integrating external data sources with generative models. Read up on it and use it for grounding your content in your own data. https://www.anthropic.com/news/model-context-protocol
  84. 144 Combine your RAG Pipeline with a Relevance Agent

    When we use generative AI for content, we build it on the component level. When a component is generated, it’s sent to a relevance agent to verify that it meets our relevance requirement. If not, it’s sent back to the generation pipeline to try again.
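
    A sketch of that generate-then-verify loop: generate_component() is a hypothetical stand-in for your generation pipeline, the relevance check uses embedding cosine similarity against the target query, and the threshold and retry count are arbitrary.

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def meets_relevance_requirement(component_text, target_query, threshold=0.75):
        # Compare the generated component to the target query in embedding space.
        vectors = model.encode([component_text, target_query], convert_to_tensor=True)
        return util.cos_sim(vectors[0], vectors[1]).item() >= threshold

    def build_component(brief, target_query, max_attempts=3):
        for attempt in range(max_attempts):
            draft = generate_component(brief)  # hypothetical generation call
            if meets_relevance_requirement(draft, target_query):
                return draft
        return None  # escalate to a human editor if no draft passes
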
  85. 146 We are building the future at SEO Week.

    Come be a part of it: https://seoweek.org
  86. Thank You | Q&A Mike King, Chief Executive Officer, @iPullRank (Award Winning, #GirlDad)

    [email protected] Get your tickets for https://seoweek.org