
How the Future of Search Works

Michael King

November 09, 2023

Transcript

  1. How the Future of Search Works Michael King iPullRank

    Speakerdeck.com/ipullrank @iPullRank
  2. This is a huge problem because SEO software still operates on the lexical model.
  3. At I/O Google Announced a Dramatic Change to Search

    The experimental “Search Generative Experience” brings generative AI to the SERPs and significantly changes Google’s UX.
  4. Queries are Longer and the Featured Snippet is Bigger

    1. The query is more natural language and no longer Orwellian Newspeak. It can be much longer than the 32 words queries have historically been limited to. 2. The Featured Snippet has become the “AI snapshot,” which takes three results and builds a summary. 3. Users can also ask follow-up questions in conversational mode.
  5. Sundar is All In

    In Sundar’s recent press run, he keeps saying that Google will be doubling down on SGE, so it’s going to be a thing moving forward.
  6. The Search Demand Curve Will Shift

    With the change in the level of natural language query that Google can support, we’re going to see far fewer head terms and far more long-tail terms.
  7. The CTR Model Will Change

    With the search results being pushed down by the AI snapshot experience, what is considered #1 will change. We should also expect that any organic result will be clicked less and that standard organic CTR will drop dramatically. However, this will likely yield query displacement.
  8. Rank Tracking Will Be More Complex

    As an industry, we’ll need to decide what is considered the #1 result. Based on this screenshot, positions 1–3 are now the citations for the AI snapshot and #4 is below it. However, the AI snapshot loads on the client side, so rank tracking tools will need to change their approach; a rendering sketch follows.
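
    Because the snapshot only exists after JavaScript executes, a tracker has to render the SERP rather than fetch raw HTML. A minimal sketch with Playwright; the query URL and the fixed wait are illustrative assumptions, since Google’s SGE markup and timing are not stable:

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://www.google.com/search?q=example+query")
        page.wait_for_timeout(3000)  # give the client-side AI snapshot time to render
        html = page.content()        # now includes the rendered snapshot markup
        browser.close()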
  9. Context Windows Will Yield More Personalized Results

    SGE maintains the context window of the previous search in the journey as the user goes through predefined follow-up questions. This will need to drive the composition of pages to ensure they remain in the consideration set for subsequent results.
  10. This is Called “Retrieval Augmented Generation”

    Neeva (RIP), Bing, and now Google’s Search Generative Experience all pull documents based on search queries and feed them to a language model to generate a response; a toy sketch of the loop follows.
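
    Roughly, the loop is retrieve, then generate. In this sketch the term-overlap scoring, the corpus, and the prompt template are all illustrative assumptions, not any engine’s actual implementation:

    def retrieve(query, corpus, k=3):
        """Score each document by naive term overlap with the query; return the top k."""
        terms = set(query.lower().split())
        ranked = sorted(corpus, key=lambda doc: len(terms & set(doc.lower().split())), reverse=True)
        return ranked[:k]

    def build_prompt(query, passages):
        """Ground the generation step in the retrieved passages."""
        context = "\n".join(passages)
        return f"Answer using only this context:\n{context}\n\nQuery: {query}"

    corpus = [
        "SGE builds an AI snapshot that summarizes three results.",
        "Featured snippets quote a single passage from a single page.",
        "Rank trackers scrape search results pages.",
    ]
    query = "What is the AI snapshot?"
    print(build_prompt(query, retrieve(query, corpus)))  # a real system sends this prompt to the LLM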
  11. Google’s Initial Version of this is called Retrieval-Augmented Language Model Pre-Training (REALM), from 2020

    REALM identifies full documents, finds the most relevant passages in each, and returns the single most relevant one for information extraction.
  12. DeepMind Followed Up with the Retrieval-Enhanced Transformer (RETRO)

    RETRO is a language model that combines a large text database with a transformer architecture to improve performance and reduce the number of parameters required. It achieves performance comparable to state-of-the-art language models such as GPT-3 and Jurassic-1 while using 25x fewer parameters.
  13. Google’s Later Innovation: Retrofit Attribution using Research and Revision (RARR)

    RARR does not generate text from scratch. Instead, it retrieves a set of candidate passages from a corpus and then reranks them to select the best passage for the given task; a toy reranking sketch follows.
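
    The shape of that rerank step, with simple Jaccard overlap standing in for RARR’s learned reranking model:

    def jaccard(a, b):
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    def best_passage(task, candidates):
        """Rerank candidate passages and return the single best one for the task."""
        return max(candidates, key=lambda passage: jaccard(task, passage))

    candidates = [
        "PaLM 2 is Google's latest released large language model.",
        "MUM is the Multitask Unified Model announced in 2021.",
    ]
    print(best_passage("what is PaLM 2", candidates))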
  14. SGE is Built from REALM/RETRO/RARR + PaLM 2 and MUM

    MUM is the Multitask Unified Model that Google announced in 2021 as a way to do retrieval augmented generation. PaLM 2 is their latest (released) state-of-the-art large language model. The functionality from REALM, RETRO, and RARR is also rolled into this.
  15. If You Want More Technical Detail, Check Out This Paper

    https://arxiv.org/pdf/2002.08909.pdf
  16. AvesAPI + Llama Index + ChatGPT = Raggle

    AvesAPI provides the rankings data, Llama Index handles the vector index and operations, and as for ChatGPT, clearly you know what it does.
  17. It’s pretty simple

    # Assumes `documents` (loaded earlier, e.g. via SimpleDirectoryReader) and `query` are defined
    from llama_index import VectorStoreIndex
    from llama_index.query_engine import CitationQueryEngine

    # Make an index from your documents
    index = VectorStoreIndex.from_documents(documents)

    # Set up your index for citations
    query_engine = CitationQueryEngine.from_args(
        index,
        # indicate how many document chunks it should return
        similarity_top_k=5,
        # here we can control how granular citation sources are; the default is 512
        citation_chunk_size=155,
    )

    response = query_engine.query("Answer the following query in 150 words: " + query)
  18. Limitations of my POC

    1. It doesn’t do follow-up questions 2. It’s not responsive 3. It only does the informational snippet
  19. Search Works Based on the Vector Space Model

    Let’s go back to the vector space model again. This model is a lot stronger in the neural network environment because Google can capture more meaning in the vector representations; a toy similarity sketch follows.
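
    The model in miniature: queries and documents become vectors, and relevance is the cosine of the angle between them. The three-dimensional vectors here are made up; real embeddings have hundreds of dimensions:

    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norms if norms else 0.0

    query_vec = [0.9, 0.1, 0.3]  # hypothetical embedding of a query
    doc_vec = [0.8, 0.2, 0.4]    # hypothetical embedding of a document
    print(cosine(query_vec, doc_vec))  # closer to 1.0 means closer in meaning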
  20. Dense Retrieval

    You remember “passage ranking”? This is built on the concept of dense retrieval, wherein more embeddings represent more of the query and the document to uncover deeper meaning; a sketch follows.
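
    Passage-level dense retrieval, sketched: every passage gets its own embedding, and a document is scored by its best passage. The letter-frequency embed() is a stand-in for a trained neural encoder:

    import math

    def embed(text):
        """Toy embedding: letter-frequency vector (a real system uses a neural encoder)."""
        vec = [0.0] * 26
        for ch in text.lower():
            if ch.isascii() and ch.isalpha():
                vec[ord(ch) - ord("a")] += 1.0
        return vec

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norms if norms else 0.0

    def score_document(query, passages):
        """The document's relevance is the score of its most relevant passage."""
        q = embed(query)
        return max(cosine(q, embed(p)) for p in passages)

    passages = [
        "SGE builds an AI snapshot from three cited results.",
        "Users can ask follow-up questions in conversational mode.",
    ]
    print(score_document("what is the AI snapshot", passages))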
  21. It’s All About the Chunks

    So use Llama Index to determine your chunks and improve the similarity to the query; a chunk-size sketch follows.
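
    A hedged sketch of steering chunk size, using the ServiceContext API from the Llama Index versions current in late 2023 (newer releases moved these settings elsewhere); the "data" directory is a placeholder for your pages:

    from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader("data").load_data()

    # Smaller chunks give each embedding a tighter passage to represent, which can
    # improve the similarity between a specific query and the right chunk
    service_context = ServiceContext.from_defaults(chunk_size=256, chunk_overlap=20)
    index = VectorStoreIndex.from_documents(documents, service_context=service_context)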
  22. What is Mitigation for SGE?

    1. Manage expectations on the impact 2. Understand the keywords under threat 3. Re-prioritize your focus to keywords that are not under threat 4. Optimize the passages for the keywords you want to save (a small triage sketch follows)
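
    One minimal way to act on steps 2 through 4, assuming a keyword export with a hypothetical sge_threat flag (e.g., from an SGE threat report):

    import pandas as pd

    keywords = pd.DataFrame({
        "keyword": ["buy red shoes", "what is sge", "red shoes near me"],
        "sge_threat": [False, True, False],
    })

    reprioritize = keywords[~keywords["sge_threat"]]  # shift focus to these
    optimize = keywords[keywords["sge_threat"]]       # optimize passages for these
    print(reprioritize)
    print(optimize)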
  23. There’s a Lot of Synergy Between KGs and LLMs

    There are three models gaining popularity: 1. KG-enhanced LLMs: the language model uses a KG during pre-training and inference 2. LLM-augmented KGs: LLMs do reasoning and completion on KG data 3. Synergized LLMs + KGs: a multilayer system using both at the same time. Source: Unifying Large Language Models and Knowledge Graphs: A Roadmap, https://arxiv.org/pdf/2306.08302.pdf
  24. Organizations are Doing RAG with Knowledge Graphs

    • Anyone can feed their data into an LLM as a fine-tuning measure to improve the output. • People are currently using their knowledge graphs to support this; a hedged sketch follows.
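
    One way to wire a knowledge graph into this, sketched with Llama Index’s KnowledgeGraphIndex (API as of late 2023): it extracts (subject, predicate, object) triples from your documents and answers queries against them. Assumes an LLM API key is configured and "data" holds your corpus:

    from llama_index import KnowledgeGraphIndex, SimpleDirectoryReader

    documents = SimpleDirectoryReader("data").load_data()

    kg_index = KnowledgeGraphIndex.from_documents(
        documents,
        max_triplets_per_chunk=2,  # how many triples to extract per text chunk
    )
    response = kg_index.as_query_engine().query("What does this brand sell?")
    print(response)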
  25. The code is not much different

    # Assumes `import advertools as adv` plus the llama_index imports from the earlier slide;
    # the elided step presumably crawls urls_to_crawl to build `documents`
    sitemap_url = "[SITEMAP URL]"
    sitemap = adv.sitemap_to_df(sitemap_url)
    urls_to_crawl = sitemap['loc'].tolist()

    ...

    # Make an index from your documents
    index = VectorStoreIndex.from_documents(documents)

    # Set up your index for citations
    query_engine = CitationQueryEngine.from_args(
        index,
        # indicate how many document chunks it should return
        similarity_top_k=5,
        # here we can control how granular citation sources are; the default is 512
        citation_chunk_size=155,
    )

    response = query_engine.query("YOUR PROMPT HERE")
  26. ChatGPT Responses without RAG vs. with RAG

    RAG yields content that is more likely to be factually correct. Combined with AIPRM’s prompts, you’re able to better counteract the more bland content that is flooding the web.
  27. Fact Verification

    • Although Google has historically said they do not verify facts, LLM + KG integrations make this a possibility, and Google needs to combat the wealth of content being produced with LLMs. So it’s likely they will use this functionality; a toy sketch follows. Source: Fact Checking in Knowledge Graphs by Logical Consistency. Source: FactKG: Fact Verification via Reasoning on Knowledge Graphs
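
    A toy illustration of the idea: a claim expressed as a (subject, predicate, object) triple is checked against the stored triples. The cited papers reason logically over the graph; an LLM would extract the triple from free text first:

    knowledge_graph = {
        ("PaLM 2", "developed_by", "Google"),
        ("RETRO", "developed_by", "DeepMind"),
    }

    def verify(claim):
        """A claim is supported only if its triple exists in the graph."""
        return claim in knowledge_graph

    print(verify(("PaLM 2", "developed_by", "Google")))  # True: supported
    print(verify(("RETRO", "developed_by", "Google")))   # False: not supported by the graph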
  28. Thank You | Q&A

    Mike King, Chief Content Goblin @iPullRank | Award Winning, #GirlDad | [email protected]
    Get Your SGE Threat Report: https://ipullrank.com/sge-report
    Play with Raggle: https://www.raggle.net
    Download the Slides: https://speakerdeck.com/ipullrank