The AI Revolution Will Not Be Monopolized: How open-source beats economies of scale, even for LLMs

With the latest advancements in Natural Language Processing and Large Language Models (LLMs), and big companies like OpenAI dominating the space, many people wonder: Are we heading further into a black box era with larger and larger models, obscured behind APIs controlled by big tech monopolies?

I don’t think so, and in this talk, I’ll show you why. I’ll dive deeper into the open-source model ecosystem, some common misconceptions about use cases for LLMs in industry, practical real-world examples and how basic principles of software development such as modularity, testability and flexibility still apply. LLMs are a great new tool in our toolkits, but the end goal remains to create a system that does what you want it to do. Explicit is still better than implicit, and composable building blocks still beat huge black boxes.

As ideas develop, we’re seeing more and more ways to use compute efficiently, producing AI systems that are cheaper to run and easier to control. In this talk, I'll share some practical approaches that you can apply today. If you’re trying to build a system that does a particular thing, you don’t need to transform your request into arbitrary language and call into the largest model that understands arbitrary language the best. The people developing those models are telling that story, but the rest of us aren’t obliged to believe them.

Ines Montani

April 05, 2024


Resources

Behind the scenes

https://speakerdeck.com/inesmontani/the-ai-revolution-will-not-be-monopolized-behind-the-scenes

A more in-depth look at the concepts and ideas behind the talk, including academic literature, related experiments and preliminary results for distilled task-specific models.


Transcript

  1. Modern scriptable annotation tool for machine learning developers prodigy.ai PRODIGY

    9k+ 800+ users companies Alex Smith Developer Kim Miller Analyst
  2. WHY OPEN SOURCE? transparent no lock-in up to date programmable

    extensible community-vetted runs in-house easy to get started
  3. WHY OPEN SOURCE? transparent no lock-in up to date programmable

    extensible community-vetted runs in-house easy to get started also free!
  4. task-specific models small, often fast, cheap to run, don’t always

    generalize well, need data to fine-tune OPEN-SOURCE MODELS
  5. encoder models ELECTRA T5 task-specific models small, often fast, cheap

    to run, don’t always generalize well, need data to fine-tune OPEN-SOURCE MODELS
  7. encoder models ELECTRA T5 task-specific models small, often fast, cheap

    to run, don’t always generalize well, need data to fine-tune relatively small and fast, affordable to run, generalize & adapt well, need data to fine-tune OPEN-SOURCE MODELS
  8. encoder models ELECTRA T5 task-specific models small, often fast, cheap

    to run, don’t always generalize well, need data to fine-tune relatively small and fast, affordable to run, generalize & adapt well, need data to fine-tune OPEN-SOURCE MODELS large generative models Falcon MIXTRAL
  9. encoder models ELECTRA T5 task-specific models small, often fast, cheap

    to run, don’t always generalize well, need data to fine-tune relatively small and fast, affordable to run, generalize & adapt well, need data to fine-tune OPEN-SOURCE MODELS large generative models Falcon MIXTRAL very large, often slower, expensive to run, generalize & adapt well, need little to no data
  10. encoder models large generative models ENCODING & DECODING TASKS network

    trained for specific tasks using model to encode input 🔮 model 📖 text vectors 🔮 task model task output 🧬 task network labels
  11. encoder models large generative models ENCODING & DECODING TASKS model

    generates text that can be parsed into task-specific output 📖 text 🔮 model raw output ⚙ parser task output 💬 template prompt network trained for specific tasks using model to encode input 🔮 model 📖 text vectors 🔮 task model task output 🧬 task network labels
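The two task shapes on these slides can be sketched in a few lines. Everything below is a toy stand-in — hypothetical functions and invented data, not a real model or library API — just to make the two data flows concrete:

```python
# Toy sketch of the two task shapes: encoding (text -> vectors -> task
# network) vs. decoding (prompt -> generated text -> parser). Every
# function here is a hypothetical placeholder, not a real API.

# Encoding: a network trained for the task consumes the model's vectors.
def encode(text):
    # stand-in for an encoder model: text -> fixed-size vector
    return [float(len(tok)) for tok in text.split()][:4]

def task_head(vector):
    # stand-in for a task network on top of the encoder: vector -> label
    return "LONG" if sum(vector) > 12 else "SHORT"

# Decoding: a generative model emits text that a parser turns into output.
PROMPT = "List every person name in the text, one per line:\n{text}"

def generate(prompt):
    # stand-in for a large generative model: prompt -> raw text
    return "Alex Smith\nKim Miller"

def parse(raw_output):
    # the parser turns raw model text into structured task output
    return [line.strip() for line in raw_output.splitlines() if line.strip()]

text = "Alex Smith met Kim Miller."
encoded_label = task_head(encode(text))
decoded_names = parse(generate(PROMPT.format(text=text)))
print(encoded_label, decoded_names)
```

Either path ends in the same kind of structured, machine-facing output — which is what makes the two approaches swappable behind a common interface.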
  12. encoder models ELECTRA T5 task-specific models small, often fast, cheap

    to run, don’t always generalize well, need data to fine-tune relatively small and fast, affordable to run, generalize & adapt well, need data to fine-tune OPEN-SOURCE MODELS large generative models Falcon MIXTRAL very large, often slower, expensive to run, generalize & adapt well, need little to no data
  14. output costs OpenAI Google ECONOMIES OF SCALE access to talent,

    compute etc. API request batching high traffic 💧 💧 💧 💧 💧 💧 💧 💧 low traffic batch 💧 💧 💧 💧 💧 💧 💧 💧 …
  15. output costs OpenAI Google you 🤠 ECONOMIES OF SCALE access

    to talent, compute etc. API request batching high traffic 💧 💧 💧 💧 💧 💧 💧 💧 low traffic batch 💧 💧 💧 💧 💧 💧 💧 💧 …
  16. human-facing systems machine-facing models ChatGPT GPT-4 most important differentiation

    is product, not just technology AI PRODUCTS ARE MORE THAN JUST A MODEL
  17. human-facing systems machine-facing models ChatGPT GPT-4 most important differentiation

    is product, not just technology UI / UX marketing customization AI PRODUCTS ARE MORE THAN JUST A MODEL
  18. human-facing systems machine-facing models ChatGPT GPT-4 swappable components based on

    research, impacts are quantifiable most important differentiation is product, not just technology UI / UX marketing customization AI PRODUCTS ARE MORE THAN JUST A MODEL
  19. human-facing systems machine-facing models ChatGPT GPT-4 swappable components based on

    research, impacts are quantifiable most important differentiation is product, not just technology cost speed accuracy latency UI / UX marketing customization AI PRODUCTS ARE MORE THAN JUST A MODEL
  20. human-facing systems machine-facing models ChatGPT GPT-4 swappable components based on

    research, impacts are quantifiable most important differentiation is product, not just technology cost speed accuracy latency UI / UX marketing customization AI PRODUCTS ARE MORE THAN JUST A MODEL But what about the data?
  21. human-facing systems machine-facing models ChatGPT GPT-4 swappable components based on

    research, impacts are quantifiable most important differentiation is product, not just technology cost speed accuracy latency UI / UX marketing customization AI PRODUCTS ARE MORE THAN JUST A MODEL But what about the data? User data is an advantage for product, not the foundation for machine-facing tasks.
  22. human-facing systems machine-facing models ChatGPT GPT-4 swappable components based on

    research, impacts are quantifiable most important differentiation is product, not just technology cost speed accuracy latency UI / UX marketing customization AI PRODUCTS ARE MORE THAN JUST A MODEL But what about the data? User data is an advantage for product, not the foundation for machine-facing tasks. You don’t need specific data to gain general knowledge.
  24. USE CASES IN INDUSTRY predictive tasks 🔖 entity recognition 🔗

    relation extraction 👫 coreference resolution 🧬 grammar & morphology 🎯 semantic parsing 💬 discourse structure 📚 text classification generative tasks 📖 single/multi-doc summarization 🧮 reasoning ✅ problem solving ✍ paraphrasing 🖼 style transfer ⁉ question answering
  25. USE CASES IN INDUSTRY predictive tasks 🔖 entity recognition 🔗

    relation extraction 👫 coreference resolution 🧬 grammar & morphology 🎯 semantic parsing 💬 discourse structure 📚 text classification generative tasks 📖 single/multi-doc summarization 🧮 reasoning ✅ problem solving ✍ paraphrasing 🖼 style transfer ⁉ question answering many industry problems have remained the same, they just changed in scale structured data
  26. supervised learning programming & rules rules or instructions ✍ machine

    learning examples 📝 EVOLUTION OF PROBLEM DEFINITIONS
  27. supervised learning programming & rules rules or instructions ✍ in-context

    learning rules or instructions ✍ machine learning examples 📝 EVOLUTION OF PROBLEM DEFINITIONS
  28. supervised learning prompt engineering programming & rules rules or instructions

    ✍ in-context learning rules or instructions ✍ machine learning examples 📝 EVOLUTION OF PROBLEM DEFINITIONS
  29. supervised learning prompt engineering programming & rules rules or instructions

    ✍ in-context learning rules or instructions ✍ machine learning examples 📝 EVOLUTION OF PROBLEM DEFINITIONS instructions: human-shaped, easy for non-experts, risk of data drift ✍
  30. supervised learning prompt engineering programming & rules rules or instructions

    ✍ in-context learning rules or instructions ✍ machine learning examples 📝 EVOLUTION OF PROBLEM DEFINITIONS instructions: human-shaped, easy for non-experts, risk of data drift ✍ 📝 examples: nuanced and intuitive behaviors, specific to use case, labor-intensive
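The problem definitions above can be contrasted on a single toy task. The regex, examples and prompt below are all invented for illustration, not taken from any real system:

```python
import re

# The same task -- flagging messages that ask for a refund -- expressed
# under three of the problem definitions above.

# 1) Programming & rules: the behavior is written down explicitly.
def rule_based(text):
    return bool(re.search(r"\b(refund|money back)\b", text, re.I))

# 2) Supervised learning: the behavior is specified through examples
# (here just stored; a real system would train a model on them).
examples = [
    ("I want my money back", True),
    ("Thanks, great product!", False),
]

# 3) In-context learning / prompt engineering: the behavior is specified
# through instructions sent to a general-purpose model at runtime.
prompt = (
    "Answer yes or no: is this message asking for a refund?\n"
    "Message: {text}"
)

print(rule_based("Please refund my order"))
```

The trade-offs on the slide map directly onto these shapes: the instruction is easy for non-experts to write but drifts as the data does, while the examples capture nuance but are labor-intensive to collect.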
  32. prompting large general- purpose model continuous evaluation baseline domain- specific

    data WORKFLOW EXAMPLE iterative model-assisted data annotation prodigy.ai
  34. prompting large general- purpose model distilled task- specific model transfer

    learning continuous evaluation baseline domain- specific data WORKFLOW EXAMPLE iterative model-assisted data annotation prodigy.ai
  35. prompting large general- purpose model distilled task- specific model transfer

    learning continuous evaluation baseline distilled model domain- specific data WORKFLOW EXAMPLE iterative model-assisted data annotation prodigy.ai
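A minimal sketch of this workflow, with every function a hypothetical stand-in rather than a real Prodigy or model API: the large general-purpose model bootstraps annotations, a human corrects them, and a small task-specific model is distilled and evaluated against that corrected data.

```python
# Hypothetical sketch of the annotate -> correct -> distill -> evaluate
# loop from these slides. All functions and data are invented stand-ins.

def llm_annotate(texts):
    # stand-in: prompt a large general-purpose model for candidate labels
    return [{"text": t, "label": "POSITIVE"} for t in texts]

def human_review(annotations):
    # stand-in: an annotator accepts or corrects each suggestion
    for ann in annotations:
        if "bad" in ann["text"]:
            ann["label"] = "NEGATIVE"
    return annotations

def train_small_model(data):
    # stand-in: distill a small task-specific model from the corrected data
    negatives = {d["text"] for d in data if d["label"] == "NEGATIVE"}
    return lambda text: "NEGATIVE" if text in negatives else "POSITIVE"

texts = ["good value", "bad battery life"]
gold = human_review(llm_annotate(texts))   # model-assisted annotation
model = train_small_model(gold)            # distilled task-specific model
accuracy = sum(model(d["text"]) == d["label"] for d in gold) / len(gold)
print(accuracy)                            # continuous evaluation vs. baseline
```

The point of the loop is that the expensive prompted model only runs at annotation time; what ships is the small distilled model, re-evaluated whenever the domain-specific data changes.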
  36. processing pipeline in production swap, replace and mix components github.com/explosion/spacy-llm

    prompt model & transform output to structured data processing pipeline prototype PROTOTYPE TO PRODUCTION
  38. processing pipeline in production swap, replace and mix components github.com/explosion/spacy-llm

    prompt model & transform output to structured data structured machine-facing Doc object processing pipeline prototype PROTOTYPE TO PRODUCTION
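The spacy-llm library linked on the slide configures the LLM component declaratively, which is what makes components swappable. A sketch of such a config block, assuming registered names like `spacy.NER.v3` and `spacy.GPT-3-5.v2` — the exact names vary across spacy-llm versions, so check the project's documentation:

```ini
[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "ORG", "PRODUCT"]

[components.llm.model]
@llm_models = "spacy.GPT-3-5.v2"
```

Swapping the prompted model for an open-source or distilled task-specific component then means editing this block, not the surrounding pipeline code — the rest of the pipeline keeps consuming the same structured `Doc` object.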
  39. modular testable flexible predictable transparent no lock-in programmable extensible run

    in-house cheap to run DISTILLED TASK-SPECIFIC COMPONENTS
  41. The Zen of Python >>> import this Beautiful is better

    than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess.
  42. The Zen of Python >>> import this don’t abandon what’s

    made software successful Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess.
  43. control resource regulation compounding economies of scale network effects

    MONOPOLY STRATEGIES human-facing products vs. machine-facing models
  45. THE AI REVOLUTION WON’T BE MONOPOLIZED The software industry does

    not run on secret sauce. Knowledge gets shared and published. Secrets won’t give anyone a monopoly.
  46. THE AI REVOLUTION WON’T BE MONOPOLIZED The software industry does

    not run on secret sauce. Knowledge gets shared and published. Secrets won’t give anyone a monopoly. Usage data is great for improving a product, but it doesn’t generalize. Data won’t give anyone a monopoly.
  47. THE AI REVOLUTION WON’T BE MONOPOLIZED The software industry does

    not run on secret sauce. Knowledge gets shared and published. Secrets won’t give anyone a monopoly. LLMs can be one part of a product or process, and swapped for different approaches. Interoperability is the opposite of monopoly. Usage data is great for improving a product, but it doesn’t generalize. Data won’t give anyone a monopoly.
  48. THE AI REVOLUTION WON’T BE MONOPOLIZED The software industry does

    not run on secret sauce. Knowledge gets shared and published. Secrets won’t give anyone a monopoly. LLMs can be one part of a product or process, and swapped for different approaches. Interoperability is the opposite of monopoly. Usage data is great for improving a product, but it doesn’t generalize. Data won’t give anyone a monopoly. Regulation could give someone a monopoly, if we let it. It should focus on products and actions, not components.