Think running LLMs requires massive data centers? Think again. A new generation of efficient models is making on-device AI a reality, even on Android.
This session cuts through the hype to explore the practicalities of "Edge Inference." Learn why you might ditch cloud API calls for on-device processing: lower latency, offline availability, stronger privacy, and no per-token costs.
We'll compare Google's AI Edge SDK and the MediaPipe LLM Inference API, highlighting the trade-offs Android developers face today. Understand the current limitations, the surprising capabilities of modern small models, and why experimenting with on-device LLMs now could give your app a competitive edge tomorrow.
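To give a concrete sense of what "on-device" looks like in code, here is a minimal Kotlin sketch of single-shot generation with the MediaPipe LLM Inference API. The model path and option values are placeholders, and the exact builder surface may vary across tasks-genai releases:

```kotlin
// Minimal sketch: on-device text generation via MediaPipe LLM Inference.
// Assumes a compatible model (e.g. Gemma) has already been pushed to the
// device; the path below is a placeholder.
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun runOnDevicePrompt(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.bin") // placeholder path
        .setMaxTokens(512) // cap on combined input + output tokens
        .build()

    // Loading the model is heavyweight, so reuse this instance across calls.
    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse(prompt) // blocking, single-shot generation
}
```

By contrast, the AI Edge SDK targets Gemini Nano running in Android's AICore service on supported devices, trading the bring-your-own-model flexibility above for a system-managed model.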