• AICore provides access to on-device foundation models, starting from Android 14.
• AICore enables Low Rank Adaptation (LoRA) fine-tuning with Gemini Nano.
• AICore takes advantage of new ML hardware, such as the latest Google Tensor TPU and the NPUs in flagship Qualcomm Technologies, Samsung S.LSI and MediaTek silicon.
• Fast: everything is processed on-device.
• Indirect internet access: AICore does not have direct internet access. All internet requests, including model downloads, are routed through the open-source Private Compute Services companion APK.
• Private Compute Core is an open-source, secure environment that is isolated from the rest of the operating system and apps.
• Introduced with Android 12, it isolates sensitive data to perform private computations while ensuring that this data never leaves the device.
Private Compute Core is completely isolated from the rest of the system and from third-party applications, which means that not even Google has access to the data. Only local machine learning APIs, which are strictly controlled, can interact with private data.
• Learn more: bit.ly/private-compute
Use cases:
• Reply: generate relevant responses within a chat.
• Proofreading: correct spelling and grammatical errors.
• Summarization: condense lengthy documents into concise summaries.
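As an illustration of the summarization use case, here is a minimal sketch using the Kotlin GenerativeModel client shown later in this section; the function name, prompt wording and key handling are assumptions for illustration, not part of the original material.

import com.google.ai.client.generativeai.GenerativeModel

// Illustrative sketch only: summarization expressed as a single prompt
suspend fun summarize(document: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash",   // model name follows the example later in this section
        apiKey = BuildConfig.apiKey       // assumes the key is exposed as a build config field
    )
    val prompt = "Condense the following document into a concise summary:\n$document"
    return model.generateContent(prompt).text
}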
• AICore is currently available on Google Pixel 9 series, Pixel 8 series and Samsung Galaxy S24 series devices, as well as the Realme GT 6, Motorola Edge 50 Ultra, Motorola Razr 50 Ultra, Xiaomi 14T/Pro and Xiaomi MIX Flip.
• Supported modalities: AICore currently supports the text modality for Gemini Nano.
import com.google.ai.client.generativeai.GenerativeModel

val generativeModel = GenerativeModel(
    // Specify a Gemini model appropriate for your use case
    modelName = "gemini-1.5-flash",
    // Access your API key as a Build Configuration variable
    apiKey = BuildConfig.apiKey
)
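For a single-turn prompt, generateContent() can be called directly on the configured model. A minimal sketch, assuming the generativeModel instance above; the prompt text and the suspend wrapper are illustrative.

// Minimal sketch: single-turn generation with the model configured above
suspend fun generateOnce(): String? {
    val response = generativeModel.generateContent("Write a short product description for a hiking backpack.")
    return response.text
}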
= "user") { text("Hello, I have 2 dogs in my house.") }, content(role = "model") { text("What would you like to know?") } ) ) coroutineScope.launch { val response = chat.sendMessage("How many paws are in my house?") }
For faster interactions, don't wait for the entire result from the model generation; instead, use streaming to handle partial results. Use generateContentStream() to stream a response.
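A minimal streaming sketch, assuming the generativeModel instance configured earlier; the prompt text is illustrative.

import kotlinx.coroutines.flow.collect

// Stream partial results instead of waiting for the full generation
suspend fun streamResponse() {
    generativeModel.generateContentStream("Tell me a story about a magic backpack.")
        .collect { chunk ->
            // Each emitted chunk carries a partial piece of the generated text
            print(chunk.text)
        }
}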