[ACL 2026 Demo] Fast-MIA: Efficient and Scalable Membership Inference for LLMs
https://arxiv.org/abs/2510.23074
https://github.com/Nikkei/fast-mia
Shotaro Ishihara
May 12, 2026
Transcript
Slide 1. Fast-MIA: Efficient and Scalable Membership Inference for LLMs
Hiromu Takahashi and Shotaro Ishihara
ACL 2026 System Demonstrations
Slide 2. Fast-MIA: Efficient and Scalable
uv run --with vllm python main.py \
  --config config/sample.yaml
1. High-throughput batch inference using vLLM (about 5 times faster for individual methods)
2. Cross-method caching architecture (reduces the total processing time when benchmarking multiple methods)
https://github.com/Nikkei/fast-mia
[Architecture diagram: an LLM served by a vLLM backend for batch inference, with a shared cache reused across methods: LOSS, PPL/zlib, Min-K% Prob, DC-PDD, Lowercase, PAC, ReCaLL, Con-ReCall, SaMIA, ...]
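As background for point 1, the following is a minimal sketch (not the Fast-MIA code itself; the model name and parameters are illustrative) of how per-token log-probabilities for a whole batch of texts can be obtained in a single vLLM call, which is the kind of batched scoring a vLLM backend can exploit:

from vllm import LLM, SamplingParams

# Illustrative model; Fast-MIA's own backend and options may differ.
llm = LLM(model="huggyllama/llama-7b")
params = SamplingParams(
    max_tokens=1,       # we only need prompt scoring, not generation
    prompt_logprobs=0,  # return the log-prob of each prompt token
)

texts = ["candidate member text", "candidate non-member text"]
outputs = llm.generate(texts, params)  # one batched call for all texts

per_text_logprobs = []
for out in outputs:
    # The first entry is None: there is no prediction for the first token.
    lps = [next(iter(d.values())).logprob
           for d in out.prompt_logprobs if d is not None]
    per_text_logprobs.append(lps)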
Slide 3. Membership Inference Attack (MIA) on LLMs
[Diagram: given a text, determine whether it was included in the LLM's pre-training data ("Is this text included?")]
• Calculate the log-likelihood, etc.
• Various methods have been proposed.
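To make the log-likelihood idea concrete, here is a minimal sketch of the simplest MIA score (the LOSS attack): the average negative log-likelihood of the candidate text under the model, where lower values suggest membership. The model name is illustrative and this is not the Fast-MIA implementation:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # illustrative stand-in for the target LLM
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

def loss_score(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # labels=ids yields the LM cross-entropy
    return out.loss.item()            # average negative log-likelihood per token

# Predict "member" when the score falls below a tuned threshold.
print(loss_score("Example sentence whose membership we want to test."))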
Slide 4. Challenges in MIA on LLMs
[Diagram repeated from slide 3: is a given text included in the LLM's pre-training data?]
• Calculate the log-likelihood, etc.
• Various methods have been proposed.
1. Growing computational demands for individual MIA methods.
2. Redundant computation across methods for benchmarking.
Slide 5. We introduce Fast-MIA
Challenges:
1. Growing computational demands for individual MIA methods.
2. Redundant computation across methods for benchmarking.
Our solutions:
1. High-throughput batch inference using vLLM.
2. Cross-method caching architecture.
[Architecture diagram: an LLM served by a vLLM backend for batch inference, with a shared cache reused across methods: LOSS, PPL/zlib, Min-K% Prob, DC-PDD, Lowercase, PAC, ReCaLL, Con-ReCall, SaMIA, ...]
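The caching idea can be illustrated as follows: once the per-token log-probabilities of a text are cached, several MIA scores (LOSS/perplexity, the zlib ratio, Min-K% Prob) can be derived from the same cache entry without calling the model again. This is a hedged sketch, not the Fast-MIA API; the function names are illustrative and the score definitions follow common formulations:

import math
import zlib

def loss(lps):                      # average negative log-likelihood
    return -sum(lps) / len(lps)

def ppl(lps):                       # perplexity
    return math.exp(loss(lps))

def zlib_ratio(text, lps):          # LOSS normalized by zlib-compressed length
    return loss(lps) / len(zlib.compress(text.encode("utf-8")))

def min_k_prob(lps, k=0.2):         # mean log-prob of the k% lowest-prob tokens
    n = max(1, int(len(lps) * k))
    return sum(sorted(lps)[:n]) / n

# The cache maps each text to log-probs computed once by the vLLM backend.
cache = {"example text": [-2.1, -0.4, -3.7, -1.0]}
text, lps = next(iter(cache.items()))
scores = {
    "loss": loss(lps),
    "ppl": ppl(lps),
    "zlib": zlib_ratio(text, lps),
    "min_k%_prob": min_k_prob(lps),
}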
Slide 6. How to Use: https://github.com/Nikkei/fast-mia
uv run --with vllm python main.py \
  --config config/sample.yaml
config/sample.yaml:
model:
  model_id: "huggyllama/llama-30b"
data:
  data_path: "swj0419/WikiMIA"
  format: "huggingface"
  text_length: 32
methods:
  - type: "loss"
Slide 7. AUC Reproducibility and Speed
[Figure: Left: Fast-MIA. Right: Transformers-based implementations.]
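The AUC numbers compared on this slide can in principle be computed as follows; a minimal sketch, assuming member/non-member labels and LOSS-style scores where lower means more likely member (hence the sign flip):

from sklearn.metrics import roc_auc_score

# 1 = member (in pre-training data), 0 = non-member; scores are LOSS values.
labels = [1, 1, 0, 0]
losses = [1.8, 2.0, 3.1, 2.9]

# Lower loss should indicate membership, so negate before computing AUC.
auc = roc_auc_score(labels, [-s for s in losses])
print(f"AUC = {auc:.3f}")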
Slide 8. The cache is working
[Figure: inference time (the number of inferences)]
Slide 9. Contributions Welcome
uv run --with vllm python main.py \
  --config config/sample.yaml
1. High-throughput batch inference using vLLM (about 5 times faster for individual methods)
2. Cross-method caching architecture (reduces the total processing time when benchmarking multiple methods)
https://github.com/Nikkei/fast-mia
[Architecture diagram repeated from slide 2.]