Google has introduced VaultGemma, its largest open model trained from scratch with differential privacy (DP) and the most capable DP-trained LLM to date. The 1B-parameter model balances compute, privacy, and utility using newly established DP scaling laws. By adding calibrated noise during training, VaultGemma reduces memorisation risks and provides strong sequence-level privacy guarantees. Despite higher compute costs and training challenges, it achieves performance comparable to non-private models from about five years ago. Google has released the model's weights on Hugging Face and Kaggle, along with a technical report, to advance private AI research.
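The "calibrated noise during training" refers to differentially private training of the DP-SGD kind. The sketch below is a minimal, hypothetical illustration of that idea, not VaultGemma's actual training code: per-example gradients are clipped to a norm bound and Gaussian noise is added before the averaged update is applied. The toy model, clip norm, and noise multiplier are illustrative placeholders.

```python
# Minimal sketch of a DP-SGD-style update (illustrative only; not VaultGemma's code).
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)             # toy linear model weights
clip_norm = 1.0             # per-example gradient clipping bound C (placeholder)
noise_multiplier = 1.1      # sigma, calibrated from the target (epsilon, delta) (placeholder)
lr = 0.1

def per_example_grad(x, y, w):
    # Gradient of squared error for a single example.
    return 2.0 * (w @ x - y) * x

# A small synthetic batch.
X = rng.normal(size=(8, 4))
Y = rng.normal(size=8)

# 1. Clip each example's gradient to norm at most clip_norm.
grads = []
for x, y in zip(X, Y):
    g = per_example_grad(x, y, w)
    g = g / max(1.0, np.linalg.norm(g) / clip_norm)
    grads.append(g)

# 2. Sum the clipped gradients, add Gaussian noise scaled to the clipping
#    bound, then average and apply the update.
noisy_sum = np.sum(grads, axis=0) + rng.normal(
    scale=noise_multiplier * clip_norm, size=w.shape
)
w -= lr * noisy_sum / len(X)
print(w)
```

Because each example's influence on the update is bounded by the clip norm and masked by the added noise, the trained weights reveal correspondingly little about any single training sequence, which is the source of the sequence-level privacy guarantee.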
Google Introduces VaultGemma, A Differentially Private LLM
Introducing VaultGemma, the largest open model trained from scratch with differential privacy. Read about our new research on scaling laws for differentially private language models, download the weights, & check out the technical report on the blog →https://t.co/tvgseWTcyP pic.twitter.com/caQyttLCnS
— Google Research (@GoogleResearch) September 12, 2025