AI & Tech News: GPT-5 Release, Google's LLM Breakthrough, and Quantum Computing Advances

How Google AI’s New Method Reduces LLM Training Data by 10,000x

Google Research has unveiled a method for fine-tuning large language models (LLMs) that reduces the required training data by up to 10,000 times. The approach uses active learning to focus expert labeling on the most informative examples, especially “boundary cases” where model uncertainty is highest. In experiments with Gemini Nano models, the technique matched or surpassed the quality of models trained on 100,000 random labels while using as few as 250 to 450 targeted examples. This development promises to make AI model development significantly leaner, more agile, and more cost-effective. ...
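The core idea of uncertainty-based active learning can be sketched in a few lines. This is a hypothetical illustration, not Google’s actual implementation: given a model’s predicted probabilities for an unlabeled pool, it selects the k “boundary cases” whose predictions sit closest to 0.5 and routes only those to expert labelers.

```python
# Hypothetical sketch of uncertainty-based active learning selection
# (illustrative only; not Google's published method).

def select_boundary_cases(probs, k):
    """Return indices of the k examples whose predicted probability
    is closest to 0.5, i.e. where the model is least certain."""
    # Map each probability to an uncertainty score in [0, 1],
    # where 1.0 means maximally uncertain (p == 0.5).
    uncertainty = [1.0 - abs(p - 0.5) * 2 for p in probs]
    ranked = sorted(range(len(probs)), key=lambda i: uncertainty[i], reverse=True)
    return ranked[:k]

# Toy unlabeled pool: predicted probabilities from the current model.
pool = [0.02, 0.48, 0.97, 0.55, 0.91, 0.50]
print(select_boundary_cases(pool, 3))  # → [5, 1, 3]
```

Only the selected indices would be sent for expert labeling; the model is then fine-tuned on those labels and the loop repeats, which is how a few hundred targeted examples can stand in for tens of thousands of random ones.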

August 11, 2025 · 6 min · 1127 words · Omer