What Is Samsung TRM? New AI Model by Researcher Alexia Jolicoeur-Martineau Beats DeepSeek R1, OpenAI o3-Mini & Google Gemini 2.5 Pro in Reasoning

Samsung TRM was developed under the guidance of Alexia Jolicoeur-Martineau, a senior AI researcher at the Samsung Advanced Institute of Technology. It comes with just 7 million parameters. Despite its small size, TRM is said to outperform major language models, including DeepSeek R1, OpenAI's o3-mini, and Google's Gemini 2.5 Pro, on challenging reasoning tasks, achieving high accuracy on the ARC-AGI benchmarks without complex hierarchical structures or large networks.

Samsung, Alexia Jolicoeur-Martineau (Photo Credits: Wikimedia Commons, LinkedIn)

New Delhi, October 9: Samsung's Advanced Institute of Technology (SAIT) has reportedly unveiled a new AI model called the Tiny Recursion Model (TRM). The Samsung TRM model is said to be extremely small, containing only 7 million parameters, yet it reportedly performs comparably to some of the largest language models (LLMs) available, competing with models 10,000 times its size.

As per reports, Samsung's new open reasoning model, TRM, performs better than models 10,000 times its size, outperforming major language models, including DeepSeek R1, OpenAI's o3-mini, and Google's Gemini 2.5 Pro, on challenging reasoning tests.


What is Samsung TRM?

Samsung's senior AI researcher has introduced TRM, which uses a simpler recursive reasoning method. TRM is said to achieve better generalisation than the Hierarchical Reasoning Model (HRM) while operating as a single tiny network with just two layers. HRM, by contrast, is described as using two compact neural networks that recurse at different frequencies. That approach outperforms large language models on tasks like Sudoku, Maze, and ARC-AGI, even when trained with small models of 27 million parameters. However, while HRM shows that small networks can solve complex problems, it is reportedly not fully understood and may not be fully optimised.
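To make the idea of recursive reasoning with a single tiny network more concrete, here is a minimal, hypothetical sketch of the loop structure described above: an inner loop repeatedly refines a latent reasoning state, and an outer loop uses that state to refine the answer, reusing the same small network throughout. All function names, dimensions, and weight shapes here are illustrative assumptions, not Samsung's actual implementation.

```python
import numpy as np

def tiny_net(x, y, z, W):
    # One pass through a tiny two-layer MLP: combines the input x,
    # the current answer y, and the latent state z into a new latent state.
    h = np.tanh(W["in"] @ np.concatenate([x, y, z]))
    return np.tanh(W["out"] @ h)

def trm_style_recursion(x, W, n_latent=6, n_outer=3, dim=8):
    # Illustrative TRM-style loop: the outer loop refines the answer y,
    # while the inner loop refines the latent reasoning state z, calling
    # the same tiny network each time instead of adding more layers.
    y = np.zeros(dim)
    z = np.zeros(dim)
    for _ in range(n_outer):
        for _ in range(n_latent):
            z = tiny_net(x, y, z, W)          # refine latent state
        y = np.tanh(W["ans"] @ np.concatenate([z, y]))  # refine answer
    return y

rng = np.random.default_rng(0)
dim, hidden = 8, 16
W = {
    "in": rng.standard_normal((hidden, 3 * dim)) * 0.1,
    "out": rng.standard_normal((dim, hidden)) * 0.1,
    "ans": rng.standard_normal((dim, 2 * dim)) * 0.1,
}
answer = trm_style_recursion(rng.standard_normal(dim), W)
print(answer.shape)
```

The point of the sketch is that extra reasoning depth comes from repeating the same tiny network in a loop, not from stacking more parameters, which is how a 7-million-parameter model can spend more computation per problem.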

As per a paper submitted to arXiv.org, the TRM model with 7 million parameters achieves 45% test accuracy on ARC-AGI-1 and 8% on ARC-AGI-2. This performance is said to surpass many large language models, including DeepSeek R1, OpenAI o3-mini, and Google Gemini 2.5 Pro, despite TRM having less than 0.01% of their parameter count. The arXiv paper noted, "Contrary to HRM, TRM requires no complex mathematical theorem, hierarchy, nor biological arguments."


TruLY Score 3 – Believable; Needs Further Research | On a trust scale of 0-5, this article has scored 3 on LatestLY. The article appears believable but may need additional verification. It is based on reporting from news websites or verified journalists (arXiv.org) but lacks supporting official confirmation. Readers are advised to treat the information as credible and continue to follow up for updates or confirmation.

(The above story first appeared on LatestLY on Oct 09, 2025 11:10 AM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).
