Intel hedges its bet for High-NA EUV with the 14A process node — an alternate Low-NA technique has identical yield and design rules

Intel explained the rationale behind its High-NA EUV strategy at its Intel Foundry Direct 2025 conference this week. Despite persistent questions about cost-effectiveness, Intel has championed the use of the new High-NA EUV chipmaking tool for its forthcoming 14A process. However, the company has not yet fully committed to using the new tool in production…

Intel Foundry Roadmap Update – New 18A-PT variant that enables 3D die stacking, 14A process node enablement

Intel’s new CEO Lip-Bu Tan took to the stage at the company’s Intel Foundry Direct 2025 event here in San Jose, California, to outline progress on its foundry initiative. Tan announced that the company is now engaging lead customers for its upcoming 14A process node (1.4nm equivalent), the follow-on generation of its 18A…

TSMC’s 2nm N2 process node enters production this year, A16 and N2P arriving next year

TSMC is on track to start high-volume production of chips on N2 (2nm-class), its first production technology that relies on gate-all-around (GAA) nanosheet transistors, in the second half of this year, the company revealed at its North American Technology Symposium 2025. This new node will enable numerous products launching next year, including AMD’s next-generation EPYC…

Process Reinforcement through Implicit Rewards (PRIME): A Scalable Machine Learning Framework for Enhancing Reasoning Capabilities

Reinforcement learning (RL) for large language models (LLMs) has traditionally relied on outcome-based rewards, which provide feedback only on the final output. This reward sparsity makes it difficult to train models on tasks that require multi-step reasoning, such as mathematical problem-solving and programming. Additionally, credit assignment becomes ambiguous, as the model does not get…
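To see why outcome-only rewards make credit assignment hard, consider the minimal Python sketch below. It is not PRIME's actual algorithm; the function names and per-step scores are invented for illustration. It contrasts a sparse end-of-trace reward, shared by every step, with dense per-step scores from a hypothetical process reward model.

```python
# A minimal sketch, not PRIME itself: it contrasts a sparse outcome-based
# reward with a dense process-level reward for a multi-step reasoning trace.
# All function names and scores here are illustrative assumptions.

from typing import List

def outcome_reward(steps: List[str], final_answer: str, gold: str) -> List[float]:
    # Sparse signal: one scalar based only on the final output, shared by
    # every step, so correct intermediate steps earn no distinct credit.
    r = 1.0 if final_answer == gold else 0.0
    return [r] * len(steps)

def process_reward(step_scores: List[float]) -> List[float]:
    # Dense signal: a (hypothetical) process reward model scores each
    # intermediate step, so credit is assigned per step rather than globally.
    return step_scores

steps = ["parse the problem", "set up the equation", "solve it (arithmetic slip)"]
print(outcome_reward(steps, final_answer="41", gold="42"))  # [0.0, 0.0, 0.0]
print(process_reward([0.9, 0.8, 0.1]))                      # [0.9, 0.8, 0.1]
```

With the outcome reward, the two correct early steps are indistinguishable from the faulty final one; the per-step scores make it clear where the trace went wrong, which is the motivation behind process-level rewards.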

This AI Paper Explores Reinforced Learning and Process Reward Models: Advancing LLM Reasoning with Scalable Data and Test-Time Scaling

Scaling the size of large language models (LLMs) and their training data has now opened up emergent capabilities that allow these models to perform highly structured reasoning, logical deduction, and abstract thought. These are not incremental improvements over previous tools but mark real progress toward artificial general intelligence (AGI). Training LLMs to reason well…
