image: Potential framework for fully automated processor chip design, including three core components: a) A domain-specific LLM to comprehend the specification and generate a primary design; b) An automated repair mechanism based on functional verification to guarantee the design’s correctness; c) An automated search mechanism based on performance feedback to address the problem of enormous solution space.
Credit: ©Science China Press
Processor chips are the basic engines of the digital world, powering everything from smartphones and personal computers to cloud servers and Internet of Things (IoT) devices. As demand for computing continues to soar, chip design has become a critical bottleneck: it is slow, expensive, and heavily dependent on scarce human expertise.
In a new Perspective article in National Science Review, researchers from the State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, argue that incremental automation is no longer enough. They call for a fully automated processor chip design paradigm that can take high-level functional requirements and automatically deliver verified, high-performance hardware and software stacks.
Traditional electronic design automation (EDA) tools and recent AI methods have significantly accelerated individual design steps such as logic synthesis, placement, routing and design-space exploration. However, most current AI-driven approaches act as local optimizers inside a conventional flow. They improve efficiency at specific stages but do not fundamentally change how entire chips are conceived and built, and thus cannot keep pace with exploding demand and growing design complexity.
The authors identify three challenges that block the path toward fully automated processor design:
1) Specification comprehension: In real projects, requirements for a processor are usually written in informal, sometimes ambiguous natural language, while existing tools expect precise formal inputs such as C/C++ or hardware description languages (HDLs) like Verilog and VHDL. Bridging this gap still requires substantial manual work by experts.
2) Correctness guarantee: Processor chips must meet extremely strict correctness standards. For example, the functional verification of a modern CPU may target 99.99999999999% correctness or higher. Yet large language models (LLMs), which generate outputs probabilistically, cannot by themselves provide such deterministic guarantees.
3) Enormous solution space: A processor design spans foundational software, logic and circuit design, and physical implementation. Modeling this at the bitstream level leads to an astronomically large design space. For example, a 32-bit CPU possesses a solution space whose size is approximately 10^(10^540).
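One back-of-the-envelope way to see where a double-exponential figure of this kind can arise (our own illustration, not the authors' exact derivation): a circuit interface with n input bits can realize any of 2^(2^n) distinct Boolean functions per output bit, so counting candidate designs this way and solving 2^n · log10(2) = 10^540 in log space suggests an interface on the order of 1,800 input bits.

```python
from math import log10

# log10 of 2**(2**n): the number of distinct Boolean functions
# of n input bits (counting a single output bit).
def log10_num_boolean_functions(n_inputs: int) -> float:
    return (2 ** n_inputs) * log10(2)

# Even a toy 8-bit interface yields 2**256 possible functions:
print(log10_num_boolean_functions(8))   # ~77, i.e. about 10**77 candidates

# How many input bits would give ~10**(10**540) candidates?
# Solve 2**n * log10(2) = 10**540 entirely in log space to avoid overflow.
n_bits = (540 - log10(log10(2))) / log10(2)
print(round(n_bits))  # on the order of 1,800 input bits
```

The point of the sketch is only the growth rate: the count is double-exponential in the interface width, which is why exhaustive enumeration is hopeless and feedback-guided search (discussed below in the article) becomes necessary.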
To overcome these challenges, the Perspective proposes a three-part framework centered on a domain-specialized “Large Processor Chip Model”:
1) Domain-specific LLM for specification comprehension
The first component is a large language model trained specifically on processor design data. Its job is to read informal natural-language specifications, resolve ambiguities, and generate an initial formal design in HDLs or other suitable representations. Because high-quality training data for processor design is scarce, the authors highlight the role of LLM-based data synthesis and cross-verification to automatically build better corpora at scale—an approach already shown effective in recent reasoning-enhanced RTL design work.
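The cross-verification idea can be illustrated with a toy sketch (our own illustration, not the authors' pipeline): several independently generated candidate implementations of the same specification are run on random test vectors, and only candidates that agree with the majority answer on every test are kept as trusted data. All names and the example spec below are invented for illustration.

```python
import random
from collections import Counter

# Toy stand-ins for LLM-generated candidate implementations of one spec
# ("add two 8-bit values modulo 256"). One candidate is subtly wrong.
candidates = {
    "cand_a": lambda x, y: (x + y) % 256,
    "cand_b": lambda x, y: (x + y) & 0xFF,
    "cand_c": lambda x, y: (x + y) % 255,   # bug: wrong modulus
}

def cross_verify(cands, n_tests=200, seed=0):
    """Keep only candidates that agree with the majority answer
    on every random test vector."""
    rng = random.Random(seed)
    tests = [(rng.randrange(256), rng.randrange(256)) for _ in range(n_tests)]
    surviving = set(cands)
    for x, y in tests:
        answers = {name: fn(x, y) for name, fn in cands.items()}
        majority, _ = Counter(answers.values()).most_common(1)[0]
        surviving -= {name for name, out in answers.items() if out != majority}
    return surviving

print(cross_verify(candidates))  # cand_c is eliminated; cand_a and cand_b survive
```

No single candidate is trusted on its own; agreement across independently generated candidates is what promotes an example into the training corpus.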
2) Automated repair driven by functional verification
The second component addresses correctness. Instead of trusting a single model output, the framework integrates automatic verification tools to check intermediate designs, and uses their feedback to repair errors. When verification detects a functional bug, the system rolls back to a previously verified version and regenerates the faulty part based on error signals, iterating until the design passes all checks. This idea has already been validated in the fully automated CPU “Enlightenment-1” (QiMeng-CPU-v1), whose logic is represented with a novel graph structure called the Binary Speculation Diagram (BSD). Using Boolean distance as a verification metric and BSD expansion for repair, QiMeng-CPU-v1 reportedly reaches over 99.99999999999% functional accuracy and can successfully boot Linux, providing a concrete proof-of-concept for correctness-aware automation.
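The control loop described above (verify, roll back on failure, regenerate the faulty part from the error signal, iterate until all checks pass) can be sketched in miniature. This sketch does not reproduce BSDs or Boolean distance; the spec, the bug injection, and the single-entry "repair" are invented stand-ins for the model-driven regeneration step.

```python
# Toy spec for a 4-bit adder, given as a complete truth table.
SPEC = {(a, b): (a + b) % 16 for a in range(16) for b in range(16)}

def verify(design):
    """Return the first failing input, or None if every check passes."""
    for inputs, expected in SPEC.items():
        if design.get(inputs) != expected:
            return inputs
    return None

def regenerate(design, failing_input):
    """Stand-in for model-driven repair: the verifier's error signal says
    *where* the design failed; a real system would regenerate that whole
    faulty region, here we simply patch the flagged entry."""
    a, b = failing_input
    repaired = dict(design)
    repaired[failing_input] = (a + b) % 16
    return repaired

def repair_loop(initial_design, max_iters=1000):
    checkpoint = dict(initial_design)   # last accepted version
    for _ in range(max_iters):
        bug = verify(checkpoint)
        if bug is None:
            return checkpoint           # design passes all checks
        checkpoint = regenerate(checkpoint, bug)
    raise RuntimeError("repair did not converge")

# A 'generated' design with three injected functional bugs.
buggy = dict(SPEC)
for key in [(2, 9), (3, 5), (15, 15)]:
    buggy[key] = 0
fixed = repair_loop(buggy)
print(verify(fixed))  # None: the repaired design passes every check
```

The essential property is that verification output is not just pass/fail: the localized error signal steers each regeneration, so the loop converges instead of resampling blindly.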
3) Performance-feedback-driven search in an enormous solution space
The third component tackles performance optimization under a huge design space. The authors suggest organizing candidate designs as a hierarchical search tree. Performance predictions or real measurements are fed back at intermediate nodes, allowing the system to prune poor branches and focus exploration on promising regions. Similar search-with-feedback ideas have already been applied to automated foundational software design. Systems such as QiMeng-TensorOp and QiMeng-Xpiler use Monte Carlo Tree Search (MCTS) guided by real execution time to automatically generate high-performance tensor operators and to transcompile tensor programs across platforms. Extending such performance-aware search frameworks from software into full processor design is expected to dramatically reduce the effective solution space while still discovering highly optimized designs.
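A simplified sketch of pruned, feedback-guided tree search (a beam search for brevity, not QiMeng's actual MCTS): candidate designs form a tree of staged choices, a cost signal at each partial design stands in for predicted or measured performance, and weak branches are pruned before expansion. The stages, options, and cost formula below are invented for illustration.

```python
# Toy hierarchical design space: three staged choices, a few options each.
STAGES = {
    "pipeline_depth": [3, 5, 7],
    "cache_kb":       [16, 32, 64],
    "issue_width":    [1, 2, 4],
}

def measured_cost(partial):
    """Stand-in for performance feedback (a predictor or real measurement).
    Lower is better; the formula is purely illustrative."""
    d = partial.get("pipeline_depth", 5)
    c = partial.get("cache_kb", 32)
    w = partial.get("issue_width", 2)
    return abs(d - 5) * 2.0 + 64.0 / c + 4.0 / w

def beam_search(stages, beam_width=2):
    """Expand the design tree stage by stage, keeping only the
    `beam_width` partial designs with the best feedback."""
    frontier = [{}]
    for name, options in stages.items():
        children = [dict(p, **{name: o}) for p in frontier for o in options]
        children.sort(key=measured_cost)
        frontier = children[:beam_width]       # prune poor branches early
    return frontier[0]

best = beam_search(STAGES)
print(best)  # {'pipeline_depth': 5, 'cache_kb': 64, 'issue_width': 4}
```

Pruning at intermediate nodes is what shrinks the effective search: here only 2 of the partial designs survive each stage, so the search visits a small fraction of the 27 full configurations while still reaching the optimum of this toy cost function.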
Importantly, the authors emphasize that this fully automated framework is not meant to replace the existing EDA ecosystem. Instead, AI models can orchestrate and call mature tools—such as logic optimizers, floorplanning and placement engines, and formal verification suites—by generating appropriate scripts and constraints. In this way, a “large processor chip model” acts as an intelligent conductor sitting on top of today’s design tools, coordinating them to deliver end-to-end automated solutions.
The State Key Laboratory of Processors, housed at the Institute of Computing Technology of the Chinese Academy of Sciences (ICT, CAS), is one of the first national key laboratories formally approved for construction by CAS. Academician Ninghui Sun chairs its Academic Committee, and Prof. Yunji Chen serves as its director. In recent years, the laboratory has achieved a series of landmark accomplishments. It received the first-ever National Natural Science Award in the field of processor chips, along with six national-level science and technology awards. The laboratory consistently ranks first in China in terms of publications at leading international conferences in processor architecture and chip design. Internationally, it has pioneered several influential research directions, including deep learning processors, which have since become major global research hotspots. The laboratory has also played a pivotal role in fostering China's domestic processor ecosystem: it has directly or indirectly incubated several leading Chinese processor companies with a combined market valuation of hundreds of billions of RMB, significantly advancing the country's strategic capabilities in high-performance and AI-oriented chip technologies.