RISC-V basics: The truth about custom extensions

The era of universal processor architectures is giving way to workload-specific designs optimized for performance, power, and scalability. As data-centric applications in artificial intelligence (AI), edge computing, automotive, and industrial markets continue to expand, they are driving a fundamental shift in processor design.

Arguably, chipmakers can no longer rely on generalized architectures to meet the demands of these specialized markets. Open ecosystems like RISC-V empower silicon developers to craft custom solutions that deliver both innovation and design efficiency, unlocking new opportunities across diverse applications.

RISC-V, an open-source instruction set architecture (ISA), is rapidly gaining momentum for its extensibility and royalty-free licensing. According to Rich Wawrzyniak, principal analyst at The SHD Group, “RISC-V SoC shipments are projected to grow at nearly 47% CAGR, capturing close to 35% of the global market by 2030.” This growth highlights why SoC designers are increasingly embracing architectures that offer greater flexibility and specialization.

RISC-V ISA customization trade-offs

The open nature of the RISC-V ISA has sparked widespread interest across the semiconductor industry, especially for its promise of customization. Unlike fixed-function ISAs, RISC-V enables designers to tailor processors to specific workloads. For companies building domain-specific chips for AI, automotive, or edge computing, this level of control can unlock significant competitive advantages in optimizing performance, power efficiency, and silicon area.

But customization is not a free lunch.

Adding custom extensions means taking ownership of both hardware design and the corresponding software toolchain. This includes compiler and simulation support, debug infrastructure, and potentially even operating system integration. While RISC-V’s modular structure makes customization easier than legacy ISAs, it still demands architectural consideration and robust development and verification workflows to ensure consistency and correctness.
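
As a rough illustration of that toolchain burden, consider a hypothetical single-cycle instruction placed in RISC-V's custom-0 opcode space. Until the assembler and compiler are taught the new mnemonic, software has to hand-encode it, for example through the generic .insn directive supported by the GNU toolchain. The instruction name and encoding below are invented for this sketch:

    #include <stdint.h>

    /* Hypothetical "multiply-accumulate step" instruction in the custom-0
     * opcode space (major opcode 0x0b). Hand-encoding via .insn is how code
     * reaches a custom instruction before compiler and intrinsic support
     * exist -- one reason adding an extension means owning the toolchain,
     * not just the RTL. */
    static inline int32_t macc_step(int32_t acc, int32_t x)
    {
        int32_t rd;
        __asm__ volatile (
            /* .insn r <opcode>, <funct3>, <funct7>, rd, rs1, rs2 */
            ".insn r 0x0b, 0x0, 0x00, %0, %1, %2"
            : "=r"(rd)
            : "r"(acc), "r"(x));
        return rd;
    }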

In many cases, customization involves additional considerations. When general-purpose processing and compatibility with existing software libraries, security frameworks, and third-party ecosystems are paramount, excessive or non-standard extensions can introduce fragmentation. Design teams can mitigate this risk by aligning with RISC-V’s ratified extensions and profiles, for instance RVA23, and then applying targeted customizations where appropriate.
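
One common way to apply that discipline is to keep a portable baseline that any profile-compliant core can execute and to gate the custom path behind a build-time switch, so the extension speeds up the team's own silicon without fragmenting the wider software ecosystem. The sketch below assumes placeholder names; HAVE_VENDOR_SAXPY and vendor_saxpy are illustrative, not a real vendor API:

    #include <stddef.h>

    /* Portable baseline in plain C; it runs on any RVA23-class core. */
    static void saxpy_ref(size_t n, float a, const float *x, float *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    void saxpy(size_t n, float a, const float *x, float *y)
    {
    #ifdef HAVE_VENDOR_SAXPY
        /* Placeholder intrinsic assumed to be emitted by a regenerated
         * compiler; only builds targeting the custom core take this path. */
        extern void vendor_saxpy(size_t n, float a, const float *x, float *y);
        vendor_saxpy(n, a, x, y);
    #else
        saxpy_ref(n, a, x, y);   /* standard path keeps binaries portable */
    #endif
    }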

When applied strategically, RISC-V customization becomes a powerful lever that yields substantial ROI by rewarding thoughtful architecture, disciplined engineering, and clear product objectives. Some companies devote full design and software teams to developing strategic extensions, while others leverage automated toolchains and hardware-software co-design methodologies to mitigate risks, accelerate time to market, and capture most of the benefits.

For teams that can navigate the trade-offs well, RISC-V customization opens the door to processors truly optimized for their workloads and to massive product differentiation.

Real-world use cases

Customized RISC-V cores are already deployed across the industry. For example, Frans Sijstermans, Nvidia’s VP of Multimedia Arch/ASIC, described how the company replaced its internal Falcon MCU with customized RISC-V hardware and software developed in-house, now deployed across a variety of applications.

One notable customization is support for 2-KB pages in addition to the standard 4-KB pages, which yielded a 50% performance improvement for legacy code. Page-size changes like this are a clear example of modifications whose impact reaches from the processor hardware all the way into operating-system memory management.
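
A generic sketch (not Nvidia’s code) shows why: nearly every memory-management constant in an operating system derives from the page-offset width, so an extra 2-KB page size has to be reflected both in the MMU/TLB hardware and in the kernel code built on macros like these:

    #include <stdint.h>

    #define PAGE_SHIFT  11u                    /* 2-KB pages; 12 for 4 KB */
    #define PAGE_SIZE   (1ul << PAGE_SHIFT)    /* 2048 bytes              */
    #define PAGE_MASK   (~(PAGE_SIZE - 1))

    /* Splitting a virtual address into page base and offset -- both halves
     * change when the page size does. */
    static inline uintptr_t page_base(uintptr_t va)   { return va &  PAGE_MASK; }
    static inline uintptr_t page_offset(uintptr_t va) { return va & ~PAGE_MASK; }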

Figure 1 Nvidia’s RISC-V cores and extensions, as presented in the keynote “RISC-V at Nvidia: One Architecture, Dozens of Applications, Billions of Processors.”

Another commercial example is Meta’s MTIA accelerator, which extends a RISC-V core with application-specific instructions, custom interfaces, and specialized register files. While Meta has not published the full toolchain flow, the scope of integration implies an internally managed co-design methodology with tightly coupled hardware and software development.

Given the complexity of the modifications, the design likely leveraged automated flows capable of regenerating RTL, compiler backends, simulators, and intrinsics to maintain toolchain consistency. This reflects a broader trend of engineering teams adopting user-driven, in-house customization workflows that support rapid iteration and domain-specific optimization.

Figure 2 Meta’s MTIA accelerator integrates Andes RISC-V cores for optimized AI performance. Source: MTIA: First Generation Silicon Targeting Meta’s Recommendation Systems, A. Firoozshahian, et al.

The startup Rain.ai illustrates that even small teams can benefit from RISC-V customization via automated flows. Its process begins with input files that define operands, vector register inputs and outputs, vector unit behavior, and a C-language semantic description. The resulting instructions are pipelined, multi-cycle, and designed to align with the stylistic and semantic properties of the standard vector extensions.
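
The exact input format is Rain.ai’s own, but the idea of a C-language semantic description can be sketched as follows: the behavior of a custom element-wise instruction is written once as plain C, and the generation flow derives the simulator model and compiler intrinsic from that single description. The function name and the SiLU example here are illustrative assumptions:

    #include <math.h>
    #include <stddef.h>

    /* Illustrative semantic description of a custom vector SiLU instruction:
     * vd[i] = vs1[i] * sigmoid(vs1[i]) for each of the vl active elements. */
    void vsilu_semantics(float *vd, const float *vs1, size_t vl)
    {
        for (size_t i = 0; i < vl; i++) {
            float s = 1.0f / (1.0f + expf(-vs1[i]));  /* sigmoid(x)          */
            vd[i]   = vs1[i] * s;                     /* SiLU = x*sigmoid(x) */
        }
    }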

The input files are extended with a minimal hardware implementation and processed through a flow that generates updated core RTL, simulation models, compiler support, and intrinsic functions. This enables developers to quickly update kernels, compile and run them on simulation models, and gather feedback on performance, utilization, and cycle count.

By lowering the barrier to custom instruction development, this process supports a hardware-software co-design methodology, making it easier to explore and refine different usage models. Rain.ai used this approach to integrate its matrix-multiply, sigmoid, and SiLU acceleration into the hardware and software flows, yielding an 80% reduction in power and a 7x–10x increase in throughput compared with the standard vector processing unit.
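
On the software side, once the flow has regenerated the toolchain, a kernel can call the new instruction through a generated intrinsic instead of hand-written encodings. The __rain_vsilu name and its stripmining signature below are assumptions for illustration, not Rain.ai’s published API:

    #include <stddef.h>

    /* Assumed intrinsic emitted by the generation flow: processes up to one
     * vector register group per call and returns the element count handled. */
    extern size_t __rain_vsilu(float *dst, const float *src, size_t n);

    void silu_kernel(float *dst, const float *src, size_t n)
    {
        while (n > 0) {
            size_t done = __rain_vsilu(dst, src, n);  /* custom instruction */
            dst += done;
            src += done;
            n   -= done;
        }
    }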

Figure 3 Here is an example of a hardware/software co‑design flow for developing and optimizing custom instructions. Source: Andes Technology

Tools supporting RISC-V customization

To support these holistic workflows, automation tools are emerging to streamline customization and integration. For example, Andes Technology provides silicon-proven IP and a comprehensive suite of design tools to accelerate development.

Figure 4 ACE and CoPilot simplify the development and integration of custom instructions. Source: Andes Technology

The Andes Custom Extension (ACE) framework and the CoPilot toolchain offer a streamlined path to RISC-V customization. ACE enables developers to define custom instructions optimized for specific workloads, supporting advanced features such as pipelining, background execution, custom registers, and memory structures.

CoPilot automates the integration process by regenerating the entire hardware and software stack, including RTL, compiler, debugger, and simulator, based on the defined extensions. This reduces manual effort, ensures alignment between hardware and software, and accelerates development cycles, making custom RISC-V design practical for a broad range of teams and applications.

RISC-V’s open ISA broke down barriers to processor innovation, enabling developers to move beyond the constraints of proprietary architectures. Today, advanced frameworks and automation tools empower even lean teams to take advantage of hardware-software co-design with RISC-V.

For design teams that approach customization with discipline, RISC-V offers a rare opportunity: to shape processors around the needs of the application, not the other way around. The companies that succeed in mastering this co-design approach won’t just keep pace, they’ll define the next era of processor innovation.

Marc Evans, director of Business Development & Marketing at Andes Technology, brings deep expertise in IP, SoC architecture, CPU/DSP design, and the RISC-V ecosystem. His career spans hands-on processor and memory system architecture to strategic leadership roles driving the adoption of new IP for emerging applications at leading semiconductor companies.
