LLVM 16 Enables Scalable Vectorization By Default For RISC-V


With LLVM 15 branched and main now open for LLVM 16, one of the first changes for this next compiler release cycle is enabling scalable vectorization by default for RISC-V targets that support the RISC-V "V" vector instructions.

LLVM developer Philip Reames enabled scalable vectorization by default for supported RISC-V targets with the Zve or V extensions. He explained with the change:

This change enables vectorization (using only scalable vectorization; fixed vectors are not yet enabled) for RISCV when vector instructions are available for the target configuration.

At this point, the resulting configuration should be both stable (e.g. no crashes) and profitable (i.e. few cases where scalar loops beat vector loops), but won't be particularly well tuned (i.e. we don't always emit the best possible vector loop). The goal of this change is to align testing across organizations and ensure that the default configuration matches as closely as possible what downstream users are using.

This exposes a large amount of code that was not enabled by default and therefore may not have been fully exercised. Given this, having bugs fall out is not unexpected. If you encounter any issues, please include as much information as possible when reverting this change.

Two days have passed with no negative feedback so far, so hopefully it's going well. More details for those interested via reviews.llvm.org. Since this change comes early in the LLVM 16 development cycle, there is still plenty of time to improve the compiler's RISC-V vectorization support before the stable release, which isn't due until around March.

RISC-V "V" is the full vector extension for this royalty-free processor architecture, while Zve is a "modest" subset of it intended for small cores in embedded devices and microcontrollers. The RISC-V Vector Extension 1.0 has been frozen since last year and is considered stable enough to begin software work against. The RISC-V V 1.0 specification for those interested can be found via GitHub.

