RISC-V Architectural Certification Test Developer's Guide: Developing Certification Test Plan, Coverpoints, and Tests
All extensions require a Certification Test Plan (CTP), coverpoints, and tests. The process of developing the CTP, coverpoints, and tests for a new suite differs for table-driven unprivileged tests and for spreadsheet-driven privileged tests. Each is described below.
- Certification Test Plan
- Test Hierarchy
- Test YAML Header
- Table-Driven Unprivileged Coverpoints and Tests
- Spreadsheet-Driven Privileged Tests
- Debugging Coverage
- Adding a New Simulator or DUT Config
Each test suite needs a section in the CTP describing the coverpoints, the mapping of normative rules to coverpoints, and any UDB parameters that affect the suite.
For unprivileged suites, only non-standard coverpoints need to be defined. See the CTP section "C Compressed Extension" for examples. For privileged suites, there are no standard coverpoints and instead the testplan links to a Google Sheet. See the CTP section "Sm Machine-Mode CSRs and Instructions" for an example.
Both privileged and unprivileged suites need a mapping between the normative
rules and coverpoints. This mapping is a YAML file in coverpoints/norm
containing a list of rule names and the coverpoints that exercise them. There
should be one YAML for each test suite.
Instead of typing this YAML from scratch, it is easier to make an outline
from the normative rules already in the riscv-isa-manual repo. Make sure
you have a current copy of riscv-isa-manual and have run make successfully in that repo
to build the normative_rule_defs subdirectory and build/norm-rules.json.
Then invoke generators/ctp/generate_norm_rule_coverpoint_templates.py to
create one YAML file per ISA manual chapter in coverpoints/norm/yaml/chapters.
(You may need to edit riscv_isa_manual_dir in the Python file to point to
its location in your tree.) Then copy the YAML from the chapter related to the
test suite up two levels (e.g. `cp coverpoints/norm/yaml/chapters/machine.yaml coverpoints/norm/Sm.yaml`) and edit it.
When you run make in the ctp directory, the YAML file is parsed to build an
AsciiDoc file (in ctp/norm) with a table of normative rule names, definitions, and associated coverpoints. Include this file in the CTP with:

```
include::norm/Sm_norm_rules.adoc[]
```
Similar YAML files in coverpoints/param are used to make a list of the UDB parameters that apply to a test suite. Each parameter has a name (corresponding to the UDB), coverpoint (one or more coverpoints that it affects), and effect (string summarizing what it does, such as coverpoint only applying to certain parameter values, or result depending on the parameter value).
The generate_param_table.py script turns these into .adoc files in ctp/src/param listing the parameter name, description (from UDB), coverpoints it applies to, and effect on the coverpoints. If there is a yaml for normative rules but not for parameters, the parameter adoc just indicates no parameters. The script also makes a summary.adoc table listing all of the UDB parameters used anywhere in the test plan, and UDB parameters not yet mentioned in the test plan.
This script is also run automatically when making the CTP. Hence, all the developer must do is create YAML files in coverpoints/param for test suites with parameters.
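Based on the fields described above (name, coverpoint, effect), a single entry in such a file might look like the following sketch. The parameter name, coverpoint name, and exact schema here are illustrative only; check existing files in coverpoints/param for the authoritative format.

```yaml
# coverpoints/param/<suite>.yaml -- illustrative sketch only
- name: NUM_PMP_ENTRIES          # UDB parameter name
  coverpoint: cp_pmp_regions     # coverpoint(s) the parameter affects (hypothetical name)
  effect: "Coverpoint only applies when NUM_PMP_ENTRIES > 0"
```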
The testgen package organizes generated tests into four levels:
test suite
└── test file
└── test chunk
└── testcase
- Testcase: The smallest unit of testing. Each testcase checks a single bin of a coverpoint. In the generated assembly, a testcase corresponds to one call to `test_data.add_testcase()`, which creates a label and debug string for that specific bin. For example, testing that `add` writes to `x5` is one testcase of the `cp_rd` coverpoint.
- Test chunk (`TestChunk`): An unsplittable group of one or more testcases. A test chunk is the building block of test files. Test chunks are never split across multiple files. Standard coverpoint generators (e.g., `cp_rd`, `cp_imm_edges`) produce one chunk per testcase via `format_single_testcase()`. Special coverpoint generators and privileged tests bundle multiple testcases into a single chunk using `test_data.begin_test_chunk()`/`test_data.end_test_chunk()`, typically because the testcases share setup code.
- Test file: A complete `.S` assembly file that is compiled into a self-checking ELF. Each test file contains one or more test chunks. When an instruction has many testcases (e.g., hundreds of register/immediate combinations), the framework splits the chunks across multiple test files using `TESTCASES_PER_FILE` as the limit. Test files are named like `I-add-00.S`, `I-add-01.S`, etc., where the suffix indicates the file index.
- Test suite: All test files in a given directory. Each test suite corresponds to one extension or combination of extensions (e.g., `I`, `Zcb`, `ZcbZbb`, `ExceptionsSm`) and maps to a single coverage file. Unprivileged test suites contain one or more test files per instruction. For privileged tests, a test suite typically contains a single test file covering all coverpoints for that feature.
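To make the chunk-to-file mapping concrete, here is a minimal sketch (not the framework's actual code) of how unsplittable chunks could be packed into files under a per-file testcase limit, assuming each chunk reports its own testcase count:

```python
# Illustrative sketch only: pack unsplittable chunks into files so that no
# file exceeds a testcase limit (chunks themselves are never split).
def pack_chunks(chunk_sizes: list[int], testcases_per_file: int) -> list[list[int]]:
    files: list[list[int]] = [[]]
    count = 0
    for size in chunk_sizes:
        # Start a new file if adding this chunk would exceed the limit
        # (a chunk larger than the limit still gets a file to itself).
        if files[-1] and count + size > testcases_per_file:
            files.append([])
            count = 0
        files[-1].append(size)
        count += size
    return files

# 5 chunks of varying size, at most 4 testcases per file
print(pack_chunks([2, 2, 1, 3, 4], 4))  # → [[2, 2], [1, 3], [4]]
```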
Every assembly test file (.S) must include a YAML configuration header that
describes the test's requirements. The framework uses this header to determine
which tests to select and how to compile them for a given DUT configuration.
The header is embedded in assembly comments between two marker lines:
```
##### START_TEST_CONFIG #####
# REQUIRED_EXTENSIONS: ['I', 'Zba']
# params:
#   MXLEN: 32
# MARCH: rv32i_zba
##### END_TEST_CONFIG #####
```

The framework strips the leading `#` comment characters from each line and parses the remaining content as YAML. The header must appear before any assembly code in the file.
The following top-level keys are recognized. No other keys are permitted (the parser uses strict validation and will reject unknown keys).
| Key | Type | Required | Description |
|---|---|---|---|
| `REQUIRED_EXTENSIONS` | list of strings | Yes | RISC-V extensions required by this test. The test is only selected for a DUT whose implemented extensions list contains all of these extensions. |
| `MARCH` | string | Yes | The `-march` string passed to the compiler. Must match the pattern `rv(32\|64\|${XLEN})(i\|e\|g)...` (e.g., `rv32i_zba`, `rv64ifd_zfh`). |
| `params` | mapping | No | A dictionary of parameter constraints that must match the DUT's UDB configuration for the test to be selected. |
A YAML list of extension name strings. Both quoted and unquoted styles are accepted:

```
# Quoted style (common in generated tests)
REQUIRED_EXTENSIONS: ['I', 'Zba']

# Unquoted style (common in hand-written tests)
REQUIRED_EXTENSIONS: [I, S, Zicsr, Sm]
```

During test selection, the framework checks that every extension in this list is present in the DUT's implemented extensions (derived from the UDB configuration). A test is skipped if any required extension is missing.
The compiler march string determines the available extensions during compilation.
It will usually contain the same list of extensions as REQUIRED_EXTENSIONS, but
certain privileged extensions are omitted (the compiler does not accept them).
The REQUIRED_EXTENSIONS list and march string may also differ for tests that
conditionally include extra testcases depending on the DUT configuration.
It follows the standard RISC-V ISA string naming convention:

- Single-letter extensions are concatenated without separators: `rv32imafd`
- Multi-letter extensions are separated by underscores: `rv64i_zba_zbb`
- Privilege-mode extensions (`Sm`, `S`, `U`) are omitted from the march string
For privileged tests that need to support both RV32 and RV64, use the
${XLEN} placeholder:
```
MARCH: rv${XLEN}i_zicsr
```

The framework substitutes the actual XLEN value (32 or 64) at compile time based on the DUT configuration.
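The substitution itself is a simple text replacement; a sketch (not the framework's actual code):

```python
# Illustrative sketch: substitute the DUT's XLEN into the MARCH template.
def resolve_march(march: str, xlen: int) -> str:
    return march.replace("${XLEN}", str(xlen))

print(resolve_march("rv${XLEN}i_zicsr", 32))  # → rv32i_zicsr
print(resolve_march("rv${XLEN}i_zicsr", 64))  # → rv64i_zicsr
```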
An optional mapping of parameter names to required values. Each parameter must exist in the DUT's UDB configuration and match the specified value for the test to be selected.
```
params:
  MXLEN: 32
```

Parameters support both exact matching and comparison operators. Comparison operators are specified as string-prefixed values:
| Operator | Example | Meaning |
|---|---|---|
| (none) | `MXLEN: 32` | Exact equality (equivalent to `==`) |
| `==` | `MXLEN: '==64'` | Exact equality |
| `>` | `NUM_PMP_ENTRIES: '>0'` | Greater than |
| `>=` | `VLEN: '>=64'` | Greater than or equal |
| `<` | `VLEN: '<256'` | Less than |
| `<=` | `PMP_GRANULARITY: '<=4'` | Less than or equal |
| `!=` | `PMP_GRANULARITY: '!=0'` | Not equal |
Comparison operator values support both decimal and hexadecimal (e.g.,
'>=0x80', '<0xFF'). Comparison values must be quoted in YAML since they
start with special characters.
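A sketch of how such a constraint value could be evaluated against the DUT's actual parameter value (illustrative only, not the framework's parser):

```python
# Illustrative sketch: evaluate a params constraint against the DUT's actual
# parameter value. Plain (non-string) values mean exact equality; string
# values may start with a comparison operator and use decimal or hex numbers.
import operator

_OPS = {"==": operator.eq, "!=": operator.ne,
        ">=": operator.ge, "<=": operator.le,
        ">": operator.gt, "<": operator.lt}

def param_matches(constraint, actual: int) -> bool:
    if not isinstance(constraint, str):
        return actual == constraint                 # e.g. MXLEN: 32
    for op in ("==", "!=", ">=", "<=", ">", "<"):   # try two-char operators first
        if constraint.startswith(op):
            return _OPS[op](actual, int(constraint[len(op):], 0))  # base 0 handles 0x...
    return actual == int(constraint, 0)

print(param_matches(32, 32))         # → True
print(param_matches(">0", 8))        # → True
print(param_matches("<=4", 8))       # → False
print(param_matches(">=0x80", 128))  # → True
```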
Minimal header (unprivileged test, single extension, fixed XLEN):
```
##### START_TEST_CONFIG #####
# REQUIRED_EXTENSIONS: ['I']
# params:
#   MXLEN: 32
# MARCH: rv32i
##### END_TEST_CONFIG #####
```

Privileged test (multi-XLEN, no params):
```
##### START_TEST_CONFIG #####
# REQUIRED_EXTENSIONS: [I, S, Zicsr, Sm]
# MARCH: rv${XLEN}i_zicsr
##### END_TEST_CONFIG #####
```

Note that `MARCH` does not include `S` or `Sm` because the compiler does not need those extensions.
Test with parameter constraints (PMP requirements):
```
##### START_TEST_CONFIG #####
# REQUIRED_EXTENSIONS: ['I', 'Zca', 'Sm']
# params:
#   MXLEN: 32
#   NUM_PMP_ENTRIES: '>0'
#   PMP_GRANULARITY: '<=2'
# MARCH: rv32i_zca_zicsr
##### END_TEST_CONFIG #####
```

This header would correspond to a PMP test that uses NA4 mode. NA4 does not exist if `PMP_GRANULARITY` is greater than 2, and PMP in general does not exist if `NUM_PMP_ENTRIES` is 0, so both of these param constraints are needed to make sure the test can run on the DUT.
Unprivileged tests are tests that exercise individual instructions and do not trap. Unprivileged tests always require a CSV testplan and updates to the instruction decoder. They may also require new coverpoint generators and/or instruction formatters.
Unprivileged tests do not set up a trap handler (because they must run without any machine mode features, including mtvec). Therefore, they will enter an infinite loop if they trap. Halt the simulation and look at the log file to find the root cause. Tests that might trap must be written in the privileged style described later in this document.
Unprivileged test generation is driven by a CSV testplan that specifies all instructions
in the extension along with the coverpoints that apply to each instruction. Each extension
should have a testplan named <extension_name>.csv in the testplans directory.
All testplan CSVs must include the following keys:
- `Instruction`: The instruction mnemonic. For example, `add`, `mul`, `fadd.d`, etc.
- `Type`: The instruction type. Note that these types are more specific than the ISA manual types and take the kind of register, size of immediate, etc. into account. For example, `R`, `I`, `IS`, `ISW`. TODO: Document the list of instruction types?
- `RV32`/`RV64`: Which XLENs the instruction exists for. Place an `x` in the relevant columns.
- coverpoints: Which coverpoints apply to the instruction. Place an `x` in the column corresponding to the relevant coverpoints in each instruction's row.
  - Some coverpoints have multiple variants. To indicate that a variant of the coverpoint should be used for a particular instruction, use the variant's suffix in the CSV instead of an `x`. See the `20bit` variant of the `cp_imm_edges` coverpoint for the `auipc` instruction below.
An example of a few instructions from the I extension is included below:
```
Instruction,Type,RV32,RV64,cp_asm_count,cp_rs1,cp_rs2,cp_rd,cp_rs1_edges,cp_rs2_edges,cr_rs1_imm_edges,cr_rs1_rs2_edges,cmp_rs1_rs2,cmp_rd_rs1,cmp_rd_rs2,cmp_rd_rs1_rs2,cp_offset,cp_uimm,cp_imm_edges,cp_align,cp_memval,cp_custom
add,R,x,x,x,x,x,x,x,x,,x,x,x,x,x,,,,,,
addi,I,x,x,x,x,,x,x,,x,,,x,,,,,x,,,
auipc,U,x,x,x,,,x,,,,,,,,,,,20bit,,,
...
```

See I.csv for a complete example.
Most new extension testplans will be able to reuse existing coverpoints and instruction formats. If any new coverpoints, coverpoint variants, or instruction formats are added, make sure to follow adding new coverpoints or adding new instruction formats respectively.
Unprivileged instructions are decoded in disassemble.svh.
All new instructions need to be added to the case statement.
disassemble.svh translates the encoding
into an instruction mnemonic and instruction arguments. The encodings themselves come
from the auto-generated RISCV_imported_decode_pkg.svh header.
This header is generated using riscv-opcodes
and should not be manually modified.
Adding a new coverpoint requires adding a template for the coverpoint itself along with a Python generator to generate tests for that coverpoint.
All coverpoints (and coverpoint variants) need a template file in
generators/coverage/src/covergroupgen/templates.
These templates should be named <coverpoint_name>.sv or
<coverpoint_name>_<variant>.sv.
The coverpoint templates are directly included in a larger covergroup,
so they must contain a complete and valid SystemVerilog coverpoint.
See the generators/coverage/src/covergroupgen/templates
directory for example coverpoints. A few hints are included below:
- All data about the instruction is accessed using the `ins` object.
- There are many pre-built functions to make writing coverpoints for RISC-V easier. Be sure to look through some of the example coverpoints before implementing any complex logic from scratch. TODO: Add documentation of the riscvISACOV functions/enums/etc.
- If no `bins` are specified for a coverpoint, bins will automatically be created for all possible states of the sampled signal.
- All unprivileged coverpoints should have an `iff (ins.trap == 0)` check to ensure they are only satisfied when the hart is not trapping.
Each coverpoint needs a Python generator that produces an assembly language test that exercises the relevant behaviors.
The following applies to all coverpoint test generators:
- All coverpoint test generators must go in `generators/testgen/src/testgen/coverpoints`. All Python files in that directory are automatically discovered and imported.
- All coverpoint generator functions must be decorated with the `@add_coverpoint_generator("<coverpoint_name>")` decorator. This tells the framework which coverpoints to use this generator for. Multiple comma-separated coverpoints can be specified if necessary.
- All coverpoint generator functions must use the following signature:

  ```python
  def make_cp_name(instr_name: str, instr_type: str, coverpoint: str, test_data: TestData) -> list[TestChunk]:
  ```

  - `instr_name` is the instruction currently being tested. This allows coverpoint test generators to be reused for multiple instructions.
  - `instr_type` is the type of the instruction currently being tested. This allows the correct instruction formatter (see below) to be selected.
  - `coverpoint` is the full name of the coverpoint, including any variant suffix. Coverpoint test generators can match multiple variants of a coverpoint. This argument allows different values, registers, etc. to be selected based on the variant.
  - `test_data` is the generation context that is passed to all parts of the test generation process and manages register allocation, test counting, and the active `TestChunk`.
- The generator must return a list of `TestChunk` objects. Each `TestChunk` is an unsplittable group of one or more testcases. It holds its own assembly code, data values, debug strings, and signature update count. The framework uses these to split test chunks across test files and combine their data for the final output.
Coverpoint test generators can largely be broken into two categories: standard and special.
Standard generators use the instruction formatters and can be applied to a wide range
of instructions. Examples include cp_rs1, cp_imm_edges, and cr_rs1_rs2_edges.
Special generators include all of the test code inline and are used for coverpoints
that apply to only a small set of instructions. Examples include cp_custom_fence
and cp_align.
Standard coverpoint generators are used for many instructions and make up the majority
of the coverpoint generators. A good example to get familiar with the structure of a
coverpoint generator is the cp_rd generator in
cp_regs.py.
It is also included below with many additional comments added to explain how it works.
```python
# All coverpoint generators use the add_coverpoint_generator decorator to specify
# which coverpoints they apply to.
@add_coverpoint_generator("cp_rd")
# Coverpoint generators all use the standard signature described above.
def make_rd(instr_name: str, instr_type: str, coverpoint: str, test_data: TestData) -> list[TestChunk]:
    """Generate tests for destination register coverpoints."""
    # Determine which rd registers to test based on the coverpoint variant.
    # Multiple variants can match to the same generator. This is useful when
    # the difference between variants is minor (e.g. just the register values).
    if coverpoint == "cp_rd":
        rd_regs = list(range(test_data.int_regs.reg_count))
    elif coverpoint.endswith("_nx0"):
        rd_regs = list(range(1, test_data.int_regs.reg_count))  # Exclude x0
    elif coverpoint.endswith("rd_p"):
        rd_regs = list(range(8, 16))  # x8-x15 for compressed instructions
    else:
        # Raise an error if an unexpected variant was matched to this coverpoint
        # to make debugging easy.
        raise ValueError(f"Unknown cp_rd coverpoint variant: {coverpoint} for {instr_name}")

    # Initialize a list of TestChunk objects to collect results
    test_chunks: list[TestChunk] = []

    # Generate tests
    # A common pattern is to use a loop to iterate over some value that is being tested
    # in a particular coverpoint. This could be register numbers, register values,
    # immediate values, etc.
    for rd in rd_regs:
        # Any registers that are explicitly used must be marked as used using the
        # test_data.int_regs.consume_registers function. This will automatically move
        # any reserved registers to ensure the desired register is free.
        asm_setup = test_data.int_regs.consume_registers([rd])

        # The generate_random_params function will populate any instruction parameters
        # used by the provided instruction type that are not explicitly specified with
        # random (legal) values. In this case, only rd is specified, so rs1, rs2, imm, etc.
        # will get random values.
        params = generate_random_params(test_data, instr_type, rd=rd)

        desc = f"{coverpoint} (Test destination rd = x{rd})"

        # format_single_testcase is the key part of standard coverpoint generators. It takes
        # the provided instruction parameters (created above) and produces a TestChunk object
        # containing the assembly code and associated data. It also calls test_data.add_testcase
        # to add a label and debugging string.
        tc = format_single_testcase(instr_name, instr_type, test_data, params, desc, f"b{rd}", coverpoint)

        # If consume_registers returned setup code (register moves), prepend it to the TestChunk
        if asm_setup:
            tc.code = asm_setup + "\n" + tc.code
        test_chunks.append(tc)

        # Once registers are no longer in use, they need to be marked as available again
        # so that the register allocator knows that they can be reused.
        return_test_regs(test_data, params)

    # Return the list of TestChunk objects. The framework will use these to split test chunks
    # across test files (based on num_testcases counts) and combine their data for the final output.
    return test_chunks
```

Additional documentation for all of these functions (and many other helper functions) is available as docstrings in the Python files where they are defined. Other standard coverpoint generators can also be used as examples.
Special coverpoint generators should only be used when the coverpoint being tested requires a more complex sequence of instructions or requires a different pattern than most other coverpoints that apply to a particular instruction. They use significantly more handwritten assembly and need support to be explicitly added for each instruction type (or in some cases each individual instruction).
Special coverpoint generators vary widely, so it is impossible to provide a complete guide,
but they usually follow the same initial flow as a standard coverpoint and then diverge
where the call to format_single_testcase would be. Instead of calling format_single_testcase,
special coverpoint generators use test_data.begin_test_chunk() and test_data.end_test_chunk()
to wrap their inline assembly in a single TestChunk. The typical pattern is:
```python
tc = test_data.begin_test_chunk()
test_lines: list[str] = []
# ... build assembly lines, call test_data.add_testcase(), load_int_reg(), write_sigupd(), etc. ...
tc.code = "\n".join(test_lines)
return [test_data.end_test_chunk()]
```

While most of this code is handwritten, you are still encouraged to use helper Python
functions. The most useful helpers for special coverpoints tend to be `load_int_reg` and
`write_sigupd`. See Python Instruction Formatters for details on those functions.
If you are writing a new special coverpoint generator, it is highly encouraged to look at
several examples from the generators/testgen/src/testgen/coverpoints/special directory.
Adding a new instruction format requires adding a new SystemVerilog sample template and a Python instruction formatter.
All instruction formats need a template file in
generators/coverage/src/covergroupgen/templates.
These templates should be named sample_<INSTRUCTION_TYPE>.sv.
The instruction format templates are directly included in a SystemVerilog
case statement.
All instruction sample templates must match the following format:
"INSTR" : begin
ins.add_rd(0);
ins.add_rs1(1);
ins.add_rs2(2);
endINSTRwill be replaced by the instruction name and is the key in a case statement.insis a data structure that holds all information about the current instruction. The purpose of the sample function is to populate the data structure.- The various
add_*functions assign parameters from the instruction's assembly string to variables. The number indicates which parameter from the assembly string should be assigned to the specified variable. For example, in the code above, the first parameter is assigned tord, the second tors1, and the third tors2. - For a full list of all the
add_*functions, seeRISCV_instruction_base.svh.
See the generators/coverage/src/covergroupgen/templates
directory for example instruction format sample sequences.
The standard coverpoint generators rely on instruction formatters to produce the necessary assembly to test each instruction. Each instruction type needs a Python generator that produces an assembly language test.
The following applies to all instruction formatters:
- All instruction formatters must go in `generators/testgen/src/testgen/formatters/types`. All Python files in that directory are automatically discovered and imported.
- All instruction formatter functions must be decorated with the `@add_instruction_formatter("<TYPE_NAME>", <type_name>_config)` decorator. This tells the framework which instruction type to use this generator for and how to generate the parameters for it.
  - The `<type_name>_config` argument is an `InstructionTypeConfig` object that contains the `required_params` for an instruction type along with constraints on those parameters, like `reg_range`, `imm_range`, etc. See the `InstructionTypeConfig` docstring in `generators/testgen/src/testgen/formatters/registry.py` for more details.
- All instruction formatter functions must use the following signature:

  ```python
  def format_name_type(instr_name: str, test_data: TestData, params: InstructionParams) -> tuple[list[str], list[str], list[str]]:
  ```

  - `instr_name` is the instruction currently being tested. This allows instruction formatters to be reused for multiple instructions of the same type.
  - `test_data` is a dataclass that is passed to all parts of the test generation process and stores the signature count, test values, debug strings, etc.
  - `params` is a dataclass containing values for all of the instruction arguments (rs1, rs1val, immval, etc.). See its definition in `generators/testgen/src/testgen/data/params.py` for all of the options.
  - The generator must return a tuple of three lists of strings:
    1. Code to set up the test.
    2. The test itself (usually just the instruction being tested).
    3. Code to check the results of the test (usually signature checks).
A good example to get familiar with the structure of an instruction formatter is the
r_type.
It is also included below with many additional comments added to explain how it works.
```python
# The InstructionTypeConfig object is used when generating random parameters.
# At a minimum, it specifies the `required_params` that must be populated with values.
# It can also optionally specify constraints or additional details for these parameters,
# including reg_range, imm_bits, imm_signed, etc.
r_config = InstructionTypeConfig(required_params={"rd", "rs1", "rs1val", "rs2", "rs2val"})

# All instruction formatters use the add_instruction_formatter decorator to specify
# what instruction type it applies to and what configuration object to use.
@add_instruction_formatter("R", r_config)
# Instruction formatters all use the standard signature described above
def format_r_type(instr_name: str, test_data: TestData, params: InstructionParams) -> tuple[list[str], list[str], list[str]]:
    """Format R-type instruction."""
    # The assert statements are used to satisfy the type checker and help ensure
    # none of the necessary params are left out of the required_params above.
    assert params.rs1 is not None and params.rs1val is not None
    assert params.rs2 is not None and params.rs2val is not None
    assert params.rd is not None

    # setup is a list of strings of assembly code that should be run before the test.
    # The most common thing to do here is populate registers with specified values.
    # The load_int_reg and load_float_reg helper functions load values from memory
    # to ensure the instruction sequence is consistent and to simplify the process
    # of populating floating-point values. The functions will automatically include
    # the values in the data section at the end of the test.
    setup = [
        load_int_reg("rs1", params.rs1, params.rs1val, test_data),
        load_int_reg("rs2", params.rs2, params.rs2val, test_data),
    ]

    # test is a (usually one item) list of strings with the assembly to actually
    # run the test. Note that all of the arguments to the instruction come from
    # the params object that is passed to the formatter. This allows the coverpoint
    # generators to customize the instruction arguments as needed.
    test = [
        f"{instr_name} x{params.rd}, x{params.rs1}, x{params.rs2} # perform operation",
    ]

    # check is a list of strings of assembly code that validate the results of the test.
    # While check can contain anything, it is usually made up of calls to the
    # write_sigupd helper function. This function inserts a RVTEST_SIGUPD macro with
    # all of the appropriate arguments populated.
    check = [write_sigupd(params.rd, test_data, "int")]

    # The three lists of strings are returned as a tuple. They are usually joined
    # with newlines and then passed back to the coverpoint generator.
    return (setup, test, check)
```

Additional documentation for all of these functions (and many other helper functions) is available as docstrings in the Python files where they are defined. Other instruction formatters can also be used as examples.
Privileged tests are much less structured than unprivileged instruction tests. Therefore, their testplans are expressed in English on spreadsheets. They are described with hand-written SystemVerilog coverpoints using RVVI to access architectural state. The tests are generated with Python scripts that insert the necessary signature handling to be self-checking.
Although most unprivileged tests involve instructions that are easiest to automatically test through CSV tables described above, unstructured unprivileged tests can be generated with the privileged test approach. See ZicsrF for an unprivileged example.
Privileged tests should be partitioned into suites that generally can run for a certain combination of extensions (e.g. ExceptionsZc requires Sm for general exception capability + Zca for compressed instructions). Putting exceptions for compressed instructions in ExceptionsSm would not be a good organization because one would attempt to run them on all systems with machine mode, even if compressed instructions did not exist, and the behavior of running a compressed instruction on a machine without Zca is Unspecified.
Privileged tests should work for both RV32 and RV64 so there is not a need for separate suites based on XLEN. The testplan, coverpoints, and tests can call out portions of a test that differ based on XLEN.
Privileged tests are described with Google Sheets spreadsheets hosted in CSC/WorkGroups/TestPlan accessible to RVI CSC members. There should be one spreadsheet for each category of test suites (e.g. Exceptions, Interrupts), with one tab per test suite (e.g. ExceptionsS, ExceptionsZc, ExceptionsZaamo).
Each tab should have the following columns:
- Coverpoint: the name that will be consistently used across coverpoints, tests, and linkage to normative rules.
- Goal: brief summary. Avoid words like "test."
- Description: a precise statement of the conditions being checked, suitable for somebody other than the author to turn into coverpoints and tests.
- Expectation: what will happen (e.g. trap, CSR takes on a value, etc.)
- Bins: Number of bins, expressed as a product of independent states where possible to help the test writer confirm the intended number of possibilities have been exercised. (e.g. "2 MIE * 2 TW", where each of these signals has two possibilities, giving 4 bins).
- Normative Rule: (optional) name of associated normative rule. Not all coverpoints have to be driven by normative rules; some may exercise combinations of features.
There should be one coverage file for each tab of a testplan spreadsheet. Create coverpoints/priv/<suite>_coverage.svh and coverpoints/priv/<suite>_coverage_init.svh. Look at ExceptionsZc_coverage.svh and ExceptionsZc_coverage_init.svh for reference. Use the same idioms; don't get creative. Names should exactly match, subject to capitalization restrictions.
Write SystemVerilog coverpoints. Complex coverpoints are normally a cross-product of simpler coverpoints.
The coverpoints use architectural state conveyed over Extended RVVI (see Certification Test Plan for signals available). It is easiest to write coverpoints in terms of ins.current and ins.prev, the current and previous instructions. If the test is too complicated to express just in terms of these, it may be necessary to leave out some conditions. For example, virtual memory coverpoints don't specify all of the page table entries.
As with unprivileged tests, add a YAML file with the Normative Rule - Coverpoint Mapping.
The `<suite>_coverage.svh` file can include

```
`include "general/RISCV_coverage_standard_coverpoints.svh"
```

which defines useful standard coverpoints such as `priv_mode_m` applicable to many suites.
The preferred idiom to check the current instruction or instruction field is
```systemverilog
csrrw: coverpoint ins.current.insn {
    wildcard bins csrrw = {CSRRW};
}
mcause: coverpoint ins.current.insn[31:20] {
    bins mcause = {CSR_MCAUSE};
}
```

There is a complete listing of instruction and CSR names in framework/src/fcov/coverage/RISCV_imported_decode_pkg.svh. Do not modify that file by hand. It is generated using riscv-opcodes. To add new instructions or CSRs, add them to riscv-opcodes and then regenerate the file.
An alternate idiom is to specify bitfields directly. For example, this is necessary for compressed instructions that are not in RISCV_imported_decode_pkg.svh. Observe how the coverpoint uses insn[15:0] and wildcard bins with ? for don't care in some bitfields of the instruction. Also observe how the coverpoint uses `ifdef to define bins that only apply to a certain XLEN or if a certain extension or parameter is supported.
storeops: coverpoint ins.current.insn[15:0] {
wildcard bins c_sw = {16'b110_???_???_??_???_00};
wildcard bins c_swsp = {16'b110_??????_?????_10};
`ifdef ZCB_SUPPORTED
wildcard bins c_sb = {16'b100010_???_??_???_00};
wildcard bins c_sh = {16'b100011_???_0?_???_00};
`endif
`ifdef XLEN64
wildcard bins c_sd = {16'b111_???_???_??_???_00};
wildcard bins c_sdsp = {16'b111_??????_?????_10};
`endif
}
The preferred idiom to check the value of a CSR bitfield is to use the get_csr_val function, specifying the CSR name and bitfield (here, mstatus and tsr). `SAMPLE_BEFORE gets the value before the instruction retires, while `SAMPLE_AFTER gets the value after the instruction retires. The CSR names and fields match the ISA manual and are listed in framework/src/act/fcov/coverage/RISCV_coverage_csr.svh.
old_mstatus_tsr: coverpoint get_csr_val(ins.hart, ins.issue, `SAMPLE_BEFORE, "mstatus", "tsr")[0] {
}
get_csr_val returns an XLEN-bit vector with the relevant value in the least significant bits. If you are planning to use implicit bins, make sure to extract the relevant bits so that the tool does not try to fill in bins for all XLEN bits.
CSRs with no bitfields can be accessed by passing the CSR name again as the field name.
An alternate idiom is to refer to the RVVI structure, which holds the value of the CSR before (ins.prev) or after (ins.current) the instruction.
mtvec_stvec_ne: coverpoint {ins.current.csr[CSR_MTVEC] != ins.current.csr[CSR_STVEC]} {
bins notequal = {1};
}
The coverpoints given in the spreadsheet are usually cross-products of simpler coverpoints. The following example shows how to define coverpoints for the three LSBs of the address and for whether an address is illegal, and then cross them with the storeops defined above to create up to 6 bins of store ops * 8 bins of address LSBs for cp_store_address_misaligned, for a total of 48 bins.
adr_LSBs: coverpoint {ins.current.rs1_val + ins.current.imm}[2:0] {
// auto fills 000 through 111
}
illegal_address: coverpoint ins.current.imm + ins.current.rs1_val {
bins illegal = {`RVMODEL_ACCESS_FAULT_ADDRESS};
}
cp_store_address_misaligned: cross storeops, adr_LSBs;
cp_store_access_fault: cross storeops, illegal_address;
If additional state is absolutely necessary, it could be added to the Extended RVVI specification. This involves changing the spec and the tools that read and write it, so it should not be done if there is any other reasonable way to write a "good enough" coverpoint. Open an issue to discuss other potential options before proceeding down this route.
Each privileged test needs a Python generator that produces an assembly language test that exercises the relevant behaviors. Privileged test generators use similar methods to Special Generators, so make sure to read that portion of this guide first.
The following applies to all privileged test generators:
- All privileged test generators must go in generators/testgen/src/testgen/priv/extensions. All Python files in that directory are automatically discovered and imported.
- All privileged generator functions must be decorated with the @add_priv_test_generator("<test_name>", required_extensions=["<extension_name>", "<extension_name>"]) decorator. This tells the framework what to name the test and which extensions are required to run it on a target. Optionally, march_extensions=["<extension_name>", "<extension_name>"] can also be specified to indicate which extensions should be passed in the march string to the compiler. If not specified, this defaults to the list from required_extensions.
- All privileged generator functions must use the following signature:
  def make_name(test_data: TestData) -> list[str]:
  - test_data is a dataclass that is passed to all parts of the test generation process and stores the signature count, debug strings, etc.
  - The generator must return a list of strings. They will be combined with newlines separating each string in the final output test.
The body of most privileged test generator functions is a series of calls to other functions that generate the code for each coverpoint. For example, the main generator from Sm.py is included below:
# All priv test generators use the @add_priv_test_generator decorator to specify the
# name and required extensions.
@add_priv_test_generator("Sm", required_extensions=["Sm", "Zicsr"])
# All priv test generators must use the standard function signature.
def make_sm(test_data: TestData) -> list[str]:
    """Generate tests for Sm machine-mode testsuite."""
    lines: list[str] = []
    # Priv test generators call other internal functions to build up the test
    lines.extend(_generate_mcause_tests(test_data))
    lines.extend(_generate_mstatus_sd_tests(test_data))
    lines.extend(_generate_priv_inst_tests(test_data))
    lines.extend(_generate_mret_tests(test_data))
    lines.extend(_generate_sret_tests(test_data))
    lines.extend(_generate_mcsr_tests(test_data))
    lines.extend(_generate_mcsr_cntr_tests(test_data))
    # A list of assembly strings is returned. These strings will be joined together
    # with newlines in the final output file.
    return lines

There are a few important gotchas to keep in mind when writing privileged tests:
- There should be no loops in the assembly code. Loops make debugging difficult and prevent testcases from being uniquely associated with debug strings. Instead, use loops in the Python generator to emit repetitive assembly.
- The trap handler skips 4 bytes when returning to the test. This means that every instruction that could trap must be followed by a nop (or two c.nops if compressed instructions are supported). Alternatively, this skipped instruction can be used to change a counter/indicator of some kind to detect whether a trap was taken. This is generally not necessary because the total number of traps is always checked at the end of a test.
- Different implementations may trap on different CSRs, so always assume a CSR access could trap. The CSRRW, CSRRS, CSRR, etc. macros include a nop after the CSR access and should always be used in place of raw CSR instructions.
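The no-loops and trap-padding rules can be sketched together in Python: a hypothetical helper (the function name and label scheme are illustrative, not from the framework) that uses a Python loop to unroll one straight-line testcase per CSR, each access using the trap-safe CSRRW macro style described above:

```python
# Hypothetical sketch: emit straight-line assembly from a Python loop instead
# of a loop in the assembly itself. The CSRRW macro (rather than a raw csrrw)
# pads with a nop so the trap handler's 4-byte skip lands harmlessly.

def gen_csr_write_tests(csrs: list[str]) -> list[str]:
    """Emit one trap-safe CSR write per CSR, unrolled in Python."""
    lines: list[str] = []
    for i, csr in enumerate(csrs):
        lines.append(f"testcase_{i}:")            # unique label per testcase
        lines.append(f"  CSRRW(t0, {csr}, t1)")   # macro includes the padding nop
    return lines
```

Because the unrolling happens in Python, every emitted instruction keeps a unique label, so a failure can be traced directly to one testcase.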
For examples of how to write the individual coverpoint helper functions for privileged test generators, review Sm.py and ExceptionsZc.py. Here are a few additional notes that apply to all privileged test helper functions:
- Do not hardcode register numbers. Instead use the register allocator described above for unprivileged coverpoints (test_data.int_regs.get_registers(3), etc.).
- Begin each coverpoint with a call to comment_banner(coverpoint, "comments") to add a descriptive marker to the generated test.
- Include a call to test_data.add_testcase at the beginning of each testcase within a coverpoint. This creates the appropriate labels and debug strings. If possible, put the call right before the instruction being tested.
- To the extent possible, reuse functions, and define new helper functions if a snippet of assembly seems like it will be useful in multiple tests. See csr.py for a few examples, including gen_csr_read_sigupd, gen_csr_write_sigupd, and csr_walk_test.
- Tests are automatically formatted as follows:
  - Pre-processor directives (#ifdef, etc.), comments, and labels are unindented.
  - Code (instructions and macros) is indented by 2 spaces.
  - If deviations from this help the readability of a test (most often indenting certain comments), use the INDENT global at the beginning of the line (e.g. f"{INDENT}# comment").
After writing initial drafts of coverpoints and tests, run them with make coverage EXTENSIONS=ExceptionsZc. Omit the --jobs flag so the tests run in order, which makes it easier to localize which one failed. By giving the name of the test suite (e.g. ExceptionsZc), you run only the new suite of interest, saving runtime.
You can expect syntax errors in the tests that are easy to locate based on the compiler messages.
Once those are resolved, you may have bugs that cause an infinite loop. If the test is taking a long time to run, halt it. Look at the log file in (e.g.) work/sail-rv64-max/build/priv/ExceptionsZc/ExceptionsZc.sig.trace. Scroll through until you find the misbehavior that put the system into an infinite loop.
You can expect syntax errors in the coverpoints that are easy to locate based on the filename and line number reported by the HDL simulator. Look in work/sail-rv64-max/coverage/priv/ExceptionsZc/ExceptionsZc.ucdb.log for messages.
Once these are resolved, look in a coverage report directory such as work/sail-rv64-max/reports/_overall_summary.txt. Expect to have less than 100% coverage on the new coverpoints on the first try. Look in the same directory at <suite>_report.txt and <suite>_uncovered.txt for details about the coverpoint bins being hit and missed.
Diagnosing missing coverage can be difficult. The bug could be in the coverpoint or the tests. It is helpful to add a statement to display RVVI signals relevant to the coverpoint after each instruction executes, so you can compare them against expectation and localize the problem. In (e.g.) the exceptionszc_sample function of coverpoints/priv/ExceptionsZc_coverage.svh, add a statement like:
$display("mode: %b, medel: %b, funct3: %b, rs1_1_0: %b, pc_1: %b, offset: %b ",
ins.current.mode,
ins.current.csr[12'h302],
ins.current.insn[14:12],
ins.current.rs1_val[1:0],
ins.current.pc_rdata[1],
ins.current.imm[1:0]);
Then look in the work/sail-rv64-max/coverage/priv/ExceptionsZc/ExceptionsZc.ucdb.log file to see how these RVVI signals change after each instruction. Find the instruction that should have hit a bin, and see which coverpoint input(s) aren't taking on the necessary values. It is often useful to compare the *.ucdb.log file with the *.trace file in work/sail-rv64-max/coverage/priv/ExceptionsZc.
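Scanning the $display output mechanically can speed this comparison up. The sketch below is a hypothetical helper, not part of the framework; it assumes the "name: value" format of the $display statement above and extracts every value a named field takes across the log:

```python
# Hypothetical helper: pull each value of a named field out of $display lines
# like "mode: 11, medel: 0, funct3: 010" captured in a *.ucdb.log file.
import re

def find_field_values(log_text: str, field: str) -> list[str]:
    """Return each value the given field takes, in order of appearance."""
    pattern = re.compile(rf"{re.escape(field)}: (\S+)")
    # trailing commas come from the $display formatting, so strip them
    return [m.group(1).rstrip(",") for m in pattern.finditer(log_text)]
```

Running this for each coverpoint input over the log makes it easy to spot which input never takes the value a missed bin requires.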
The Makefile and CI workflow auto-discover configs from the config/ directory. No changes to the Makefile or GitHub Actions workflow are needed when adding a new config.
Create a configuration directory following the instructions in the Configuration section of the README. In addition to the config files described there, add a run_cmd.txt file containing a single-line shell command to run an ELF. The ELF path is appended to the end of the command by run_tests.py. See config/spike/spike-rv64-max/run_cmd.txt for a reference example.
The command can include {debug:...} placeholders for DUT-specific trace flags that are only enabled when running with DEBUG=1. For example:
spike {debug:-l --log-commits --log=__TRACEFILE__} --isa=rv64gc
When DEBUG=1 is set, the placeholder expands to its contents (e.g., spike -l --log-commits --log=<trace_file> --isa=rv64gc). Otherwise, it is removed (e.g., spike --isa=rv64gc). stdout and stderr are captured in the existing log files under work/<config>/logs/.
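The expansion rule can be sketched as follows. This is an illustrative re-implementation for clarity, not the actual run_tests.py code:

```python
# Sketch of {debug:...} expansion: with debug enabled the block's contents are
# kept; otherwise the whole block is removed.
import re

_DEBUG_BLOCK = re.compile(r"\{debug:([^}]*)\}")

def expand_debug(cmd: str, debug: bool) -> str:
    repl = r"\1" if debug else ""
    expanded = _DEBUG_BLOCK.sub(repl, cmd)
    # collapse the doubled spaces left behind when a block is stripped
    return re.sub(r"\s+", " ", expanded).strip()
```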
When debug mode enables simulator tracing, trace output can interleave with RVCP-SUMMARY lines and prevent run_tests.py from detecting pass/fail. Two placeholders solve this by redirecting output to per-test files:
- __TRACEFILE__ — Use when the simulator can redirect its trace output to a file. run_tests.py substitutes this with a per-test .trace.log path so trace output goes to a separate file, keeping RVCP-SUMMARY lines clean in the main log. Examples:
  spike {debug:-l --log-commits --log=__TRACEFILE__} --isa=rv64gc
  qemu-system-riscv64 {debug:-d in_asm,int -D __TRACEFILE__} -nographic ...
  sail_riscv_sim {debug:--trace --trace-output __TRACEFILE__} --config ...
- __SUMMARYFILE__ — Use when the simulator cannot redirect its trace but can redirect its console output (which contains RVCP-SUMMARY) to a file. When present, run_tests.py reads RVCP-SUMMARY from this .summary.log file instead of the main log. Example:
  wsim --sim verilator {debug:--sim questa --lockstepverbose --args '+UART_LOG=1 +UART_LOG_FILE=__SUMMARYFILE__'} rv64gc --elf
Both placeholders should be placed inside {debug:...} blocks since they are only needed when trace output is enabled. When debug is off, the placeholders are stripped along with the rest of the block.
Once the config directory exists and has a run_cmd.txt file, the following Make targets are automatically available:
make <config-name> # Build ELFs and run tests for this config
make <group> # Build and run all configs in the group

The <group> can be any ancestor directory name. For example, configs under config/cores/cvw/ produce targets for both make cvw (all CVW configs) and make cores (all configs under cores/).
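The group rule amounts to: every ancestor directory name below config/ is a group that contains the config. A minimal sketch of that rule (not the actual Makefile logic):

```python
# Illustrative sketch: map a config path to the set of group names that
# include it, per the "any ancestor directory name" rule.
from pathlib import PurePosixPath

def groups_for(config_path: str) -> set[str]:
    """Return every group name whose Make target runs this config."""
    rel = PurePosixPath(config_path).relative_to("config")
    # all ancestor directory names, excluding the config directory itself
    return set(rel.parts[:-1])
```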
To run a simulator's configs in GitHub Actions CI, create a ci.yaml file in the simulator's group directory (e.g., config/<group>/ci.yaml):
ci_enabled: true # Set false to skip in CI
exclude_extensions: "Ext1,Ext2" # Extensions to skip (optional)
apt_packages: "libfoo libbar" # apt packages needed at runtime (optional)
install_script: ".github/scripts/install-<sim>.sh" # Build script, skipped on cache hit (optional)
setup_script: ".github/scripts/setup-<sim>.sh" # Setup script, always run before running tests

Field details:
- ci_enabled: Controls whether configs under this group appear in the CI matrix. Defaults to true if omitted.
- exclude_extensions: Comma-separated list of extensions to exclude when running this simulator's tests in CI. Use for known failures with the simulator so CI passes until bugs are resolved upstream.
- apt_packages: Space-separated list of apt packages required to run the simulator. These are installed unconditionally (even on cache hit).
- install_script: Path to a shell script that builds and installs the simulator. Receives the install directory as its first argument. The built simulator is cached — the script only runs on cache miss. The cache key is derived from the script's content hash, so updating the script (e.g., bumping a version) automatically invalidates the cache.
- setup_script: Path to a shell script that sets up the simulator environment. This script is always run before running tests.
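As a rough sketch of the defaulting rules above (the function name and dict shape are assumptions for illustration, not the actual CI workflow code), the parsed ci.yaml fields might be normalized like this:

```python
# Sketch: apply the documented ci.yaml defaults to an already-parsed mapping.
# ci_enabled defaults to True; exclude_extensions is comma-separated;
# apt_packages is space-separated; the script fields are optional paths.

def normalize_ci_config(raw: dict) -> dict:
    return {
        "ci_enabled": raw.get("ci_enabled", True),
        "exclude_extensions": [e for e in raw.get("exclude_extensions", "").split(",") if e],
        "apt_packages": raw.get("apt_packages", "").split(),
        "install_script": raw.get("install_script"),
        "setup_script": raw.get("setup_script"),
    }
```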