Commit 760febd (1 parent: c7e18e4)

move tests .csv files into tests/tests_tables/
- update READMEs

14 files changed: +19 -15 lines
README.md (+4 -3)

@@ -210,10 +210,11 @@ _You can also use the `flipjump.assemble_run_according_to_cmd_line_args(cmd_line
   - inout/ - Contains the .in and .out files for each test.
   - conftest.py - The pytest configuration file. The tests are being generated here.
   - test_fj.py - The base test functions for compilation and running ([how to run](tests/README.md#run-the-tests)).
-  - test_compile_*.csv - Arguments for the compile tests ([compile test arguments format](tests/README.md#compile-csvs-format)).
-  - test_run_*.csv - Arguments for the run tests ([run test arguments format](tests/README.md#run-csvs-format)).
   - conf.json - The tests groups+order lists.
-  - xfail_*.csv - [xfail](https://docs.pytest.org/en/7.1.x/how-to/skipping.html#xfail-mark-test-functions-as-expected-to-fail) these tests.
+  - [tests_tables/](tests/tests_tables)
+    - test_compile_*.csv - Arguments for the compile tests ([compile test arguments format](tests/README.md#compile-csvs-format)).
+    - test_run_*.csv - Arguments for the run tests ([run test arguments format](tests/README.md#run-csvs-format)).
+    - xfail_*.csv - [xfail](https://docs.pytest.org/en/7.1.x/how-to/skipping.html#xfail-mark-test-functions-as-expected-to-fail) these tests.


 # Read More - Extra Documentation

flipjump/stl/README.md (+3 -3)

@@ -18,7 +18,7 @@ Offers outputting constant chars/strings.

 **@note**: It should be the first file in the compilation order.

-### [bit/](bit/)
+### [bit/](bit)
 Defines the `bit` data-structure (for binary variables).

 Offers macros for manipulating binary variables and vectors (i.e. numbers):
@@ -35,7 +35,7 @@ Offers macros for manipulating binary variables and vectors (i.e. numbers):
 - [div.fj](bit/div.fj) - div10, {i}div, {i}div_loop (i stands for signed)
 - [pointers.fj](bit/pointers.fj) - bit-vec pointers: flip, jump, xor_to, xor_from, inc/dec; pointers init

-### [hex/](hex/)
+### [hex/](hex)
 Defines the `hex` data-structure (for hexadecimal variables).

 **They are smaller and faster than 4 `bit`s.**
@@ -53,7 +53,7 @@ Offers macros for manipulating hexadecimal variables and vectors (i.e. numbers):
 - [mul.fj](hex/mul.fj) - add_mul, mul (works for signed & unsigned)
 - [div.fj](hex/div.fj) - div, idiv (signed)
 - [tables_init.fj](hex/tables_init.fj) - initializes the "results-tables" for the next hex macros: or,and, add,sub, cmp, mul
-- [pointers/](hex/pointers/) - hex-vec pointers subdirectory: [flip](hex/pointers/xor_to_pointer.fj), [jump](hex/pointers/basic_pointers.fj), [xor_to](hex/pointers/xor_to_pointer.fj), [xor_from](hex/pointers/xor_from_pointer.fj); [stack](hex/pointers/stack.fj)/[pointers](hex/pointers/basic_pointers.fj) init. [pointer arithmetics](hex/pointers/pointer_arithmetics.fj), [stack arithmetics + push/pop](hex/pointers/stack.fj).
+- [pointers/](hex/pointers) - hex-vec pointers subdirectory: [flip](hex/pointers/xor_to_pointer.fj), [jump](hex/pointers/basic_pointers.fj), [xor_to](hex/pointers/xor_to_pointer.fj), [xor_from](hex/pointers/xor_from_pointer.fj); [stack](hex/pointers/stack.fj)/[pointers](hex/pointers/basic_pointers.fj) init. [pointer arithmetics](hex/pointers/pointer_arithmetics.fj), [stack arithmetics + push/pop](hex/pointers/stack.fj).

 ### [casting.fj](casting.fj)
 Offers casting between bits and hexes.

tests/README.md (+5 -5)

@@ -106,7 +106,7 @@ Then add a new line to the relevant compile-csv and run-csv files, according to

 ### Compile CSVs format:

-(files with the next format: ```test_compile_*.csv```)
+(files with the next format: ```tests_tables/test_compile_*.csv```)

 | test name | .fj paths | out .fjm path | memory width | version | flags | use stl | treat warnings as errors |
 |--------------|-------------------------------------------------------------|------------------------------|--------------|---------|-------|---------|--------------------------|
@@ -118,7 +118,7 @@ Note that you can specify a single file, or a '|' separated list of files in the

 ### Run CSVs format:

-(files with the next format: ```test_run_*.csv```)
+(files with the next format: ```tests_tables/test_run_*.csv```)

 | test name | .fjm path | input file path | output file path | is input a binary file | is output a binary file |
 |--------------|------------------------------|---------------------------|-----------------------------|------------------------|-------------------------|
@@ -130,6 +130,6 @@ Note that you can also emit specifying a file in the input/output cell, and leav

 ### Xfail Lists

-If you want to add your test, but you want it to [xfail](https://docs.pytest.org/en/7.1.x/how-to/skipping.html#xfail-mark-test-functions-as-expected-to-fail), you can add your test name to:
-- ```xfail_compile.csv``` - to mark its compilation as expected to fail.
-- ```xfail_run.csv``` - to mark its run as expected to fail.
+If you want to add your test, but you want it to [xfail](https://docs.pytest.org/en/7.1.x/how-to/skipping.html#xfail-mark-test-functions-as-expected-to-fail), you can add your test name (in its own line) to:
+- ```tests_tables/xfail_compile.csv``` - to mark its compilation as expected to fail.
+- ```tests_tables/xfail_run.csv``` - to mark its run as expected to fail.
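The xfail tables described above hold one test name per line. A minimal sketch of reading such a file, assuming only that format (the `read_xfail_list` helper and the demo file are illustrative stand-ins, not the repo's actual `argument_line_iterator`):

```python
# Sketch only: read_xfail_list and the throwaway demo file are illustrative,
# not the repository's real helpers.
import csv
import tempfile
from pathlib import Path

def read_xfail_list(csv_path: Path) -> list:
    """Collect the first cell of every non-empty row (one test name per line)."""
    with open(csv_path, newline='') as f:
        return [row[0] for row in csv.reader(f) if row]

# Demo file standing in for tests_tables/xfail_run.csv:
demo = Path(tempfile.mkdtemp()) / "xfail_run.csv"
demo.write_text("calc\nfunc\n")
xfail_names = read_xfail_list(demo)  # → ['calc', 'func']
```

Any test whose name appears in the resulting list would then be marked with pytest's xfail.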

tests/conftest.py (+7 -4)

@@ -33,6 +33,7 @@


 TESTS_PATH = Path(__file__).parent
+TESTS_TABLES_PATH = TESTS_PATH / 'tests_tables'
 with open(TESTS_PATH / 'conf.json', 'r') as tests_json:
     TESTS_OPTIONS = json.load(tests_json)

@@ -361,8 +362,8 @@ def get_tests_from_csvs(get_option: Callable[[str], Any]) -> TestsType:

     types_to_run__heavy_first = get_test_types_to_run__heavy_first(get_option)

-    compile_xfail_list = [line[0] for line in argument_line_iterator(TESTS_PATH / "xfail_compile.csv", 1)]
-    run_xfail_list = [line[0] for line in argument_line_iterator(TESTS_PATH / "xfail_run.csv", 1)]
+    compile_xfail_list = [line[0] for line in argument_line_iterator(TESTS_TABLES_PATH / "xfail_compile.csv", 1)]
+    run_xfail_list = [line[0] for line in argument_line_iterator(TESTS_TABLES_PATH / "xfail_run.csv", 1)]

     save_debug_file, debug_info_length = get_option(DEBUG_INFO_FLAG)
     if is_parallel_active():
@@ -371,7 +372,7 @@ def get_tests_from_csvs(get_option: Callable[[str], Any]) -> TestsType:
     compile_tests: List[ParameterSet] = []
     if check_compile_tests:
         compiles_csvs = {
-            test_type: TESTS_PATH / f"test_compile_{test_type}.csv" for test_type in types_to_run__heavy_first
+            test_type: TESTS_TABLES_PATH / f"test_compile_{test_type}.csv" for test_type in types_to_run__heavy_first
         }
         for test_type in types_to_run__heavy_first:
             compile_tests.extend(
@@ -381,7 +382,9 @@ def get_tests_from_csvs(get_option: Callable[[str], Any]) -> TestsType:

     run_tests: List[ParameterSet] = []
     if check_run_tests:
-        run_csvs = {test_type: TESTS_PATH / f"test_run_{test_type}.csv" for test_type in types_to_run__heavy_first}
+        run_csvs = {
+            test_type: TESTS_TABLES_PATH / f"test_run_{test_type}.csv" for test_type in types_to_run__heavy_first
+        }
         for test_type in types_to_run__heavy_first:
             run_tests.extend(
                 get_run_tests_params_from_csv(run_csvs[test_type], run_xfail_list, save_debug_file, debug_info_length)
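The conftest.py change above reduces to resolving every csv table against the new `tests_tables/` subdirectory instead of the tests directory itself. A minimal pathlib sketch of the new resolution (the hard-coded `TESTS_PATH` and the `"fast"` test type are illustrative stand-ins):

```python
# Sketch only: TESTS_PATH is a hard-coded stand-in for the real conftest.py's
# Path(__file__).parent, and "fast" is just an example test type.
from pathlib import Path

TESTS_PATH = Path("tests")
TESTS_TABLES_PATH = TESTS_PATH / "tests_tables"  # new in this commit

def compile_csv_for(test_type: str) -> Path:
    # before this commit: TESTS_PATH / f"test_compile_{test_type}.csv"
    return TESTS_TABLES_PATH / f"test_compile_{test_type}.csv"

csv_path = compile_csv_for("fast")  # → tests/tests_tables/test_compile_fast.csv
```

Keeping the subdirectory in a single `TESTS_TABLES_PATH` constant means any future move of the tables only touches one line.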
6 files renamed without changes (the tests .csv files, moved into tests/tests_tables/).
