
Behavior similar to RSpec's --only-failures flag #316

Open
robhanlon22 opened this issue Sep 22, 2022 · 3 comments
Labels: feature (New functionality), help wanted (Extra attention is needed)

Comments

@robhanlon22 (Contributor)

https://relishapp.com/rspec/rspec-core/docs/command-line/only-failures

RSpec can optionally record metadata about failing tests in a text file, which subsequent runs can consume via --only-failures (run only the tests that failed) and --next-failure (run the failed tests, but fail fast).

This could be really useful in REPL-driven workflows:

(kaocha.repl/run :only-failures)
(kaocha.repl/run :next-failure)
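
For illustration, here's a rough sketch of how something like this could be approximated at the REPL today. None of these helpers exist in Kaocha; the sketch assumes the map returned by kaocha.repl/run nests child results under :kaocha.result/tests and that leaf results carry :kaocha.testable/id along with :kaocha.result/fail and :kaocha.result/error counts:

(require '[kaocha.repl :as repl])

;; Hypothetical helpers, not part of Kaocha.
(defonce failed-ids (atom #{})) ; leaf test ids that failed in the last run

(defn- leaf-failures
  "Collect the ids of failing leaf testables from a run result."
  [result]
  (->> (tree-seq :kaocha.result/tests :kaocha.result/tests result)
       (remove :kaocha.result/tests)   ; drop suite/ns nodes, keep leaves
       (filter #(pos? (+ (:kaocha.result/fail % 0)
                         (:kaocha.result/error % 0))))
       (keep :kaocha.testable/id)
       set))

(defn run*
  "Like kaocha.repl/run, but remember which leaf tests failed."
  [& args]
  (let [result (apply repl/run args)]
    (reset! failed-ids (leaf-failures result))
    result))

(defn run-only-failures
  "Re-run only the tests that failed in the previous run* call."
  []
  (if-let [ids (seq @failed-ids)]
    (apply run* ids)
    (println "No remembered failures.")))

A real implementation would presumably live in a plugin and persist the ids to disk between JVM sessions, the way RSpec does with its status file.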
@alysbrooks (Member)

@robhanlon22 Thanks for the idea! FYI, we do have bits and pieces of this functionality:

  • --watch automatically reruns failures, but I don't think using watch is particularly ergonomic in the REPL.
  • You can use the print-invocations plugin to show you the command to run the failing tests.
  • You can print the full result map with --print-result.

Would having --only-failures in the command line be useful to you as well?

@plexus added the feature (New functionality) label Nov 25, 2022
@plexus (Member) commented Nov 25, 2022

I moved this to a new "New features" column on the CT board. While I'm not sure about the exact syntax proposal, I do like the idea, but we'll have to think it through a bit more.

@alysbrooks added the help wanted (Extra attention is needed) label Jul 3, 2023
@alysbrooks (Member)

I think this functionality is useful enough that we want to keep this open. I also think we could build other functionality using a feature that tracks previous runs. For example, we could profile across runs and show, e.g., major increases in the time it takes a test to run.
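
To sketch what cross-run tracking could look like: if each run appended a small summary to an EDN file, later runs could diff failures (or, with timing data from something like the profiling plugin, durations) against it. The file name and helpers below are invented for illustration, and the result-map keys are the same assumptions as in the sketch above:

(require '[clojure.edn :as edn]
         '[clojure.java.io :as io])

(def history-file ".kaocha-history.edn") ; hypothetical location

(defn record-run!
  "Append one compact summary per run to the history file."
  [result]
  (spit history-file
        (prn-str {:at    (System/currentTimeMillis)
                  :tests (->> (tree-seq :kaocha.result/tests
                                        :kaocha.result/tests result)
                              (remove :kaocha.result/tests)
                              (mapv (juxt :kaocha.testable/id
                                          :kaocha.result/fail
                                          :kaocha.result/error)))})
        :append true))

(defn history
  "Read back all recorded run summaries, oldest first."
  []
  (when (.exists (io/file history-file))
    (with-open [r (java.io.PushbackReader. (io/reader history-file))]
      (doall (take-while some? (repeatedly #(edn/read {:eof nil} r)))))))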

Projects
Status: 🎅 New features
Development

No branches or pull requests

3 participants