
Releases: pester/Pester

5.0.0-rc8

16 May 08:38
Pre-release
  • Fix mock counting across modules

Comparison with previous pre-release: 5.0.0-rc7...5.0.0-rc8

5.0.0-rc7 (GA2)

12 May 07:35
Pre-release

Fixes in this release:

  • Set-ItResult translates both Pending and Inconclusive to skipped
  • Mock splatting is fixed (but mocking Get-PackageSource no longer works)
  • TestRegistry and TestDrive tear down correctly when running Pester in Pester
  • Fixed resolving paths on PowerShell 5.1
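For context, the Set-ItResult change means a test like the following now surfaces as skipped (a minimal sketch; the test name and reason are made up):

```powershell
Describe "Set-ItResult translation" {
    It "is reported as skipped, not inconclusive" {
        # As of this release, both -Pending and -Inconclusive
        # are translated to a skipped test result.
        Set-ItResult -Inconclusive -Because "the backend is not reachable"
    }
}
```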

5.0.0-rc6...5.0.0-rc7

5.0.0-rc6

04 May 07:55
Pre-release

Pester v5 - RC6 (GA)

💵 I am spending most of my weekends making this happen. These release notes, for example, took multiple days to write and update. Please consider sponsoring me or sponsoring Pester.

🙋‍ Want to share feedback? Go here, or see more options in Questions?.

What is new?

🔥 Interested only in breaking changes? See breaking changes below.

🕹 Want to see a demo? Here is my talk What is new in Pester 5 from #BridgeConf.

Pester 5 RC6 (GA) is here! 🥳 This version is stable enough to be used for new projects, and is the recommended choice if you just started learning Pester. If you own any project, please give it a try and report back to help me identify any major bugs.

Discovery & Run

The fundamental change in this release is that Pester now runs in two phases: Discovery and Run. During discovery, it quickly scans your test files and discovers all the Describes, Contexts, Its and other Pester blocks.

This powers many of the features in this release and enables many others to be implemented in the future.

To reap the benefits, there are new rules to follow:

Put all your code into It, BeforeAll, BeforeEach, AfterAll or AfterEach. Put no code directly into Describe, Context, or at the top of your file without wrapping it in one of these blocks, unless you have a good reason to do so.

All misplaced code will run during Discovery, and its results won't be available during Run.

This allows Pester to control when all of your code is executed and to scope it correctly. It also keeps the amount of code executed during Discovery to a minimum, keeping it fast and responsive. See the discovery and script setup article for detailed information.

Put setup in BeforeAll

If your test suite already puts its setups and teardowns into Before* and After* blocks, all you need to do is move the file setup into a BeforeAll block:

BeforeAll {
    # DON'T use $MyInvocation.MyCommand.Path
    . $PSCommandPath.Replace('.Tests.ps1','.ps1')
}

Describe "Get-Cactus" {
    It "Returns 🌵" {
        Get-Cactus | Should -Be '🌵'
    }
}

See the migration script for a script that does it for you. Improvements are welcome, e.g. moving code between Describe and It into BeforeAll. See the discovery and script setup and importing ps files articles for detailed information.

Review your usage of Skip

This also impacts -Skip when you use it as -Skip:$SomeCondition. All the code in the Describe block, including your skip conditions and TestCases, will be evaluated during Discovery. Prefer static global variables, or code that is cheap to execute. It is not forbidden to put the code that figures out the skip outside of BeforeAll, but be aware that it will run on every Discovery.

This won't work: BeforeAll runs after Discovery, so $isSkipped is not defined when the -Skip value is evaluated; it ends up being $null, which converts to $false, and the test will run.

Describe "d" {
    BeforeAll {
        function Get-IsSkipped {
Start-Sleep -Seconds 1
            $true
        }
        $isSkipped = Get-IsSkipped
    }

    It "i" -Skip:$isSkipped {

    }
}

Changing the code like this will skip the test correctly, but be aware that the code will run every time Discovery is performed on that file. Depending on how you run your tests this might be every time.

function Get-IsSkipped {
    Start-Sleep -Seconds 1
    $true
}
$isSkipped = Get-IsSkipped

Describe "d" {
    It "i" -Skip:$isSkipped {

    }
}

Consider setting the check statically into a global read-only variable (much like $IsWindows), or caching the response for a while. Are you in this situation? Get in touch via the channels mentioned in Questions?.
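One way to set up such a read-only variable, sketched here with a made-up Docker availability check (the variable name and the check are assumptions, not part of Pester):

```powershell
# Run once per session (e.g. in your profile or CI bootstrap), so that
# Discovery does not pay for the check on every run.
if (-not (Get-Variable -Name DockerIsAvailable -Scope Global -ErrorAction SilentlyContinue)) {
    Set-Variable -Name DockerIsAvailable -Scope Global -Option ReadOnly `
        -Value ([bool](Get-Command -Name docker -ErrorAction SilentlyContinue))
}

Describe "d" {
    # The skip condition is now just a cheap variable lookup during Discovery.
    It "i" -Skip:(-not $global:DockerIsAvailable) {
    }
}
```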

Review your usage of TestCases

-TestCases, much like -Skip, is evaluated during Discovery and saved for later use when the test runs. This means that any expensive setup for test cases happens on every Discovery. On the other hand, you will now find the complete content of each TestCase in Data on the result test object, and you no longer need to specify a param block.
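A small sketch of the shape (Get-Emoji is a made-up function; note there is no param block inside It):

```powershell
Describe "Get-Emoji" {
    # This array is built during Discovery, so keep it cheap to construct.
    It "returns the expected emoji" -TestCases @(
        @{ Name = 'cactus';  Expected = '🌵' }
        @{ Name = 'giraffe'; Expected = '🦒' }
    ) {
        # $Name and $Expected are defined from the hashtable automatically;
        # the full hashtable is also available in Data on the result test object.
        Get-Emoji -Name $Name | Should -Be $Expected
    }
}
```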

Tags

Tags on everything

The tag parameter is now available on Describe, Context and It and it is possible to filter tags on any level. You can then use -Tag and -ExcludeTag to run just the tests that you want.

Here you can see an example of a test suite that has acceptance tests and unit tests; some of the tests are slow, some are flaky, and some only work on Linux. Pester 5 makes running all reliable acceptance tests that can run on Windows as simple as:

Invoke-Pester $path -Tag "Acceptance" -ExcludeTag "Flaky", "Slow", "LinuxOnly"
Describe "Get-Beer" {

    Context "acceptance tests" -Tag "Acceptance" {

        It "acceptance test 1" -Tag "Slow", "Flaky" {
            1 | Should -Be 1
        }

        It "acceptance test 2" {
            1 | Should -Be 1
        }

        It "acceptance test 3" -Tag "WindowsOnly" {
            1 | Should -Be 1
        }

        It "acceptance test 4" -Tag "Slow" {
            1 | Should -Be 1
        }

        It "acceptance test 5" -Tag "LinuxOnly" {
            1 | Should -Be 1
        }
    }

    Context "unit tests" {

        It "unit test 1" {
            1 | Should -Be 1
        }

        It "unit test 2" -Tag "LinuxOnly" {
            1 | Should -Be 1
        }

    }
}
Starting test discovery in 1 files.
Discovering tests in ...\real-life-tagging-scenarios.tests.ps1.
Found 7 tests. 482ms
Test discovery finished. 800ms

Running tests from '...\real-life-tagging-scenarios.tests.ps1'
Describing Get-Beer
  Context acceptance tests
      [+] acceptance test 2 50ms (29ms|20ms)
      [+] acceptance test 3 42ms (19ms|23ms)
Tests completed in 1.09s
Tests Passed: 2, Failed: 0, Skipped: 0, Total: 7, NotRun: 5

Tags use wildcards

The tags are now also compared as -like wildcards, so you don't have to spell out the whole tag if you can't remember it. This is especially useful when you are running tests locally:

Invoke-Pester $path -ExcludeTag "Accept*", "*nuxonly" | Out-Null
Starting test discovery in 1 files.
Discovering tests in ...\real-life-tagging-scenarios.tests.ps1.
Found 7 tests. 59ms
Test discovery finished. 97ms


Running tests from '...\real-life-tagging-scenarios.tests.ps1'
Describing Get-Beer
 Context Unit tests
   [+] unit test 1 15ms (7ms|8ms)
Tests completed in 269ms
Tests Passed: 1, Failed: 0, Skipped: 0, Total: 7, NotRun: 6

Logging

All the major components log extensively. I use logs as a debugging tool all the time, so I make sure the logs are usable and not overly verbose. See if you can figure out why `...

Read more

5.0.0-rc5

30 Apr 05:57
63c5736
Pre-release
  • Add debugging mocks
  • Pre-build the binaries
  • Optimize runtime
  • Move all builds to AzureDevOps

Release notes in this huge readme: https://github.com/pester/Pester/blob/v5.0/README.md

List of changes 5.0.0-rc4...5.0.0-rc5

5.0.0-rc4

20 Apr 07:04
6ceec22
Pre-release

Merges the module into a single file, taking the load time down to <1s.

Release notes in this huge readme: https://github.com/pester/Pester/blob/v5.0/README.md

List of changes 5.0.0-rc3...5.0.0-rc4

5.0.0-rc3

20 Apr 07:03
Pre-release

Adds -PassThru and -FullNameParameter, and improves the speed of discovery, of defining and asserting mocks, and of Should.
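-PassThru lets you capture the run result as an object instead of only writing it to the screen; a minimal sketch:

```powershell
# Capture the result object for programmatic inspection.
$result = Invoke-Pester -Path $path -PassThru

# For example, fail a build step when any test failed.
if ($result.FailedCount -gt 0) {
    throw "$($result.FailedCount) test(s) failed."
}
```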

Release notes in this huge readme: https://github.com/pester/Pester/blob/v5.0/README.md

List of changes 5.0.0-rc1...5.0.0-rc3

5.0.0-rc2

20 Apr 07:02
Pre-release

Broken release. Do not use.

5.0.0-rc1

05 Apr 06:18
Pre-release

Release notes are in this huge readme; the full release is expected in a week. Go try it, please :)

https://github.com/pester/Pester/blob/v5.0/README.md

5.0.0-beta

30 Apr 05:54
Pre-release

Pester v5 - beta

🙋‍ Want to share feedback? Go here

Pester 5 beta is finally here. 🥳🥳🥳 Frankly, there is more news than I am able to cover. Here are some of the best new features:

Tags

Tags on everything

The tag parameter is now available on Describe, Context and It and it is possible to filter tags on any level. You can then use -Tag and -ExcludeTag to run just the tests that you want.

Here you can see an example of a test suite that has acceptance tests and unit tests; some of the tests are slow, some are flaky, and some only work on Linux. Pester 5 makes running all reliable acceptance tests that can run on Windows as simple as:

Invoke-Pester $path -Tag "Acceptance" -ExcludeTag "Flaky", "Slow", "LinuxOnly"
Describe "Get-Beer" {

    Context "acceptance tests" -Tag "Acceptance" {

        It "acceptance test 1" -Tag "Slow", "Flaky" {
            1 | Should -Be 1
        }

        It "acceptance test 2" {
            1 | Should -Be 1
        }

        It "acceptance test 3" -Tag "WindowsOnly" {
            1 | Should -Be 1
        }

        It "acceptance test 4" -Tag "Slow" {
            1 | Should -Be 1
        }

        It "acceptance test 5" -Tag "LinuxOnly" {
            1 | Should -Be 1
        }
    }

    Context "unit tests" {

        It "unit test 1" {
            1 | Should -Be 1
        }

        It "unit test 2" -Tag "LinuxOnly" {
            1 | Should -Be 1
        }

    }
}
Starting test discovery in 1 files.
Discovering tests in ...\real-life-tagging-scenarios.tests.ps1.
Found 7 tests. 482ms
Test discovery finished. 800ms

Running tests from '...\real-life-tagging-scenarios.tests.ps1'
Describing Get-Beer
  Context acceptance tests
      [+] acceptance test 2 50ms (29ms|20ms)
      [+] acceptance test 3 42ms (19ms|23ms)
Tests completed in 1.09s
Tests Passed: 2, Failed: 0, Skipped: 0, Total: 7, NotRun: 5

Tags use wildcards

The tags are now also compared as -like wildcards, so you don't have to spell out the whole tag if you can't remember it. This is especially useful when you are running tests locally:

Invoke-Pester $path -ExcludeTag "Accept*", "*nuxonly" | Out-Null
Starting test discovery in 1 files.
Discovering tests in ...\real-life-tagging-scenarios.tests.ps1.
Found 7 tests. 59ms
Test discovery finished. 97ms


Running tests from '...\real-life-tagging-scenarios.tests.ps1'
Describing Get-Beer
 Context Unit tests
   [+] unit test 1 15ms (7ms|8ms)
Tests completed in 269ms
Tests Passed: 1, Failed: 0, Skipped: 0, Total: 7, NotRun: 6

Logging

All the major components log extensively. I use logs as a debugging tool all the time, so I make sure the logs are usable and not overly verbose. See if you can figure out why acceptance test 1 is excluded from the run, and why acceptance test 2 runs.

RuntimeFilter: (Get-Beer) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer) Block did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer) There is 'Acceptance' include tag filter.
RuntimeFilter: (Get-Beer) Block has no tags, moving to next include filter.
RuntimeFilter: (Get-Beer) Block did not match any of the include filters, but it will still be included in the run, it's children will determine if it will run.
RuntimeFilter: (Get-Beer.acceptance tests) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests) Block did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.acceptance tests) There is 'Acceptance' include tag filter.
RuntimeFilter: (Get-Beer.acceptance tests) Block is included, because it's tag 'Acceptance' matches tag filter 'Acceptance'.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 1) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 1) Test is excluded, because it's tag 'Flaky' matches exclude tag filter 'Flaky'.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 2) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 2) Test did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 2) Test is included, because its parent is included.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 3) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 3) Test did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 3) Test is included, because its parent is included.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 4) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 4) Test is excluded, because it's tag 'Slow' matches exclude tag filter 'Slow'.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 5) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.acceptance tests.acceptance test 5) Test is excluded, because it's tag 'LinuxOnly' matches exclude tag filter 'LinuxOnly'.
RuntimeFilter: (Get-Beer.Unit tests) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.Unit tests) Block did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.Unit tests) There is 'Acceptance' include tag filter.
RuntimeFilter: (Get-Beer.Unit tests) Block has no tags, moving to next include filter.
RuntimeFilter: (Get-Beer.Unit tests) Block did not match any of the include filters, but it will still be included in the run, it's children will determine if it will run.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) Test did not match the exclude tag filter, moving on to the next filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) There is 'Acceptance' include tag filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) Test has no tags, moving to next include filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 1) Test did not match any of the include filters, it will not be included in the run.
RuntimeFilter: (Get-Beer.Unit tests.unit test 2) There is 'Flaky, Slow, LinuxOnly' exclude tag filter.
RuntimeFilter: (Get-Beer.Unit tests.unit test 2) Test is excluded, because it's tag 'LinuxOnly' matches exclude tag filter 'LinuxOnly'.
RuntimeFilter: (Get-Beer.Unit tests) Block was marked as Should run based on filters, but none of its tests or tests in children blocks were marked as should run. So the block won't run.

Please be aware that the log is currently only written to the screen and not persisted in the result object, and that the logging comes with a performance penalty.

Run only what is needed

Look at the last line of the log above. It says that the block will not run, because none of the tests inside it, or inside any of its child blocks, will run. This is great, because when the block does not run, none of its setups and teardowns run either.

Invoking the code below with -ExcludeTag Acceptance will filter out all the tests in the file and there will be nothing to run. Pester5 understands that if there are no tests in the file to run, there is no point in executing the setups and teardowns in it, and so it returns almost immediately:

BeforeAll {
    Start-Sleep -Seconds 3
}

Describe "describe 1" {
    BeforeAll {
        Start-Sleep -Seconds 3
    }

    It "acceptance test 1" -Tag "Acceptance" {
        1 | Should -Be 1
    }

    AfterAll {
        Start-Sleep -Seconds 3
    }
}
Starting test discovery in 1 files.
Found 1 tests. 64ms
Test discovery finished. 158ms
Tests completed in 139ms
Tests Passed: 0, Failed: 0, Skipped: 0, Total: 1, NotRun: 1

Skip on everything

-Skip is now available on Describe and Context. This allows you to skip all the tests in that block and every child block.

Describe "describe1" {
    Context "with one skipped test" {
        It "test 1" -Skip {
            1 | Should -Be 2
        }

        It "test 2" {
            1 | Should -Be 1
        }
    }

    Describe "that is skipped" -Skip {
        It "test 3" {
            1 | Should -Be 2
        }
    }

    Context "that is skipped and has skipped test" -Skip {
        It "test 3" -Skip {
            1 | Should -Be 2
        }

        It "test 3" {
            1 | Should -Be 2
        }
    }
}
Starting test discovery in 1 files.
Found 5 tests. 117ms
Test discovery finished. 418ms
Describing describe1
 Context with one skipped test
   [!] test 1, is skipped 18ms (0ms|18ms)
   [+] test 2 52ms (29ms|22ms)
 Describing that is skipped
   [!] test 3, is skipped 12ms (0ms|12ms)
 Context that is skipped and has skipped test
   [!] test 3, is skipped 10ms (0ms|10ms)
   [!] test 3, is skipped 10ms (0ms|10ms)
Tests completed in 1.03s
Tests Passed: 1, Failed: 0, Skipped: 4, Total: 5, NotRun: 0

(Pending is translated to skipped, Inconclusive does not exist anymore. Are you relying on them extensively? Share your feedback.)

Collect all Should failures

Should can now be configured to continue on failure. This will report the error to Pester, but won't fail the test immediately. Instead, all the Should failures are collected and reported at the end of the test. This allows you to put multiple assertions into one It and still get complete information on failure.
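In current Pester 5 this is exposed through the configuration object's Should.ErrorAction setting (a sketch based on the Pester 5 README; the test path is a placeholder, and the exact interface may differ in this pre-release):

```powershell
$configuration = [PesterConfiguration]::Default
$configuration.Run.Path = './tests'              # placeholder path to your test files
$configuration.Should.ErrorAction = 'Continue'   # collect all failures instead of stopping at the first
Invoke-Pester -Configuration $configuration
```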
...

Read more

4.10.1

07 Feb 20:04
  • Fix NuGet description to not include a domain that we no longer own.