Commit

More test content
lwasser committed Sep 28, 2023
1 parent c9e7bde commit 532d87a
Showing 7 changed files with 222 additions and 186 deletions.
2 changes: 2 additions & 0 deletions ci-tests-data/index.md
@@ -11,6 +11,8 @@ But why
Intro <self>
Writing tests <tests>
Test types: unit, integration, functional <test-types>
Run tests in CI & locally <run-tests>
Package data <data>
CI <ci>
115 changes: 115 additions & 0 deletions ci-tests-data/run-tests.md
@@ -0,0 +1,115 @@
# Running your tests

## What test suite tools should I use to run tests?

We recommend using Pytest to set up testing infrastructure for your package as it is the most common testing tool used in the Python ecosystem.

The [pytest package](https://docs.pytest.org/en/latest/) also has a number of extensions (plugins) that can be used to add functionality, such as:

- [pytest-cov](https://pytest-cov.readthedocs.io/en/latest/) allows you to analyze the code coverage of your package during your tests, and generates a report that you can [upload to codecov](https://codecov.io/).

```{note}
Reference the issue with running tests in VS Code, breakpoints, and the `--no-cov` flag. Then link to a tutorial that explains how to deal with this.
```

## Running tests

**Tox**

- [Tox](https://tox.wiki/en/latest/index.html#useful-links) is an automation tool that supports common steps such as building documentation, running tests across various versions of Python, and more. You can find [a nice overview of tox in the PlasmaPy documentation](https://docs.plasmapy.org/en/stable/contributing/testing_guide.html#using-tox).

**Make**

- Some developers opt to use Make for running tests due to its versatility; it's not tied to a specific language and can be employed to orchestrate various build processes. However, Make's unique syntax and approach can make it more challenging to learn, particularly if you're not already familiar with it. Make also won't manage environments for you the way Nox does.

**Hatch**

- [Hatch](https://github.com/ofek/hatch) is a modern end-to-end packaging tool that also has a nice build backend called hatchling. Hatch offers a tox-like setup where you can run tests locally using different Python versions. If you are using hatch to support your packaging workflow, you may want to also use its testing capabilities rather than using nox.

**Nox**

- [Nox](https://nox.thea.codes/) is a Python-based automation tool that builds upon the features of both Make and Tox. Nox is designed to simplify and streamline testing and development workflows. Everything that you do with Nox can be implemented using a Python-based interface.

## Run your test suite

Your package will be used by a diverse set of users who will be running various Python versions and using various operating systems. Thus you will want to run your test suite in all of these environments to identify issues that users may have before they run into them “in the wild”.

There are two primary ways that you can run tests - locally on your computer and in your CI build. We discuss both below.

## Run tests on your computer

In this guide we recommend Nox for running your tests locally. However, you will also see packages using Tox or Hatch to create local environments with different versions of Python for you. Some packages use the more traditional Make approach; note that Make does not manage environments for you.

As discussed above, ideally you should run your tests on the various combinations of operating systems and Python versions that your users may be using.

### Why we like nox

Nox simplifies the process of creating and managing different testing environments, allowing you to check how your code behaves across various Python versions and configurations. With Nox, you can define specific test scenarios, set up virtual environments, and run tests across operating systems with a single command. Running tests locally using a tool like Nox helps you create controlled environments in which to run your tests, making sure your code works correctly on your own computer before sharing it with others.

We recommend Nox for this purpose because you can also use it to set up other types of development builds, including building your documentation, package distributions, and more. Nox also supports working with conda, venv, and other environment managers.

### Working with Nox

**Environments**

By default, Nox uses Python's built-in `venv` environment manager. A virtual environment (`venv`) is a self-contained Python environment that allows you to isolate and manage dependencies for different Python projects. It helps ensure that project-specific libraries and packages do not interfere with each other, promoting a clean and organized development environment.

Below is an example of setting up Nox to run tests in `venv` environments for Python versions 3.9, 3.10, and 3.11.

```{warning}
For the code below to work, you need to have all three versions of Python installed on your computer so that `venv` can find them.
```

```python
import nox

# For this to run you need Python 3.9, 3.10 and 3.11 installed on your
# computer. Otherwise Nox will skip running tests for any missing version.
@nox.session(python=["3.9", "3.10", "3.11"])
def test(session):
    # Install your package with all optional dependencies
    session.install(".[all]")
    # Install development requirements
    session.install("-r", "requirements-dev.txt")
    # Run the test suite
    session.run("pytest")
```
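
Once this `noxfile.py` is saved at the root of your project, you can typically run the session locally with `nox -s test`; Nox will create a `venv` for each Python version that it can find on your system and run `pytest` inside it.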

Below is an example of setting up Nox to use mamba (or conda) as the environment manager.
Note that when you are using conda or mamba, it can automatically install
the various versions of Python that you need; you won't need to install all three Python versions yourself, as you do with `venv`.

```python
import nox

# The syntax below allows you to use mamba / conda as your environment
# manager. With this approach you don't have to worry about installing
# different versions of Python yourself.
@nox.session(venv_backend="mamba", python=["3.9", "3.10", "3.11"])
def test(session):
    """Install dev requirements and run tests on Python 3.9 through 3.11."""
    # Install your package with all optional dependencies
    session.install(".[all]")
    # Install development requirements
    session.install("-r", "requirements-dev.txt")
    # Run tests using any parameters that you need
    session.run("pytest")
```

## Running tests on CI

Running your test suite locally is useful as you develop code and test new features or changes to the code base. However, you will also want to set up Continuous Integration (CI) to run your tests online. CI allows you to run all of your tests in the cloud. While locally you may only be able to run tests on the specific operating system that you use, in CI you can run tests across various versions of Python and different operating systems.

CI can also be triggered for pull requests and pushes to your repository. This means that every pull request that you, your maintainer team, or a contributor submits can be tested. In the end, CI testing ensures your code continues to run as expected even as changes are made to the code base. [Read more about CI here.](https://docs.google.com/document/d/1jmo2l5u02c_F1zZi0bAIYXeJ6HiIryJbXzsNbMQQX6o/edit#heading=h.3mx2na93o7bf)

### CI Environment

When you're ready to publish your code online, you can set up Continuous Integration (CI). A CI platform allows you to run your tests not only on various Python versions, but also on different operating systems such as Windows, Mac, and Linux. Tools like GitHub Actions and GitLab CI/CD make this easy to configure. Finally, CI can be configured to run your tests on every push and pull request to your repository. This ensures that any changes made to your package are tested across operating systems and Python versions before they are merged into the main branch of your codebase. <<tests in ci link here>>

By embracing these testing practices, you can ensure that your code runs as you expect it to across the diverse landscapes of user environments.

TODO: PR checks.
69 changes: 69 additions & 0 deletions ci-tests-data/test-types.md
@@ -0,0 +1,69 @@
# Different types of tests: Unit, Integration & Functional Tests

There are different types of tests that you want to consider when creating your
test suite:

1. Unit tests
2. Integration tests
3. Functional (also known as end-to-end) tests

Here you will learn about all three different types of tests.

### Unit Tests

A unit test in Python involves testing individual components or units of code in isolation to ensure that they work correctly. The goal of unit testing is to verify that each part of the software, typically at the function or method level, performs its intended task correctly.

Unit tests can be compared to examining each piece of your puzzle to ensure it is not broken. If the pieces of your puzzle don't fit together, you will never complete it. Similarly, when working with code, unit tests ensure that each function, attribute, class, and method works properly in isolation.

**Unit test example:** Pretend that you have a function that converts a temperature value from Celsius to Fahrenheit. A test for that function might ensure that when provided with a value in Celsius, the function returns the correct value in degrees Fahrenheit. That test is a unit test: it checks a single unit (a function) in your code.
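
Below is a minimal sketch of what such a unit test could look like using pytest. The function, values, and layout here are hypothetical and for illustration only; in a real package, the function would live in your package's code and be imported into the test module.

```python
import pytest


def celsius_to_fahrenheit(celsius):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32


def test_celsius_to_fahrenheit():
    # 0 degrees Celsius is 32 degrees Fahrenheit
    assert celsius_to_fahrenheit(0) == 32
    # 100 degrees Celsius (the boiling point of water) is 212 degrees Fahrenheit
    assert celsius_to_fahrenheit(100) == 212
    # pytest.approx avoids surprises from floating point rounding
    assert celsius_to_fahrenheit(36.6) == pytest.approx(97.88)
```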

```{figure} ../images/python-tests-puzzle.png
:height: 350px
:alt: image of two puzzle pieces with some missing parts. The puzzle pieces are purple teal yellow and blue. The shapes of each piece don’t fit together.
If puzzle pieces have missing ends, they can’t work together with other elements in the puzzle. The same is true with individual functions, methods and classes in your software. The code needs to work both individually and together to perform certain sets of tasks.
```

### Integration tests

Integration tests involve testing how parts of your package work together or integrate. Integration tests can be compared to connecting a bunch of puzzle pieces together to form a whole picture. Integration tests focus on how different pieces of your code fit and work together.

For example, imagine a workflow that collects temperature data in a spreadsheet, converts it from degrees Celsius to Fahrenheit, and then calculates an average temperature for a particular time period. An integration test would ensure that all parts of that workflow behave as expected.
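
As a sketch only, an integration test for that kind of workflow might look like the example below. The helper functions (`read_temperatures`, `celsius_to_fahrenheit`, `mean_temperature`) are hypothetical stand-ins for functions in your own package, each of which would also have its own unit tests.

```python
def read_temperatures(text):
    """Parse one Celsius reading per line from a block of text."""
    return [float(line) for line in text.splitlines() if line.strip()]


def celsius_to_fahrenheit(celsius):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32


def mean_temperature(values):
    """Return the average of a list of temperature values."""
    return sum(values) / len(values)


def test_temperature_workflow():
    """Integration test: check that the individual pieces work correctly together."""
    raw = "20.0\n25.0\n30.0\n"
    readings_c = read_temperatures(raw)
    readings_f = [celsius_to_fahrenheit(c) for c in readings_c]
    # 20, 25 and 30 degrees C convert to 68, 77 and 86 degrees F; their mean is 77 F
    assert mean_temperature(readings_f) == 77.0
```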

```{figure} ../images/python-tests-puzzle-fit.png
:height: 450px
:alt: image of puzzle pieces that all fit together nicely. The puzzle pieces are colorful - purple, green and teal.
Your integration tests should ensure that parts of your code that are expected to work
together, do so as expected.
```

### End-to-end (functional) tests

End-to-end tests (also referred to as functional tests) in Python are like comprehensive checklists for your software. They simulate real user end-to-end workflows to make sure the code base supports real life applications and use-cases from start to finish. These tests help catch issues that might not show up in smaller tests and ensure your entire application or program behaves correctly. Think of them as a way to give your software a final check before it's put into action, making sure it's ready to deliver a smooth experience to its users.
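
A hedged sketch of an end-to-end test is shown below. It assumes a hypothetical command-line entry point, run as `python -m temps`, that reads a file of Celsius readings and writes a summary file; adapt the names and options to whatever your own package actually exposes.

```python
import subprocess
import sys


def test_cli_end_to_end(tmp_path):
    """Drive the whole workflow, from input file to output file, as a user would."""
    input_file = tmp_path / "temps_celsius.csv"
    input_file.write_text("20.0\n25.0\n30.0\n")
    output_file = tmp_path / "summary.csv"

    # "temps" is a hypothetical module; replace it with your package's entry point
    result = subprocess.run(
        [sys.executable, "-m", "temps", str(input_file), "--out", str(output_file)],
        capture_output=True,
        text=True,
    )

    assert result.returncode == 0, result.stderr
    assert output_file.exists()
```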

```{note}
For scientific packages, short tutorials that highlight your package's core workflows, and that are run when your documentation is built, could also serve as end-to-end tests.
```

### Comparing unit, integration and end-to-end tests

Unit tests, integration tests, and end-to-end tests have complementary advantages and disadvantages. The fine-grained nature of unit tests makes them well-suited for isolating where errors are occurring, but not very suitable for verifying that different sections of code work together. Integration and end-to-end tests verify that the different portions of the program work together, but are less well-suited for isolating where errors are occurring. A thorough test suite should have a mixture of unit tests, integration tests, and functional tests.

## Code coverage

Code coverage is the amount of your package's codebase that is run as part of running your project's tests. A good rule of thumb is to ensure that **every line of your code is run at least once during testing**. However, note that good code coverage does not *guarantee* that your package is well-tested. For example, you may run all of your lines of code, but not account for many edge cases that users may encounter. Ultimately, you should think carefully about the way your package will be used, and decide whether your tests adequately cover all of that usage.

A common service for analyzing code coverage is [codecov.io](https://codecov.io/). This service is free for open source projects and provides dashboards that tell you how much of your codebase is covered during your tests. We recommend setting up an account and using Codecov to keep track of your code coverage.
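
As a sketch (reusing the Nox setup shown earlier in this guide; `my_package` is a hypothetical import name), a session that runs pytest with the pytest-cov plugin can produce both a terminal summary and an XML report that your CI workflow can upload to Codecov:

```python
import nox


@nox.session
def coverage(session):
    """Run the test suite and generate coverage reports."""
    session.install(".[all]", "pytest", "pytest-cov")
    # --cov-report=term-missing prints uncovered line numbers in the terminal;
    # --cov-report=xml writes coverage.xml, which can be uploaded to codecov.io.
    session.run(
        "pytest",
        "--cov=my_package",
        "--cov-report=term-missing",
        "--cov-report=xml",
    )
```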

```{figure} ../images/code-cov-stravalib.png
:height: 450px
:alt: Screenshot of the Codecov service showing test coverage for the stravalib package. In this image you can see a list of package modules and the associated number of lines and percentage of lines covered by tests. At the top of the image you can see which branch is being evaluated and the path to the repository being shown.
The Codecov platform is a useful tool if you wish to visually track code coverage. With it, you get not only the same summary information that the pytest-cov extension provides, but also a visual representation of which lines are covered by your tests and which are not. Codecov is mostly useful for evaluating unit tests and how much of your package code is "covered"; it will not evaluate things like integration tests and end-to-end workflows.
```
