Add e2e tests for login, survey and dashboard #681

Open · wants to merge 9 commits into base: staging

Conversation

@Yasaman-Behnam-Malekzadeh commented Oct 16, 2024

Adds e2e tests for the Login page, the Survey form, and the dashboard's home page.

$ cd e2e-tests
$ npx playwright test

Running 5 tests using 4 workers

  ✓  1 example.spec.ts:3:1 › has title (902ms)
  ✓  2 dashboard.spec.ts:5:2 › Dashboard › Check dashboard and Navbar (1.1s)
  ✓  3 survey.spec.ts:5:2 › Survey › Fill survey for first time (1.1s)
  ✓  4 login.spec.ts:5:2 › Login › Login with default username (1.1s)
  ✓  5 example.spec.ts:10:1 › get started link (807ms)

  5 passed (2.8s)

Summary by CodeRabbit

  • New Features

    • Introduced a reusable login function to streamline the login process for users in automated tests.
    • Added a new GitHub Actions workflow to automate testing for ZenML using Playwright, enhancing the testing process and ensuring proper environment setup.
  • Bug Fixes

    • Improved reliability of automated tests by ensuring correct initialization of the ZenML environment before executing tests.
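
For reference, the reusable helper reviewed in this thread looks roughly like this (a reconstruction from the review comments below; the submit selector and the success check are assumptions):

import { Page, expect } from "@playwright/test";

// Reusable login helper for the e2e tests: opens the local dashboard,
// signs in as the default user with an empty password, and submits.
export async function login(page: Page) {
  await page.goto("http://127.0.0.1:8237/");
  await page.fill('input[name="username"]', "default");
  await page.fill('input[name="password"]', "");
  await page.click('button[type="submit"]'); // submit selector is an assumption
  await expect(page).not.toHaveURL(/login/); // success check is an assumption
}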

@coderabbitai bot (Contributor) commented Oct 16, 2024

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.


Walkthrough

The changes include the introduction of a reusable login function in the e2e-tests/utils.ts file, which automates the login process for a web application. Additionally, a new GitHub Actions workflow file, playwright-pipeline.yml, has been created to automate testing for ZenML using Playwright. This workflow is triggered on pushes and pull requests to the main branch and includes steps for setting up the environment, installing necessary packages, and running tests.

Changes

File | Change Summary
e2e-tests/utils.ts | Added a new async function login(page: Page) for automating the login process.
.github/workflows/playwright-pipeline.yml | Introduced a new workflow for automating ZenML testing with Playwright, including setup and test execution steps.

Poem

In the burrow where code does dwell,
A login function works quite well.
With workflows set to run and play,
ZenML tests will save the day!
Hops of joy, let’s celebrate,
Automation makes our work first-rate! 🐇✨



@CLAassistant commented Oct 16, 2024

CLA assistant check
All committers have signed the CLA.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

🧹 Outside diff range and nitpick comments (8)
e2e-tests/login.spec.ts (2)

4-8: LGTM: Test suite structure is well-organized. Consider adding more test cases.

The test suite structure follows Playwright's best practices, using test.describe to group related tests. This enhances readability and organization.

While the current test case for logging in with a default username is a good start, consider adding more test cases to cover different login scenarios, such as:

  • Login with invalid credentials
  • Login with empty fields
  • Login with special characters in the username/password
  • Login attempt with a locked account

This will help ensure comprehensive coverage of the login functionality.
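
A sketch of the first of these cases (the selectors and error copy are assumptions, not the dashboard's actual markup):

import { test, expect } from "@playwright/test";

test("Login with invalid credentials shows an error", async ({ page }) => {
  await page.goto("http://127.0.0.1:8237/");
  await page.fill('input[name="username"]', "not-a-user");
  await page.fill('input[name="password"]', "wrong-password");
  await page.click('button[type="submit"]'); // selector assumed to match the login form
  // The error message text is an assumption; adjust to the real UI copy.
  await expect(page.locator("text=Invalid credentials")).toBeVisible();
});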


1-8: Summary: Good foundation for login e2e tests. Consider expanding test coverage.

This file provides a solid foundation for login e2e tests using Playwright. The structure and implementation follow good practices, promoting readability and reusability.

To further improve the test suite:

  1. Add more test cases to cover various login scenarios.
  2. Include assertions in the existing test case to verify login success.
  3. Consider testing error handling and edge cases.

These enhancements will result in a more comprehensive and robust test suite for the login functionality.

e2e-tests/utils.ts (1)

3-4: LGTM: Well-structured reusable login function.

The function signature is correct, and exporting it allows for reuse across test files. The async nature is appropriate for handling Playwright's asynchronous operations.

Consider expanding the comment to provide more context:

// Reusable login function for e2e tests
// Navigates to the login page, fills in credentials, and submits the form
e2e-tests/dashboard.spec.ts (3)

13-17: LGTM: Comprehensive navbar visibility checks.

The visibility checks for all main navbar items are thorough and well-implemented.

Minor suggestion: Consider rewording the comment on line 13 for better clarity:

-//Visible the navbar
+//Check visibility of navbar items

19-30: LGTM: Thorough URL change checks for navbar interactions.

The implementation effectively verifies that clicking each navbar item results in the expected URL change. The use of regular expressions for URL matching provides flexibility.

Consider extracting the navbar items and their corresponding URL patterns into a configuration object. This would make the test more maintainable and easier to update if new sections are added:

const navItems = [
  { name: "Pipelines", urlPattern: /\/pipelines\?tab=pipelines/ },
  { name: "Models", urlPattern: /\/models/ },
  { name: "Artifacts", urlPattern: /\/artifacts/ },
  { name: "Stacks", urlPattern: /\/stacks/ },
];

for (const item of navItems) {
  await page.click(`a:has-text("${item.name}")`);
  await expect(page).toHaveURL(item.urlPattern);
}

This approach would reduce code duplication and make it easier to add or modify navbar items in the future.


1-32: Overall: Well-structured and comprehensive dashboard test.

The test provides good coverage of basic dashboard functionality, including login, navbar visibility, and navigation. The code is well-organized and uses async/await syntax consistently.

Consider the following suggestions to further improve the test:

  1. Add assertions for page content after navigation. For example, check for the presence of expected elements on each page.
  2. Consider testing edge cases, such as rapid navigation between pages or refreshing the page after navigation.
  3. If applicable, add tests for any interactive elements on the dashboard beyond navigation (e.g., widgets, dropdowns).
  4. Consider splitting this into multiple smaller, focused test cases for better isolation and easier debugging.

Example of checking page content after navigation:

await page.click('a:has-text("Pipelines")');
await expect(page).toHaveURL(/\/pipelines\?tab=pipelines/);
await expect(page.locator('h1:has-text("Pipelines")')).toBeVisible();

These additions would enhance the robustness and coverage of your e2e tests.

e2e-tests/survey.spec.ts (2)

5-12: Consider adding an assertion or log for the early exit case.

The early exit when the survey form is not visible is a good practice. However, to improve debuggability, consider adding an assertion or log message when skipping the test.

Here's a suggested improvement:

 if (!isVisible) {
+  console.log('Survey form not visible, skipping test.');
+  test.skip();
   return;
 }

1-45: Overall assessment: Good coverage, room for improvement in robustness and maintainability.

This new end-to-end test for the survey functionality provides good coverage of the entire survey flow. However, there are several areas where the test could be improved:

  1. Add more assertions throughout the test to verify the success of each step.
  2. Use more consistent and robust selectors, preferably data-testid attributes.
  3. Extract repeated patterns into helper functions to improve maintainability.
  4. Add a final assertion to verify successful completion of the entire survey process.
  5. Consider error handling for cases where elements are not found or interactions fail.

These improvements will make the test more reliable, easier to maintain, and more informative when failures occur.

To further enhance the overall testing architecture, consider:

  1. Creating a PageObject model for the survey to encapsulate selectors and common interactions (a minimal sketch follows this list).
  2. Implementing a test data management strategy for consistent test data across runs.
  3. Setting up visual regression tests to catch unexpected UI changes.
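
A minimal PageObject sketch for the survey, reusing the selectors from the test under review (class and method names are illustrative):

import { Page } from "@playwright/test";

// Encapsulates the survey's selectors and interactions so tests call
// intent-level methods instead of repeating raw locators.
export class SurveyPage {
  constructor(private readonly page: Page) {}

  async fillAccountDetails(fullName: string, email: string) {
    await this.page.fill('input[name="fullName"]', fullName);
    await this.page.fill('input[name="email"]', email);
  }

  async choose(option: string) {
    await this.page.check(`label:has-text("${option}")`);
  }

  async clickContinue() {
    await this.page.click('button span:has-text("Continue")');
  }
}

Tests would then construct new SurveyPage(page) once and call these methods, keeping selector churn confined to one file.
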
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between aacc566 and 6861c6a.

📒 Files selected for processing (4)
  • e2e-tests/dashboard.spec.ts (1 hunks)
  • e2e-tests/login.spec.ts (1 hunks)
  • e2e-tests/survey.spec.ts (1 hunks)
  • e2e-tests/utils.ts (1 hunks)
🧰 Additional context used
🔇 Additional comments (5)
e2e-tests/login.spec.ts (1)

1-2: LGTM: Imports are correct and follow good practices.

The imports are appropriate for the test implementation. Importing test from Playwright is standard, and using a separate login utility function promotes code reusability across tests.

e2e-tests/utils.ts (2)

1-1: LGTM: Correct imports for Playwright test utilities.

The import statement correctly brings in the necessary Playwright test utilities (Page and expect) that are used in the login function.


5-9: ⚠️ Potential issue

Improve robustness and flexibility of the login function.

The login function implementation is logical, but there are several areas for improvement:

  1. The hardcoded URL might not work across different environments.
  2. Using an empty password could be a security concern or a specific test case.
  3. The selectors used for locating elements are fragile and might break if the UI changes.

Consider the following improvements:

  1. Use an environment variable or configuration file for the base URL:

    await page.goto(process.env.BASE_URL || "http://127.0.0.1:8237/");
  2. Make the username and password configurable:

    export async function login(page: Page, username = "default", password = "") {
      // ...
      await page.fill('input[name="username"]', username);
      await page.fill('input[name="password"]', password);
      // ...
    }
  3. Use more robust selectors, preferably data-testid attributes:

    await expect(page.locator('[data-testid="login-header"]')).toBeVisible();
    await page.fill('[data-testid="username-input"]', username);
    await page.fill('[data-testid="password-input"]', password);
    await page.click('[data-testid="login-button"]');

To ensure the login function works correctly across different scenarios, consider adding the following test cases:
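
For example, assuming the configurable signature suggested above (a hypothetical sketch):

import { test } from "@playwright/test";
import { login } from "./utils";

test("login succeeds with the default user", async ({ page }) => {
  await login(page); // falls back to username "default" and an empty password
});

test("login accepts explicit credentials", async ({ page }) => {
  // Relies on the configurable parameters proposed in point 2 above.
  await login(page, "default", "");
});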

e2e-tests/dashboard.spec.ts (1)

1-5: LGTM: Imports and test structure look good.

The imports are appropriate, and the test structure is well-organized. The use of a custom login function from utils.ts is a good practice for code reusability.

e2e-tests/survey.spec.ts (1)

1-4: LGTM: Imports and test structure look good.

The imports are appropriate, and using a separate login utility function promotes code reuse across tests.

Comment on lines 5 to 7
test("Login with default username", async ({ page }) => {
  await login(page);
});

⚠️ Potential issue

LGTM: Test case implementation is concise. Consider adding assertions.

The test case structure is correct, using an async function and the page object as per Playwright's conventions. Utilizing the login utility function promotes code reusability and keeps the test case concise.

However, the current implementation lacks assertions to verify the success of the login process. Consider adding assertions to ensure the login was successful, such as:

test("Login with default username", async ({ page }) => {
  await login(page);
  // Add assertions here, for example:
  await expect(page).toHaveURL('/dashboard'); // Assuming successful login redirects to dashboard
  await expect(page.locator('text=Welcome, User')).toBeVisible(); // Assuming there's a welcome message
});

This will make the test more robust by verifying the expected outcome of the login process.

Comment on lines 6 to 12
await login(page);

const overviewLink = await page.locator('a:has-text("Overview")').isVisible();

if (!overviewLink) {
  return;
}

⚠️ Potential issue

Consider failing the test if "Overview" link is not visible.

The current implementation silently passes the test if the "Overview" link is not visible. This might hide potential issues with the dashboard rendering.

Consider modifying the code to fail the test explicitly:

 if (!overviewLink) {
-  return;
+  throw new Error("Overview link is not visible. Dashboard might not have loaded correctly.");
 }

This change will make the test fail with a clear error message if the dashboard doesn't load as expected.


Comment on lines 14 to 26
//survey form - Step 1
await page.fill('input[name="fullName"]', "test");
await page.fill('input[name="email"]', "[email protected]");
await page
  .getByLabel("I want to receive news and recommendations about how to use ZenML")
  .check();
await page.click('button span:has-text("Continue")');
await page.waitForSelector("text=What will be your primary use for ZenML?");

//survey form - Step 2
await page.click('div label:has-text("Personal")');
await page.click('button span:has-text("Continue")');
await page.waitForSelector("text=What best describes your current situation with ZenML?");

🛠️ Refactor suggestion

Enhance test robustness and verifiability for survey steps 1 and 2.

While the test covers the necessary interactions, consider the following improvements:

  1. Add assertions after each step to verify successful completion.
  2. Use more robust selectors, preferably data-testid attributes, to make the test less brittle to UI changes.
  3. Consider extracting repeated patterns (like clicking "Continue") into helper functions.

Here's an example of how you might improve step 1:

// Helper function
async function clickContinue(page) {
  await page.click('[data-testid="continue-button"]');
}

// In the test
// Step 1
await page.fill('[data-testid="fullName-input"]', "test");
await page.fill('[data-testid="email-input"]', "[email protected]");
await page.check('[data-testid="news-checkbox"]');
await clickContinue(page);
await expect(page.locator('[data-testid="step2-title"]')).toBeVisible();

Comment on lines 28 to 40
//survey form - Step 3
await page.check('label:has-text("I\'m new to MLOps and exploring")');
await page.click('button span:has-text("Continue")');
await page.waitForSelector("text=What is your current infrastructure?");

//survey form - Step 4
await page.check('label:has-text("GCP") button');
await page.check('label:has-text("Azure")');
await page.check('label:has-text("Openshift")');
await page.check('label:has-text("AWS")');
await page.check('label:has-text("Native Kubernetes")');
await page.click('button span:has-text("Continue")');
await page.waitForSelector("text=Join The ZenML Slack Community");

🛠️ Refactor suggestion

Improve robustness and add assertions for survey steps 3 and 4.

The test covers the necessary interactions, but could be improved:

  1. Add assertions after each step to verify successful completion.
  2. Use more consistent and robust selectors, preferably data-testid attributes.
  3. Consider creating a helper function for checking multiple options in step 4.

Here's an example of how you might improve step 4:

// Helper function
async function checkMultipleOptions(page, options) {
  for (const option of options) {
    await page.check(`[data-testid="${option}-checkbox"]`);
  }
}

// In the test
// Step 4
const infrastructureOptions = ['GCP', 'Azure', 'Openshift', 'AWS', 'Native-Kubernetes'];
await checkMultipleOptions(page, infrastructureOptions);
await clickContinue(page);
await expect(page.locator('[data-testid="step5-title"]')).toBeVisible();

Comment on lines 42 to 45
//survey form - Step 5
await page.click('button span:has-text("Skip")');
});
});

🛠️ Refactor suggestion

Add final assertions to verify survey completion.

The test concludes by clicking the "Skip" button, but it doesn't verify the final state. Consider the following improvements:

  1. Add an assertion to verify that the survey process has completed successfully.
  2. Check for any elements that should be visible after survey completion (e.g., dashboard elements).

Here's a suggested improvement:

// Final step
await page.click('[data-testid="skip-button"]');

// Verify survey completion
await expect(page.locator('[data-testid="survey-completion-message"]')).toBeVisible();

// Verify redirection to dashboard or relevant post-survey page
await expect(page.locator('[data-testid="dashboard-title"]')).toBeVisible();

@Cahllagerfeld (Contributor) commented Oct 17, 2024

Wow, nice contribution 👍
To make the tests run in a CI environment we need to make sure that ZenML is actually running.
I will take this discussion to a colleague on how to set this up properly, and will get back to this PR 👍

There are 2 challenges I currently see:

  • Inside the GitHub Action we need to make sure that ZenML runs properly and has some meaningful data, otherwise all tables will just be empty. We could run our quickstart to seed the database (one possible setup is sketched below).
  • The tests need to run in a fixed order: the survey test needs to run before all others, for example, as otherwise it will always trigger.
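
One Playwright-native option for the first challenge is the webServer setting in playwright.config.ts, which starts a server before the suite and waits for it to respond (a sketch; the command, flag, and port are assumptions about the local ZenML setup):

import { defineConfig } from "@playwright/test";

export default defineConfig({
  webServer: {
    // Start ZenML before the tests run; a foreground/blocking command is
    // needed so Playwright can own the process (flag is an assumption).
    command: "zenml up --blocking",
    url: "http://127.0.0.1:8237",
    reuseExistingServer: true,
    timeout: 120_000, // allow time for the server to come up
  },
});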

@Cahllagerfeld (Contributor) commented:

Okay, I just had some conversations with some backend developers, and here's what they suggested.

  • Inside a GitHub Action we can either install ZenML from pip or check out the dev branch directly.
  • We create several pipelines for our test cases inside this repo, and let them run against the ZenML instance.
  • After that we can run zenml up and proceed with our tests from there.

What do you think about this?
Let me know if you want to follow up on your changes, or whether you want us to jump in on it 🙏

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (9)
e2e-tests/01-login.spec.ts (1)

4-8: LGTM: Test suite structure is correct. Consider future expansions.

The test suite structure follows Playwright's best practices and is well-organized. The suite name "Login" accurately describes the functionality being tested.

As the login functionality evolves, consider adding more test cases to cover different scenarios, such as:

  • Login with invalid credentials
  • Login with empty fields
  • Password reset functionality (if applicable)
  • Remember me functionality (if applicable)
    This will ensure comprehensive coverage of the login feature.
e2e-tests/02-dashboard.spec.ts (3)

4-5: LGTM: Well-structured test suite and case.

The test suite and case are correctly defined using Playwright's syntax. The test case name is descriptive, which is good for readability and maintenance.

Consider adding more specific test cases instead of having a single large test. This would improve test isolation and make it easier to identify which specific functionality fails if an issue occurs.
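
For instance, the login could move into a beforeEach hook and each concern become its own test (a sketch built on the existing helper and the selectors reviewed below):

import { test, expect } from "@playwright/test";
import { login } from "./utils";

test.describe("Dashboard", () => {
  test.beforeEach(async ({ page }) => {
    await login(page); // shared setup instead of repeating it per test
  });

  test("shows the Overview link", async ({ page }) => {
    await expect(page.locator('a:has-text("Overview")')).toBeVisible();
  });

  test("navigates to Pipelines", async ({ page }) => {
    await page.click('a:has-text("Pipelines")');
    await expect(page).toHaveURL(/\/pipelines\?tab=pipelines/);
  });
});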


14-30: LGTM: Comprehensive checks for navbar items and URL changes.

The test thoroughly verifies the visibility of navbar items and correctly checks URL changes after clicking each item. The use of regex for URL validation is appropriate.

Consider adding a small delay or wait condition before each URL check to ensure the page has had time to update. For example:

await page.waitForURL(/\/pipelines\?tab=pipelines/);
await expect(page).toHaveURL(/\/pipelines\?tab=pipelines/);

This can help prevent potential race conditions and make the test more robust.


1-32: Overall, well-implemented e2e test for the dashboard.

This test file provides a good foundation for testing the dashboard functionality. It covers login, navbar visibility, and navigation between different sections. The code is clean and easy to understand.

To further improve the test suite:

  1. Consider breaking down the large test case into smaller, more focused tests.
  2. Implement explicit assertions instead of early returns for critical elements.
  3. Add wait conditions before URL checks to prevent potential race conditions.

These changes will enhance the test's reliability and make it easier to maintain and debug in the future.

As you continue to develop your e2e test suite, consider implementing a Page Object Model (POM) pattern. This will help encapsulate page-specific selectors and actions, making your tests more maintainable and easier to update when the UI changes.

e2e-tests/00-survey.spec.ts (5)

5-12: LGTM: Good test setup and early exit strategy.

The test case is well-structured with a clear title and efficient login process. The early exit strategy based on the visibility of the account details prompt is a good practice.

Consider adding an explicit assertion or log message when skipping the test due to the prompt not being visible. This would improve test clarity and debugging:

if (!isVisible) {
  console.log('Account details prompt not visible. Skipping survey test.');
  return;
}

14-21: LGTM: Survey step 1 is well-implemented.

The implementation of the first survey step is clear and follows best practices for form interaction and waiting for elements.

Consider extracting test data (like name and email) into constants or a configuration file. This would make the test more maintainable and allow for easy data changes:

const TEST_DATA = {
  fullName: "test",
  email: "[email protected]"
};

await page.fill('input[name="fullName"]', TEST_DATA.fullName);
await page.fill('input[name="email"]', TEST_DATA.email);

23-31: LGTM: Survey steps 2 and 3 are consistently implemented.

The implementation of steps 2 and 3 follows the same pattern as step 1, maintaining consistency and using appropriate waits between steps.

Consider creating constants for frequently used selectors. This would improve maintainability and reduce the risk of typos:

const SELECTORS = {
  continueButton: 'button span:has-text("Continue")',
  // Add other common selectors here
};

await page.click(SELECTORS.continueButton);

33-40: LGTM: Survey step 4 thoroughly tests multiple options.

The implementation of step 4 is consistent with previous steps and demonstrates good coverage by checking multiple infrastructure options.

Consider refactoring the repeated checks into a loop to reduce code duplication and improve maintainability:

const infrastructureOptions = ['GCP', 'Azure', 'Openshift', 'AWS', 'Native Kubernetes'];
for (const option of infrastructureOptions) {
  await page.check(`label:has-text("${option}")`);
}
await page.click(SELECTORS.continueButton);

1-45: Overall, well-implemented e2e test with room for enhancements.

This end-to-end test for the survey form is comprehensive and follows many good practices. It effectively simulates user interactions across multiple steps of the survey. Here are some suggestions for future improvements:

  1. Extract test data and selectors into separate constants or configuration files.
  2. Implement helper functions for repetitive tasks like checking multiple options.
  3. Add more assertions throughout the test, especially at the end, to verify the survey completion.
  4. Consider adding error handling and logging for better debugging.
  5. If possible, add negative test cases or edge cases to increase test coverage.

These enhancements would further improve the maintainability, readability, and robustness of the test suite.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 6861c6a and bbfc98d.

📒 Files selected for processing (3)
  • e2e-tests/00-survey.spec.ts (1 hunks)
  • e2e-tests/01-login.spec.ts (1 hunks)
  • e2e-tests/02-dashboard.spec.ts (1 hunks)
🧰 Additional context used
🔇 Additional comments (5)
e2e-tests/01-login.spec.ts (2)

1-2: LGTM: Imports are correct and well-structured.

The imports are appropriate for the test implementation. The use of a custom login function from a separate utils file is a good practice, promoting code reusability and separation of concerns.


5-7: LGTM: Test case implementation is concise. Verify login utility function.

The test case is well-structured and uses the login utility function, which promotes code reuse. The async function and use of the page object are correct for Playwright tests.

To ensure the login function is implemented correctly, let's verify its contents:

This will help us confirm that the login function is properly implemented and includes all necessary steps for the login process.

✅ Verification successful

Enhance Verification of login Utility Function

The previous attempt to search for the login function using rg with the --type typescript flag failed. Let's adjust the commands to ensure we accurately locate and verify the login function implementation.


Login Utility Function Verified

The login function in e2e-tests/utils.ts is correctly implemented, performing all necessary steps to execute a login with the default username.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the implementation of the login utility function

# Test: Check the contents of the utils.ts file
cat e2e-tests/utils.ts

# Test: Search for the login function definition
rg --type typescript 'export async function login' e2e-tests/

Length of output: 527


Script:

#!/bin/bash
# Description: Refined verification of the login utility function without specifying file type

# Test: Search for the login function definition without specifying the file type
rg 'export async function login' e2e-tests/

# Alternatively, using ast-grep for a more precise search
ast-grep --lang typescript --pattern $'export async function login($_) { $$$ }' e2e-tests/

Length of output: 673

e2e-tests/02-dashboard.spec.ts (1)

1-2: LGTM: Imports are correct and well-organized.

The import statements are appropriate for the test file. Using a custom login function from utils is a good practice for code reusability and separation of concerns.

e2e-tests/00-survey.spec.ts (2)

1-4: LGTM: Imports and test structure are well-organized.

The imports and test structure follow Playwright's best practices. The use of a separate login utility function promotes code reusability across test files.


41-45: LGTM: Survey step 5 completes the test flow.

The implementation of the final step is consistent with the previous steps and completes the survey flow.

Ensure that skipping the last step is the intended behavior. If it is, consider adding a comment explaining why this step is skipped.

Consider adding final assertions to verify the survey completion:

await page.click('button span:has-text("Skip")');
// Add assertions to verify survey completion
await expect(page.locator('text=Survey Completed')).toBeVisible();
// Or verify redirection to a specific page
await expect(page).toHaveURL(/.*dashboard/);

Comment on lines +6 to +12
await login(page);

const overviewLink = await page.locator('a:has-text("Overview")').isVisible();

if (!overviewLink) {
  return;
}

🛠️ Refactor suggestion

Consider using an assertion instead of an early return for the "Overview" link.

While checking for the "Overview" link visibility is a good practice, using an early return might lead to silently skipping important test assertions if the link is not visible.

Consider replacing the visibility check and early return with an assertion:

await expect(page.locator('a:has-text("Overview")')).toBeVisible({ timeout: 5000 });

This change will make the test fail explicitly if the "Overview" link is not visible, providing clearer feedback about the application's state.

@Yasaman-Behnam-Malekzadeh (Author) commented:

  • The tests need to run in a fixed order: the survey test needs to run before all others, for example, as otherwise it will always trigger.

According to this doc https://playwright.dev/docs/test-parallel#disable-parallelism, adding the --workers=1 flag and renaming the files in alphabetical order is enough to make them run sequentially.
I've renamed the files. Here is the result:

npx playwright test --workers=1

Running 5 tests using 1 worker

  ✓  1 00-survey.spec.ts:5:2 › Survey › Fill survey for first time (607ms)
  ✓  2 01-login.spec.ts:5:2 › Login › Login with default username (534ms)
  ✓  3 02-dashboard.spec.ts:5:2 › Dashboard › Check dashboard and Navbar (325ms)
  ✓  4 03-example.spec.ts:3:1 › has title (505ms)
  ✓  5 03-example.spec.ts:10:1 › get started link (707ms)
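
The same constraint can also be pinned in playwright.config.ts so the flag isn't needed on every invocation (a sketch):

import { defineConfig } from "@playwright/test";

export default defineConfig({
  fullyParallel: false, // keep tests within a file in declaration order
  workers: 1, // run spec files one at a time, in alphabetical file order
});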

Let me know if you want to follow up on your changes, or whether you want us to jump in on it

I don't have much experience with CI, but I can give it a try. Can you share some examples?

@Cahllagerfeld (Contributor) commented:

Nice!

A GitHub Action for installing Python and ZenML can look like this:

name: Install ZenML

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  install_zenml:
    runs-on: ubuntu-latest

    steps:
      # Step 1: Checkout the repository
      - name: Checkout repository
        uses: actions/checkout@v3

      # Step 2: Set up Python 3.12
      - name: Set up Python 3.12
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"

      # Step 3: Install ZenML globally
      - name: Install ZenML
        run: |
          pip install --upgrade pip
          pip install zenml

      # Step 4: Verify installation (optional)
      - name: Verify ZenML Installation
        run: zenml version

As for running the pipeline, I'd loop in a machine learning engineer to help with setting up valid test cases.

@Yasaman-Behnam-Malekzadeh (Author) commented:

A GitHub Action for installing Python and ZenML can look like this:

Thanks for sharing an example of GitHub Actions. I've created a file named playwright-pipeline.yml and added my lines along with the example you shared.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (3)
.github/workflows/playwright-pipeline.yml (3)

1-9: LGTM with a minor formatting suggestion.

The workflow name and trigger events are correctly defined. However, there's a minor formatting issue on line 1.

Consider removing the extra space before the colon on line 1:

-name : ZenML Playwright test pipelines
+name: ZenML Playwright test pipelines
🧰 Tools
🪛 yamllint

[warning] 1-1: too many spaces before colon

(colons)


[error] 3-3: trailing spaces

(trailing-spaces)


24-32: LGTM! ZenML setup looks good, with a suggestion.

The ZenML installation and initialization steps are correctly defined. However, consider the following suggestion:

You might want to add a step to configure ZenML with any specific settings required for your tests. For example:

      - name: Configure ZenML
        run: |
          zenml config set <key> <value>

This would ensure that ZenML is set up consistently for your test environment.

🧰 Tools
🪛 yamllint

[error] 28-28: trailing spaces

(trailing-spaces)


34-39: LGTM! Test execution and verification look good, with minor formatting suggestions.

The Playwright test execution and ZenML verification steps are correctly defined. The use of --workers=1 ensures sequential test execution, which aligns with the PR objectives.

Consider removing trailing spaces from lines 36 and 39 to improve code formatting:

-      - name: Run Playwright Tests
-        run: npx playwright test --workers=1
-      
-      - name: Verify ZenML Installation
-        run: zenml version
-  
+      - name: Run Playwright Tests
+        run: npx playwright test --workers=1
+
+      - name: Verify ZenML Installation
+        run: zenml version
🧰 Tools
🪛 yamllint

[error] 36-36: trailing spaces

(trailing-spaces)


[error] 39-39: trailing spaces

(trailing-spaces)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between bbfc98d and 3c7eb78.

📒 Files selected for processing (1)
  • .github/workflows/playwright-pipeline.yml (1 hunks)
🧰 Additional context used
🪛 actionlint
.github/workflows/playwright-pipeline.yml

13-13: label "ubunto-latest" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-14.0", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-13.0", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "macos-12.0", "macos-11", "macos-11.0", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)

🪛 yamllint
.github/workflows/playwright-pipeline.yml

[warning] 1-1: too many spaces before colon

(colons)


[error] 3-3: trailing spaces

(trailing-spaces)


[error] 18-18: trailing spaces

(trailing-spaces)


[error] 23-23: trailing spaces

(trailing-spaces)


[error] 28-28: trailing spaces

(trailing-spaces)


[error] 33-33: trailing spaces

(trailing-spaces)


[error] 36-36: trailing spaces

(trailing-spaces)


[error] 39-39: trailing spaces

(trailing-spaces)

🔇 Additional comments (1)
.github/workflows/playwright-pipeline.yml (1)

15-22: LGTM! Checkout and Python setup look good.

The repository checkout and Python setup steps are correctly defined and use appropriate versions of the actions.

🧰 Tools
🪛 yamllint

[error] 18-18: trailing spaces

(trailing-spaces)

Comment on lines 11 to 13
jobs:
  run_zenml_tests:
    runs-on: ubunto-latest

⚠️ Potential issue

Fix the runner specification typo.

There's a critical typo in the runner specification that needs to be corrected.

Please apply the following change:

-    runs-on: ubunto-latest
+    runs-on: ubuntu-latest

This will ensure that the workflow runs on the latest Ubuntu runner provided by GitHub Actions.

🧰 Tools
🪛 actionlint

13-13: label "ubunto-latest" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-14.0", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-13.0", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "macos-12.0", "macos-11", "macos-11.0", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
.github/workflows/playwright-pipeline.yml (3)

1-9: LGTM with a minor formatting suggestion.

The workflow name and trigger events are correctly defined. However, there's a minor formatting issue on line 1.

Consider applying this change to fix the formatting:

-name : ZenML Playwright test pipelines
+name: ZenML Playwright test pipelines
🧰 Tools
🪛 yamllint

[warning] 1-1: too many spaces before colon

(colons)


[error] 3-3: trailing spaces

(trailing-spaces)


15-28: LGTM with a suggestion for improvement.

The steps for setting up the environment are well-defined. However, consider updating the checkout action to the latest version.

Apply this change to use the latest checkout action:

-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
🧰 Tools
🪛 yamllint

[error] 18-18: trailing spaces

(trailing-spaces)


[error] 23-23: trailing spaces

(trailing-spaces)


[error] 28-28: trailing spaces

(trailing-spaces)


1-39: Overall, the workflow is well-implemented with minor formatting suggestions.

The GitHub Actions workflow effectively addresses the requirements mentioned in the PR objectives. It sets up the environment, runs Playwright tests sequentially, and verifies the ZenML installation.

To improve code quality:

Consider removing trailing spaces at the end of lines 3, 18, 23, 28, 33, 36, and 39. You can use an editor with a trailing space removal feature or run a command like sed -i 's/[[:space:]]*$//' .github/workflows/playwright-pipeline.yml to remove them automatically.

🧰 Tools
🪛 yamllint

[warning] 1-1: too many spaces before colon

(colons)


[error] 3-3: trailing spaces

(trailing-spaces)


[error] 18-18: trailing spaces

(trailing-spaces)


[error] 23-23: trailing spaces

(trailing-spaces)


[error] 28-28: trailing spaces

(trailing-spaces)


[error] 33-33: trailing spaces

(trailing-spaces)


[error] 36-36: trailing spaces

(trailing-spaces)


[error] 39-39: trailing spaces

(trailing-spaces)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 3c7eb78 and 4ae903e.

📒 Files selected for processing (1)
  • .github/workflows/playwright-pipeline.yml (1 hunks)
🧰 Additional context used
🪛 yamllint
.github/workflows/playwright-pipeline.yml

[warning] 1-1: too many spaces before colon

(colons)


[error] 3-3: trailing spaces

(trailing-spaces)


[error] 18-18: trailing spaces

(trailing-spaces)


[error] 23-23: trailing spaces

(trailing-spaces)


[error] 28-28: trailing spaces

(trailing-spaces)


[error] 33-33: trailing spaces

(trailing-spaces)


[error] 36-36: trailing spaces

(trailing-spaces)


[error] 39-39: trailing spaces

(trailing-spaces)

🔇 Additional comments (3)
.github/workflows/playwright-pipeline.yml (3)

11-13: LGTM! Runner specification issue resolved.

The job definition and runner specification are correct. The previously reported typo in the runner specification has been addressed.


29-35: LGTM! ZenML initialization and test execution are well-configured.

The steps for initializing ZenML and running Playwright tests are correctly implemented. The use of --workers=1 ensures sequential test execution, addressing the concern raised in the PR comments.

🧰 Tools
🪛 yamllint

[error] 33-33: trailing spaces

(trailing-spaces)


37-38: LGTM! ZenML installation verification is a good practice.

The final step to verify the ZenML installation by checking its version is a good practice. It ensures that ZenML is correctly installed and available in the environment.

@Cahllagerfeld changed the base branch from main to staging on October 23, 2024 at 13:03
@Cahllagerfeld (Contributor) commented Oct 23, 2024

I just pushed a pipeline that we can use to create some data in the server to test against.

The action just ran for the first time, and I think there are three changes that we need to make. We also need to disable analytics by setting ZENML_ANALYTICS_OPT_IN to false in the environment.

  1. We need to install zenml[server] so we can use zenml up.
  2. There is already a second Playwright GitHub Action. I think we need to merge the one I sent earlier with this one, so we can set up all the Playwright dependencies as well.
  3. The order of the steps:
    In pseudocode it would look like this
    1. Install Playwright and its dependencies
    2. Install ZenML
    3. Run the fixture
    4. ZenML Up
    5. Run the actual tests

@Yasaman-Behnam-Malekzadeh (Author) commented:

I just pushed a pipeline that we can use to create some data in the server to test against. [...]

I've merged the two files and added some steps according to the changes you mentioned.

@Cahllagerfeld (Contributor) commented:

We successfully ran the first GitHub Action with everything passing 🎉.

If you want, you can investigate this issue we get in the upload-artifact step (screenshot of the workflow warning omitted). It looks like a configuration issue with the Playwright reports.
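
If it helps, one thing worth checking is that the reporter output folder in playwright.config.ts matches the path the upload-artifact step expects (a sketch; "playwright-report" is Playwright's default folder, and the workflow path is an assumption):

import { defineConfig } from "@playwright/test";

export default defineConfig({
  // The workflow's upload-artifact "path" must point at this same folder.
  reporter: [["html", { outputFolder: "playwright-report", open: "never" }]],
});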

@Cahllagerfeld (Contributor) commented:

Just a small check-in to see if everything is fine, or if you want us to take it over from here 👍
Let me know if you need further assistance 👍

@Yasaman-Behnam-Malekzadeh (Author) commented:

Just a small check-in to see if everything is fine, or if you want us to take it over from here 👍 Let me know if you need further assistance 👍

Thank you for checking in! I'd prefer it if you could take it over from here.
Appreciate the support! 👍
