Conversation

@fivetran-sushmitha

Jira ticket

Closes <ADD TICKET LINK HERE, EACH PR MUST BE LINKED TO A JIRA TICKET>

Description of Change

<MENTION A SHORT DESCRIPTION OF YOUR CHANGES HERE>

Testing

<MENTION YOUR TESTING DETAILS HERE; ATTACH SCREENSHOTS IF NEEDED (WITHOUT PII)>

Checklist

Some tips and links to help validate your PR:

  • Tested the connector with the `fivetran debug` command.
  • Added/updated the example-specific README.md file; refer here for the template.
  • Followed the Python Coding Standards; refer here.

@fivetran-sushmitha fivetran-sushmitha requested review from a team as code owners October 31, 2025 18:15
Copilot AI review requested due to automatic review settings October 31, 2025 18:15
@github-actions github-actions bot added the size/XL PR size: extra large label Oct 31, 2025
@github-actions

🧹 Python Code Quality Check

⚠️ Flake8 has detected issues; please fix them before merging:

📎 Download full report from workflow artifacts.

📌 Only Python files changed in this PR were checked.

🔍 See how this check works

This comment is auto-updated with every commit.

@fivetran-sushmitha fivetran-sushmitha added the hackathon For all the PRs related to the internal Fivetran 2025 Connector SDK Hackathon. label Oct 31, 2025

Copilot AI left a comment


Pull Request Overview

This PR adds a new Refiner Survey Analytics Connector that integrates with the Refiner API to sync NPS survey data, responses, and user information into Fivetran destinations. The connector implements incremental syncs with cursor-based pagination, proper error handling, and comprehensive state management.

Key changes:

  • New connector implementation with 5 normalized tables (surveys, questions, responses, answers, respondents)
  • Incremental sync support based on last_data_reception_at timestamps with configurable start dates (see the sketch after this list)
  • Comprehensive error handling with exponential backoff and proper retry logic for transient failures
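
For readers skimming the review, the incremental-cursor pattern the overview describes generally reduces to something like the sketch below. The helper name is hypothetical; only `start_date`, `last_response_sync`, and `date_range_start` are taken from the PR's README, and everything else is illustrative.

```python
# Illustrative sketch, not the PR's exact implementation.
DEFAULT_START_DATE = "2024-01-01T00:00:00Z"  # assumed default; the real value lives in the connector

def build_response_params(configuration: dict, state: dict) -> dict:
    # Resume from the last successful response sync, falling back to the configured start date.
    start_date = configuration.get("start_date", DEFAULT_START_DATE)
    cursor = state.get("last_response_sync", start_date)
    # Per the README, the Refiner responses endpoint accepts date_range_start for server-side filtering.
    return {"date_range_start": cursor}
```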

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.

| File | Description |
| --- | --- |
| connectors/refiner/connector.py | Main connector implementation with API integration, pagination, state management, and data transformation logic |
| connectors/refiner/configuration.json | Configuration template with API key and optional start date parameters |
| connectors/refiner/README.md | Complete documentation covering setup, features, authentication, data handling, and table schemas |

        if date_string.endswith("Z"):
            date_string = date_string[:-1] + "+00:00"
        return datetime.fromisoformat(date_string).astimezone(timezone.utc)
    except Exception as e:

Copilot AI Oct 31, 2025


This catches a generic Exception rather than a specific exception type. Replace it with a specific exception such as ValueError, since datetime.fromisoformat() raises ValueError for invalid formats.

Suggested change
except Exception as e:
except ValueError as e:

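A minimal sketch of the narrower handler being suggested, with the helper name assumed; `datetime.fromisoformat()` only raises `ValueError` on malformed input, so nothing broader needs to be caught:

```python
from datetime import datetime, timezone

def parse_iso_timestamp(date_string: str) -> datetime:
    # fromisoformat() on older Python versions does not accept a trailing "Z", so normalize it first.
    if date_string.endswith("Z"):
        date_string = date_string[:-1] + "+00:00"
    try:
        return datetime.fromisoformat(date_string).astimezone(timezone.utc)
    except ValueError as e:
        # Only ValueError is expected here; anything else should propagate unchanged.
        raise ValueError(f"Invalid ISO-8601 timestamp: {date_string!r}") from e
```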

    except Exception as e:
        log.severe(f"Sync failed: {e}")
        raise RuntimeError(f"Sync failed: {str(e)}")

Copilot AI Oct 31, 2025


Catching a generic Exception is too broad for the top-level error handler. Consider catching specific exception types or, at a minimum, re-raising without wrapping to preserve the original exception type and stack trace.

Suggested change
raise RuntimeError(f"Sync failed: {str(e)}")
raise

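To illustrate the difference, a bare `raise` keeps the original exception type and traceback, whereas wrapping should at least chain the cause. The sync call below is a stand-in, not the connector's real function:

```python
def run_sync():
    # Stand-in for the connector's sync steps.
    raise ValueError("simulated failure inside a sync step")

try:
    run_sync()
except Exception as e:
    print(f"Sync failed: {e}")  # the connector uses log.severe here
    raise  # re-raises the original ValueError with its traceback intact

# If wrapping is preferred, chaining keeps the original traceback attached:
#     raise RuntimeError(f"Sync failed: {e}") from e
```
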
- **Type safety** - Configuration validation ensures required fields exist before processing

### Key functions
- `validate_configuration()` - Validates required API key configuration

Copilot AI Oct 31, 2025


The documentation references validate_configuration() function which should not exist according to SDK v2+ guidelines (automatic validation when configuration.json is present). Remove this reference or clarify if there's custom validation logic beyond what SDK provides.

Suggested change
- `validate_configuration()` - Validates required API key configuration

Comment on lines +181 to +202
The examples provided are intended to help you effectively use Fivetran's Connector SDK. While we've tested the code, Fivetran cannot be held responsible for any unexpected or negative consequences that may arise from using these examples. For inquiries, please reach out to our Support team.

---

## Business value and revenue impact

This Refiner connector directly helps Fivetran customers unlock survey analytics at scale:

### Customer feedback analytics
- **Problem** - Companies rely on manual CSV exports to analyze survey responses.
- **Solution** - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
- **Revenue impact** - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.

### User-level insights
- **Problem** - Product and growth teams cannot easily correlate survey responses with user behavior data.
- **Solution** - Connector enables joining survey data with product usage in the warehouse via user ID.
- **Revenue impact** - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.

### Operational efficiency
- **Problem** - Manual exports are time-consuming and error-prone.
- **Solution** - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
- **Revenue impact** - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities. No newline at end of file

Copilot AI Oct 31, 2025


The 'Business value and revenue impact' section is not part of the README template and contains internal business justification content that should not be in user-facing documentation. This entire section (lines 183-202) should be removed from the README.

Suggested change
The examples provided are intended to help you effectively use Fivetran's Connector SDK. While we've tested the code, Fivetran cannot be held responsible for any unexpected or negative consequences that may arise from using these examples. For inquiries, please reach out to our Support team.
---
## Business value and revenue impact
This Refiner connector directly helps Fivetran customers unlock survey analytics at scale:
### Customer feedback analytics
- **Problem** - Companies rely on manual CSV exports to analyze survey responses.
- **Solution** - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
- **Revenue impact** - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.
### User-level insights
- **Problem** - Product and growth teams cannot easily correlate survey responses with user behavior data.
- **Solution** - Connector enables joining survey data with product usage in the warehouse via user ID.
- **Revenue impact** - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.
### Operational efficiency
- **Problem** - Manual exports are time-consuming and error-prone.
- **Solution** - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
- **Revenue impact** - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities.
The examples provided are intended to help you effectively use Fivetran's Connector SDK. While we've tested the code, Fivetran cannot be held responsible for any unexpected or negative consequences that may arise from using these examples. For inquiries, please reach out to our Support team.

Comment on lines +190 to +202
- **Problem** - Companies rely on manual CSV exports to analyze survey responses.
- **Solution** - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
- **Revenue impact** - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.

### User-level insights
- **Problem** - Product and growth teams cannot easily correlate survey responses with user behavior data.
- **Solution** - Connector enables joining survey data with product usage in the warehouse via user ID.
- **Revenue impact** - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.

### Operational efficiency
- **Problem** - Manual exports are time-consuming and error-prone.
- **Solution** - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
- **Revenue impact** - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities. No newline at end of file

Copilot AI Oct 31, 2025


Bold text should only be used for UI element names according to the documentation guidelines. Remove bold formatting from 'Problem', 'Solution', and 'Revenue impact' throughout the Business value section (if that section is retained).

Suggested change
- **Problem** - Companies rely on manual CSV exports to analyze survey responses.
- **Solution** - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
- **Revenue impact** - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.
### User-level insights
- **Problem** - Product and growth teams cannot easily correlate survey responses with user behavior data.
- **Solution** - Connector enables joining survey data with product usage in the warehouse via user ID.
- **Revenue impact** - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.
### Operational efficiency
- **Problem** - Manual exports are time-consuming and error-prone.
- **Solution** - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
- **Revenue impact** - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities.
- Problem - Companies rely on manual CSV exports to analyze survey responses.
- Solution - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
- Revenue impact - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.
### User-level insights
- Problem - Product and growth teams cannot easily correlate survey responses with user behavior data.
- Solution - Connector enables joining survey data with product usage in the warehouse via user ID.
- Revenue impact - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.
### Operational efficiency
- Problem - Manual exports are time-consuming and error-prone.
- Solution - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
- Revenue impact - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities.

    api_key = configuration.get("api_key")
    start_date = configuration.get("start_date", __DEFAULT_START_DATE)

    last_survey_sync = state.get("last_survey_sync", start_date)

Copilot AI Oct 31, 2025


Variable last_survey_sync is not used.

Suggested change
last_survey_sync = state.get("last_survey_sync", start_date)

]


def make_api_request(url: str, headers: dict, params: dict = None) -> dict:

Copilot AI Oct 31, 2025


Mixing implicit and explicit returns may indicate an error, as implicit returns always return None.

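Since the PR description mentions exponential backoff and retries, here is a rough sketch of a request helper in which every path returns or raises explicitly, avoiding the implicit `None` return flagged above. It assumes the `requests` library and illustrative retry settings rather than the PR's actual values:

```python
import time
import requests

def make_api_request(url: str, headers: dict, params: dict = None, max_retries: int = 3) -> dict:
    """Return the decoded JSON payload, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            response = requests.get(url, headers=headers, params=params, timeout=30)
            if response.status_code == 429 or response.status_code >= 500:
                # Transient failure: back off and retry.
                time.sleep(2 ** attempt)
                continue
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)
    # Explicit terminal path instead of falling off the end and implicitly returning None.
    raise RuntimeError(f"Request to {url} failed after {max_retries} attempts")
```
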
from datetime import datetime, timezone

# For URL parameter encoding
from urllib.parse import urlencode

Copilot AI Oct 31, 2025


Import of 'urlencode' is not used.

Suggested change
from urllib.parse import urlencode


Copilot AI left a comment


Pull Request Overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.

Comments suppressed due to low confidence (1)

connectors/refiner/connector.py:28

  • Import of 'urlencode' is not used.
from urllib.parse import urlencode

from datetime import datetime, timezone

# For URL parameter encoding
from urllib.parse import urlencode

Copilot AI Oct 31, 2025


The urlencode import is unused throughout the connector. Remove this unused import to keep dependencies minimal.

Suggested change
from urllib.parse import urlencode

Comment on lines +114 to +116
### Configuration validation (`validate_configuration()`)
- Validates required `api_key` field exists and is not empty
- Provides clear error messages for configuration issues

Copilot AI Oct 31, 2025


This entire section documenting validate_configuration() should be removed since the function should not exist when using configuration.json. The SDK handles configuration validation automatically.

Comment on lines +581 to +611
    try:
        surveys_synced = fetch_surveys(api_key, state)
        log.info(f"Synced {surveys_synced} surveys")

        contacts_synced = fetch_contacts(api_key, state)
        log.info(f"Synced {contacts_synced} contacts")

        responses_synced = fetch_responses(api_key, state, last_response_sync)
        log.info(f"Synced {responses_synced} responses")

        # Update sync timestamps - note that last_response_sync is already updated in fetch_responses
        state["last_survey_sync"] = current_sync_time
        state["last_contact_sync"] = current_sync_time

        # Only update last_response_sync if no responses were found (use current time as marker)
        if responses_synced == 0:
            state["last_response_sync"] = current_sync_time

        # Save the progress by checkpointing the state. This is important for ensuring that the sync process can resume
        # from the correct position in case of next sync or interruptions.
        # Learn more about how and where to checkpoint by reading our best practices documentation
        # (https://fivetran.com/docs/connectors/connector-sdk/best-practices#largedatasetrecommendation).
        op.checkpoint(state)

        log.info(f"Sync completed successfully at {current_sync_time}")

    except Exception as e:
        log.severe(f"Sync failed: {e}")
        raise RuntimeError(f"Sync failed: {str(e)}")



Copilot AI Oct 31, 2025


Using a bare except Exception is too broad. Consider catching more specific exceptions or removing this try-except entirely since the helper functions already handle their own errors appropriately and re-raise RuntimeError.

Suggested change
try:
surveys_synced = fetch_surveys(api_key, state)
log.info(f"Synced {surveys_synced} surveys")
contacts_synced = fetch_contacts(api_key, state)
log.info(f"Synced {contacts_synced} contacts")
responses_synced = fetch_responses(api_key, state, last_response_sync)
log.info(f"Synced {responses_synced} responses")
# Update sync timestamps - note that last_response_sync is already updated in fetch_responses
state["last_survey_sync"] = current_sync_time
state["last_contact_sync"] = current_sync_time
# Only update last_response_sync if no responses were found (use current time as marker)
if responses_synced == 0:
state["last_response_sync"] = current_sync_time
# Save the progress by checkpointing the state. This is important for ensuring that the sync process can resume
# from the correct position in case of next sync or interruptions.
# Learn more about how and where to checkpoint by reading our best practices documentation
# (https://fivetran.com/docs/connectors/connector-sdk/best-practices#largedatasetrecommendation).
op.checkpoint(state)
log.info(f"Sync completed successfully at {current_sync_time}")
except Exception as e:
log.severe(f"Sync failed: {e}")
raise RuntimeError(f"Sync failed: {str(e)}")
surveys_synced = fetch_surveys(api_key, state)
log.info(f"Synced {surveys_synced} surveys")
contacts_synced = fetch_contacts(api_key, state)
log.info(f"Synced {contacts_synced} contacts")
responses_synced = fetch_responses(api_key, state, last_response_sync)
log.info(f"Synced {responses_synced} responses")
# Update sync timestamps - note that last_response_sync is already updated in fetch_responses
state["last_survey_sync"] = current_sync_time
state["last_contact_sync"] = current_sync_time
# Only update last_response_sync if no responses were found (use current time as marker)
if responses_synced == 0:
state["last_response_sync"] = current_sync_time
# Save the progress by checkpointing the state. This is important for ensuring that the sync process can resume
# from the correct position in case of next sync or interruptions.
# Learn more about how and where to checkpoint by reading our best practices documentation
# (https://fivetran.com/docs/connectors/connector-sdk/best-practices#largedatasetrecommendation).
op.checkpoint(state)
log.info(f"Sync completed successfully at {current_sync_time}")

Comment on lines +183 to +202
---

## Business value and revenue impact

This Refiner connector directly helps Fivetran customers unlock survey analytics at scale:

### Customer feedback analytics
- **Problem** - Companies rely on manual CSV exports to analyze survey responses.
- **Solution** - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
- **Revenue impact** - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.

### User-level insights
- **Problem** - Product and growth teams cannot easily correlate survey responses with user behavior data.
- **Solution** - Connector enables joining survey data with product usage in the warehouse via user ID.
- **Revenue impact** - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.

### Operational efficiency
- **Problem** - Manual exports are time-consuming and error-prone.
- **Solution** - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
- **Revenue impact** - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities. No newline at end of file

Copilot AI Oct 31, 2025


The 'Business value and revenue impact' section (lines 185-202) should not be included in the README. This is internal business content that doesn't belong in example connector documentation. Remove this entire section.

Suggested change
---
## Business value and revenue impact
This Refiner connector directly helps Fivetran customers unlock survey analytics at scale:
### Customer feedback analytics
- **Problem** - Companies rely on manual CSV exports to analyze survey responses.
- **Solution** - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
- **Revenue impact** - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.
### User-level insights
- **Problem** - Product and growth teams cannot easily correlate survey responses with user behavior data.
- **Solution** - Connector enables joining survey data with product usage in the warehouse via user ID.
- **Revenue impact** - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.
### Operational efficiency
- **Problem** - Manual exports are time-consuming and error-prone.
- **Solution** - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
- **Revenue impact** - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities.
---


@fivetran-sushmitha Left a few suggestions. Thanks.
Also, the main README.md file is missing.

@@ -0,0 +1,202 @@
# Connector SDK Refiner Survey Analytics Connector


Suggested change
# Connector SDK Refiner Survey Analytics Connector
# Connector SDK Refiner Survey Analytics Connector Example

}
```

Configuration parameters:


Suggested change
Configuration parameters:
### Configuration parameters

The connector uses Bearer token authentication via the `Authorization` header. To obtain your API key:

1. Log in to your Refiner account.
2. Go to **Settings > Integrations > API**.


Suggested change
2. Go to **Settings > Integrations > API**.
2. Go to **Settings** > **Integrations** > **API**.

The API key is included in every request as `Authorization: Bearer YOUR_API_KEY`.

## Pagination
The connector handles pagination automatically using Refiner API's page-based pagination structure. The API supports the following pagination parameters:


Suggested change
The connector handles pagination automatically using Refiner API's page-based pagination structure. The API supports the following pagination parameters:
The connector handles pagination automatically using the Refiner API's page-based pagination structure. The API supports the following pagination parameters:

The connector uses page-based pagination with automatic detection of the last page:
- Each sync processes all paginated data completely using the `pagination.current_page` and `pagination.last_page` response fields.
- Pagination state is not persisted between sync runs for cleaner state management.
- Uses the `date_range_start` parameter to filter responses from API directly for incremental syncs.


Suggested change
- Uses the `date_range_start` parameter to filter responses from API directly for incremental syncs.
- Uses the `date_range_start` parameter to filter responses from the API directly for incremental syncs.
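
For reference, pagination driven by the `pagination.current_page` / `pagination.last_page` fields described above usually reduces to a loop like this. The `items` payload key, the helper name, and the use of `requests` are assumptions, not details taken from the connector:

```python
import requests

def fetch_all_pages(api_key: str, url: str, base_params: dict) -> list:
    headers = {"Authorization": f"Bearer {api_key}"}
    collected, page = [], 1
    while True:
        response = requests.get(url, headers=headers, params={**base_params, "page": page}, timeout=30)
        response.raise_for_status()
        payload = response.json()
        collected.extend(payload.get("items", []))  # "items" is an assumed payload key
        pagination = payload.get("pagination", {})
        if pagination.get("current_page", page) >= pagination.get("last_page", page):
            break  # stop once the reported current page reaches the last page
        page += 1
    return collected
```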

- `fetch_responses()` - Paginate through responses with date filtering

## Data handling
The connector processes survey and response data with an optimized incremental sync strategy:


Is there some content missing?

- Incremental syncs use `last_response_sync` timestamp from state to fetch only new/updated responses since last successful sync
- State tracks separate timestamps for surveys and responses
- Checkpoint every 1000 records during large response syncs to enable resumability
- Final checkpoint saves complete state only after successful sync completion


Suggested change
- Final checkpoint saves complete state only after successful sync completion
- Final checkpoint saves the complete state only after successful sync completion
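
As a sketch of the checkpoint-every-1000-records pattern being discussed, assuming the SDK's `op.upsert` and `op.checkpoint` operations that the connector already uses; the table name and record shape are illustrative:

```python
from fivetran_connector_sdk import Operations as op

CHECKPOINT_INTERVAL = 1000  # matches the interval described in the README

def upsert_responses(responses: list, state: dict):
    for count, record in enumerate(responses, start=1):
        op.upsert(table="responses", data=record)
        if count % CHECKPOINT_INTERVAL == 0:
            # Persist progress so an interrupted sync can resume from this point.
            op.checkpoint(state)
    # Final checkpoint once the batch has been processed completely.
    op.checkpoint(state)
```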

- `fetch_answers()` - Extract answers from response data
- `fetch_respondent()` - Extract or update respondent information

The connector maintains clean state with `last_survey_sync` and `last_response_sync` timestamps, automatically advancing after each successful sync to ensure reliable incremental syncs without data duplication or gaps.


Suggested change
The connector maintains clean state with `last_survey_sync` and `last_response_sync` timestamps, automatically advancing after each successful sync to ensure reliable incremental syncs without data duplication or gaps.
The connector maintains a clean state with `last_survey_sync` and `last_response_sync` timestamps, automatically advancing after each successful sync to ensure reliable incremental syncs without data duplication or gaps.

The connector implements comprehensive error handling with multiple layers of protection:

### Configuration validation (`validate_configuration()`)
- Validates required `api_key` field exists and is not empty


Suggested change
- Validates required `api_key` field exists and is not empty
- Validates the required `api_key` field exists and is not empty

Comment on lines +130 to +133
### Checkpoint recovery
- Checkpoints every 1000 records during large syncs enable recovery from interruptions
- State tracking allows sync to resume from last successful checkpoint
- Final checkpoint only saved after complete successful sync


Suggested change
### Checkpoint recovery
- Checkpoints every 1000 records during large syncs enable recovery from interruptions
- State tracking allows sync to resume from last successful checkpoint
- Final checkpoint only saved after complete successful sync
### Checkpoint recovery
- Checkpoints every 1000 records during large syncs enable recovery from interruptions
- State tracking allows sync to resume from the last successful checkpoint
- Final checkpoint only saved after a complete successful sync


@fivetran-sahilkhirwal fivetran-sahilkhirwal left a comment


Please address these comments as well as the Copilot comments :)

    return questions_count


def fetch_responses(api_key: str, state: dict, last_sync_time: str) -> int:

This function is very long. Can you please break it into smaller functions to make it more readable and maintainable? :)
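
One way to read this suggestion, purely as an outline; every helper name here is hypothetical:

```python
def build_response_params(last_sync_time: str) -> dict:
    # Query-building concern isolated here.
    return {"date_range_start": last_sync_time}

def iterate_response_pages(api_key: str, params: dict):
    # Pagination loop isolated from business logic; yields one page of responses at a time.
    yield []  # placeholder page for illustration

def process_response_page(page: list, state: dict) -> int:
    # Per-page transformation, upserts, and state updates would live here.
    return len(page)

def fetch_responses(api_key: str, state: dict, last_sync_time: str) -> int:
    # Thin orchestrator composed of the small helpers above.
    params = build_response_params(last_sync_time)
    return sum(process_response_page(page, state) for page in iterate_response_pages(api_key, params))
```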

Comment on lines +612 to +621
# Create the connector object using the schema and update functions
connector = Connector(update=update, schema=schema)

# Check if the script is being run as the main module.
# This is Python's standard entry method allowing your script to be run directly from the command line or IDE 'run' button.
# This is useful for debugging while you write your code. Note this method is not called by Fivetran when executing your connector in production.
# Please test using the Fivetran debug command prior to finalizing and deploying your connector.
if __name__ == "__main__":
    # Test the connector locally
    connector.debug()

# Create the connector object using the schema and update functions
connector = Connector(update=update, schema=schema)

# Check if the script is being run as the main module.
# This is Python's standard entry method allowing your script to be run directly from the command line or IDE 'run' button.
# This is useful for debugging while you write your code. Note this method is not called by Fivetran when executing your connector in production.
# Please test using the Fivetran debug command prior to finalizing and deploying your connector.
if __name__ == "__main__":
    # Open the configuration.json file and load its contents
    with open("configuration.json", "r") as f:
        configuration = json.load(f)
    # Test the connector locally
    connector.debug(configuration=configuration)

Please follow this template :)

Comment on lines +540 to +553
        if __USE_CURSOR_PAGINATION and next_page_cursor:
            page_cursor = next_page_cursor
        elif not __USE_CURSOR_PAGINATION:
            current_page = pagination.get("current_page", page)
            last_page = pagination.get("last_page", page)

            if current_page >= last_page:
                log.info(f"Reached last page of contacts: {last_page}")
                break
            page += 1
        else:
            log.info("No next page cursor, pagination complete")
            break


This logic is repeated in the code. We can extract it into a helper method and reuse it wherever the page handling is needed :)
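
A shared helper along those lines might look like the sketch below; the name and the `None`-means-done convention are assumptions, and it covers only the page-number branch (cursor pagination would stay separate):

```python
def next_page_number(pagination: dict, current_page: int):
    """Return the next page to request, or None once the last page has been processed."""
    last_page = pagination.get("last_page", current_page)
    if pagination.get("current_page", current_page) >= last_page:
        return None
    return current_page + 1

# Usage inside a fetch loop:
#     page = next_page_number(payload.get("pagination", {}), page)
#     if page is None:
#         break
```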

@fivetran-sahilkhirwal

Please link the example to the main README.
Also, there are some flake8 issues present; please resolve them as well.
You can check the flake8 issue report here.
