example(refiner): added new connector for refiner connector #437
base: main
Conversation
🧹 Python Code Quality Check — Download the full report from the workflow artifacts. Only Python files changed in this PR were checked. This comment is auto-updated with every commit.
Pull Request Overview
This PR adds a new Refiner Survey Analytics Connector that integrates with the Refiner API to sync NPS survey data, responses, and user information into Fivetran destinations. The connector implements incremental syncs with cursor-based pagination, proper error handling, and comprehensive state management.
Key changes:
- New connector implementation with 5 normalized tables (surveys, questions, responses, answers, respondents)
- Incremental sync support based on `last_data_reception_at` timestamps with configurable start dates (a minimal sketch of this pattern follows this list)
- Comprehensive error handling with exponential backoff and proper retry logic for transient failures
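For readers new to the pattern, here is a minimal sketch of an incremental, page-based sync with a stored cursor and checkpointing, in the style of the Fivetran Connector SDK. The endpoint URL, response field names, and default start date are assumptions for illustration only; the connector's actual implementation lives in `connectors/refiner/connector.py`.

```python
import requests
from fivetran_connector_sdk import Operations as op


def sync_responses(api_key: str, state: dict) -> None:
    # Resume from the stored cursor, or fall back to an assumed default start date.
    last_sync = state.get("last_response_sync", "2024-01-01T00:00:00Z")
    page = 1
    while True:
        resp = requests.get(
            "https://api.refiner.io/v1/responses",  # assumed endpoint path
            headers={"Authorization": f"Bearer {api_key}"},
            params={"page": page, "date_range_start": last_sync},
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        for item in payload.get("items", []):  # "items" is an assumed field name
            op.upsert(table="responses", data=item)
            # Advance the high-water mark (ISO 8601 strings compare chronologically).
            last_sync = max(last_sync, item.get("last_data_reception_at", last_sync))
        pagination = payload.get("pagination", {})
        if pagination.get("current_page", page) >= pagination.get("last_page", page):
            break
        page += 1
    state["last_response_sync"] = last_sync
    op.checkpoint(state)  # persist progress so the next sync resumes from here
```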
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| connectors/refiner/connector.py | Main connector implementation with API integration, pagination, state management, and data transformation logic |
| connectors/refiner/configuration.json | Configuration template with API key and optional start date parameters |
| connectors/refiner/README.md | Complete documentation covering setup, features, authentication, data handling, and table schemas |
```python
        if date_string.endswith("Z"):
            date_string = date_string[:-1] + "+00:00"
        return datetime.fromisoformat(date_string).astimezone(timezone.utc)
    except Exception as e:
```
Copilot AI · Oct 31, 2025:
This catches the generic Exception rather than a specific exception type. Replace it with ValueError, since datetime.fromisoformat() raises ValueError for invalid formats.
Suggested change:
```diff
-    except Exception as e:
+    except ValueError as e:
```
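For context, this is roughly what the parsing helper looks like with the narrower exception applied; the function name and the fallback value here are illustrative, not the PR's actual code.

```python
from datetime import datetime, timezone


def parse_iso_timestamp(date_string: str) -> datetime:
    """Parse an ISO 8601 string (optionally ending in 'Z') into an aware UTC datetime."""
    if date_string.endswith("Z"):
        date_string = date_string[:-1] + "+00:00"
    try:
        return datetime.fromisoformat(date_string).astimezone(timezone.utc)
    except ValueError:
        # fromisoformat() raises ValueError for malformed input; return a sentinel instead.
        return datetime(1970, 1, 1, tzinfo=timezone.utc)
```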
```python
    except Exception as e:
        log.severe(f"Sync failed: {e}")
        raise RuntimeError(f"Sync failed: {str(e)}")
```
Copilot AI · Oct 31, 2025:
Catching the generic Exception is too broad for the top-level error handler. Consider catching specific exception types or, at a minimum, re-raising without wrapping to preserve the original exception type and stack trace.
Suggested change:
```diff
-        raise RuntimeError(f"Sync failed: {str(e)}")
+        raise
```
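A generic illustration of the difference (none of these names come from the connector): wrapping discards the original exception type, while a bare `raise` preserves both the type and the traceback.

```python
def risky() -> None:
    raise ValueError("bad input")


def wrapped() -> None:
    try:
        risky()
    except Exception as e:
        # Callers now see RuntimeError and lose the original ValueError type.
        raise RuntimeError(f"Sync failed: {e}")


def preserved() -> None:
    try:
        risky()
    except Exception:
        # Re-raises the original ValueError with its traceback intact.
        raise
```

If a wrapper type is genuinely wanted, `raise RuntimeError(...) from e` at least keeps the original exception chained as `__cause__`.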
> - **Type safety** - Configuration validation ensures required fields exist before processing
>
> ### Key functions
> - `validate_configuration()` - Validates required API key configuration
Copilot AI · Oct 31, 2025:
The documentation references the validate_configuration() function, which should not exist according to SDK v2+ guidelines (validation is automatic when configuration.json is present). Remove this reference, or clarify whether there is custom validation logic beyond what the SDK provides.
Suggested change - remove this line:
```diff
-- `validate_configuration()` - Validates required API key configuration
```
> The examples provided are intended to help you effectively use Fivetran's Connector SDK. While we've tested the code, Fivetran cannot be held responsible for any unexpected or negative consequences that may arise from using these examples. For inquiries, please reach out to our Support team.
>
> ---
>
> ## Business value and revenue impact
>
> This Refiner connector directly helps Fivetran customers unlock survey analytics at scale:
>
> ### Customer feedback analytics
> - **Problem** - Companies rely on manual CSV exports to analyze survey responses.
> - **Solution** - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
> - **Revenue impact** - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.
>
> ### User-level insights
> - **Problem** - Product and growth teams cannot easily correlate survey responses with user behavior data.
> - **Solution** - Connector enables joining survey data with product usage in the warehouse via user ID.
> - **Revenue impact** - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.
>
> ### Operational efficiency
> - **Problem** - Manual exports are time-consuming and error-prone.
> - **Solution** - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
> - **Revenue impact** - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities.
Copilot AI · Oct 31, 2025:
The 'Business value and revenue impact' section is not part of the README template and contains internal business justification content that should not be in user-facing documentation. This entire section (lines 183-202) should be removed from the README.
Suggested change - keep only the disclaimer paragraph and remove the business value content:
> The examples provided are intended to help you effectively use Fivetran's Connector SDK. While we've tested the code, Fivetran cannot be held responsible for any unexpected or negative consequences that may arise from using these examples. For inquiries, please reach out to our Support team.
> - **Problem** - Companies rely on manual CSV exports to analyze survey responses.
> - **Solution** - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
> - **Revenue impact** - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.
>
> ### User-level insights
> - **Problem** - Product and growth teams cannot easily correlate survey responses with user behavior data.
> - **Solution** - Connector enables joining survey data with product usage in the warehouse via user ID.
> - **Revenue impact** - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.
>
> ### Operational efficiency
> - **Problem** - Manual exports are time-consuming and error-prone.
> - **Solution** - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
> - **Revenue impact** - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities.
Copilot AI · Oct 31, 2025:
Bold text should only be used for UI element names according to the documentation guidelines. Remove bold formatting from 'Problem', 'Solution', and 'Revenue impact' throughout the Business value section (if that section is retained).
Suggested change - remove the bold formatting:
> - Problem - Companies rely on manual CSV exports to analyze survey responses.
> - Solution - Automated connector syncs survey and response data keyed by user ID, ready for analysis in BI tools.
> - Revenue impact - Opens new market segment (customer feedback analytics); supports product and marketing analytics customers.
>
> ### User-level insights
> - Problem - Product and growth teams cannot easily correlate survey responses with user behavior data.
> - Solution - Connector enables joining survey data with product usage in the warehouse via user ID.
> - Revenue impact - Drives adoption among SaaS and growth teams using Refiner to measure user satisfaction and NPS.
>
> ### Operational efficiency
> - Problem - Manual exports are time-consuming and error-prone.
> - Solution - Fivetran automation eliminates manual work and ensures consistent incremental syncs.
> - Revenue impact - Strengthens positioning in the feedback automation space, increasing customer retention and cross-sell opportunities.
```python
    api_key = configuration.get("api_key")
    start_date = configuration.get("start_date", __DEFAULT_START_DATE)

    last_survey_sync = state.get("last_survey_sync", start_date)
```
Copilot AI · Oct 31, 2025:
Variable last_survey_sync is not used.
Suggested change - remove the unused assignment:
```diff
-    last_survey_sync = state.get("last_survey_sync", start_date)
```
```python
]


def make_api_request(url: str, headers: dict, params: dict = None) -> dict:
```
Copilot AI · Oct 31, 2025:
Mixing implicit and explicit returns may indicate an error, as implicit returns always return None.
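A generic example of what this warning refers to (not the connector's actual `make_api_request` body): any code path without an explicit `return` silently returns `None`.

```python
def lookup(values: dict, key: str) -> int:
    if key in values:
        return values[key]
    # Implicit path: execution falls off the end and returns None, despite the -> int hint.


def lookup_fixed(values: dict, key: str) -> int:
    if key in values:
        return values[key]
    return 0  # every path now returns explicitly
```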
```python
from datetime import datetime, timezone

# For URL parameter encoding
from urllib.parse import urlencode
```
Copilot AI · Oct 31, 2025:
Import of 'urlencode' is not used.
Suggested change - remove the unused import:
```diff
-from urllib.parse import urlencode
```
Pull Request Overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
- connectors/refiner/connector.py:28 — Import of 'urlencode' is not used (`from urllib.parse import urlencode`)
```python
from datetime import datetime, timezone

# For URL parameter encoding
from urllib.parse import urlencode
```
Copilot AI · Oct 31, 2025:
The urlencode import is unused throughout the connector. Remove this unused import to keep dependencies minimal.
Suggested change - remove the unused import:
```diff
-from urllib.parse import urlencode
```
> ### Configuration validation (`validate_configuration()`)
> - Validates required `api_key` field exists and is not empty
> - Provides clear error messages for configuration issues
Copilot AI · Oct 31, 2025:
This entire section documenting validate_configuration() should be removed since the function should not exist when using configuration.json. The SDK handles configuration validation automatically.
```python
    try:
        surveys_synced = fetch_surveys(api_key, state)
        log.info(f"Synced {surveys_synced} surveys")

        contacts_synced = fetch_contacts(api_key, state)
        log.info(f"Synced {contacts_synced} contacts")

        responses_synced = fetch_responses(api_key, state, last_response_sync)
        log.info(f"Synced {responses_synced} responses")

        # Update sync timestamps - note that last_response_sync is already updated in fetch_responses
        state["last_survey_sync"] = current_sync_time
        state["last_contact_sync"] = current_sync_time

        # Only update last_response_sync if no responses were found (use current time as marker)
        if responses_synced == 0:
            state["last_response_sync"] = current_sync_time

        # Save the progress by checkpointing the state. This is important for ensuring that the sync process can resume
        # from the correct position in case of next sync or interruptions.
        # Learn more about how and where to checkpoint by reading our best practices documentation
        # (https://fivetran.com/docs/connectors/connector-sdk/best-practices#largedatasetrecommendation).
        op.checkpoint(state)

        log.info(f"Sync completed successfully at {current_sync_time}")

    except Exception as e:
        log.severe(f"Sync failed: {e}")
        raise RuntimeError(f"Sync failed: {str(e)}")
```
Copilot AI · Oct 31, 2025:
Using a bare except Exception is too broad. Consider catching more specific exceptions or removing this try-except entirely since the helper functions already handle their own errors appropriately and re-raise RuntimeError.
Suggested change - remove the try/except wrapper and run the sync body directly:
```python
    surveys_synced = fetch_surveys(api_key, state)
    log.info(f"Synced {surveys_synced} surveys")

    contacts_synced = fetch_contacts(api_key, state)
    log.info(f"Synced {contacts_synced} contacts")

    responses_synced = fetch_responses(api_key, state, last_response_sync)
    log.info(f"Synced {responses_synced} responses")

    # Update sync timestamps - note that last_response_sync is already updated in fetch_responses
    state["last_survey_sync"] = current_sync_time
    state["last_contact_sync"] = current_sync_time

    # Only update last_response_sync if no responses were found (use current time as marker)
    if responses_synced == 0:
        state["last_response_sync"] = current_sync_time

    # Save the progress by checkpointing the state. This is important for ensuring that the sync process can resume
    # from the correct position in case of next sync or interruptions.
    # Learn more about how and where to checkpoint by reading our best practices documentation
    # (https://fivetran.com/docs/connectors/connector-sdk/best-practices#largedatasetrecommendation).
    op.checkpoint(state)

    log.info(f"Sync completed successfully at {current_sync_time}")
```
> *(The quoted context here is the same `## Business value and revenue impact` section shown earlier in this review.)*
Copilot AI · Oct 31, 2025:
The 'Business value and revenue impact' section (lines 185-202) should not be included in the README. This is internal business content that doesn't belong in example connector documentation. Remove this entire section.
Suggested change - remove the entire `## Business value and revenue impact` section, keeping only the `---` divider that precedes it.
fivetran-chinmayichandrasekar left a comment:
@fivetran-sushmitha Left a few suggestions. Thanks.
Also, the main Readme.md file is missing.
```
@@ -0,0 +1,202 @@
# Connector SDK Refiner Survey Analytics Connector
```
Suggested change:
```diff
-# Connector SDK Refiner Survey Analytics Connector
+# Connector SDK Refiner Survey Analytics Connector Example
```
````
}
```

Configuration parameters:
````
Suggested change:
```diff
-Configuration parameters:
+### Configuration parameters
```
> The connector uses Bearer token authentication via the `Authorization` header. To obtain your API key:
>
> 1. Log in to your Refiner account.
> 2. Go to **Settings > Integrations > API**.
Suggested change:
```diff
-2. Go to **Settings > Integrations > API**.
+2. Go to **Settings** > **Integrations** > **API**.
```
> The API key is included in every request as `Authorization: Bearer YOUR_API_KEY`.
>
> ## Pagination
> The connector handles pagination automatically using Refiner API's page-based pagination structure. The API supports the following pagination parameters:
Suggested change:
```diff
-The connector handles pagination automatically using Refiner API's page-based pagination structure. The API supports the following pagination parameters:
+The connector handles pagination automatically using the Refiner API's page-based pagination structure. The API supports the following pagination parameters:
```
> The connector uses page-based pagination with automatic detection of the last page:
> - Each sync processes all paginated data completely using the `pagination.current_page` and `pagination.last_page` response fields.
> - Pagination state is not persisted between sync runs for cleaner state management.
> - Uses the `date_range_start` parameter to filter responses from API directly for incremental syncs.
Suggested change:
```diff
-- Uses the `date_range_start` parameter to filter responses from API directly for incremental syncs.
+- Uses the `date_range_start` parameter to filter responses from the API directly for incremental syncs.
```
> - `fetch_responses()` - Paginate through responses with date filtering
>
> ## Data handling
> The connector processes survey and response data with an optimized incremental sync strategy:
Is there some content missing?
> - Incremental syncs use `last_response_sync` timestamp from state to fetch only new/updated responses since last successful sync
> - State tracks separate timestamps for surveys and responses
> - Checkpoint every 1000 records during large response syncs to enable resumability
> - Final checkpoint saves complete state only after successful sync completion
Suggested change:
```diff
-- Final checkpoint saves complete state only after successful sync completion
+- Final checkpoint saves the complete state only after successful sync completion
```
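The checkpointing behaviour described in the quoted README could look roughly like the sketch below; the table name, cursor field, and batch size of 1000 come from the README text, while everything else is an assumption for illustration.

```python
from fivetran_connector_sdk import Operations as op


def upsert_with_checkpoints(rows: list, state: dict) -> None:
    for count, row in enumerate(rows, start=1):
        op.upsert(table="responses", data=row)
        state["last_response_sync"] = row.get("last_data_reception_at", state.get("last_response_sync"))
        if count % 1000 == 0:
            op.checkpoint(state)  # periodic checkpoint so an interrupted sync can resume nearby
    op.checkpoint(state)  # final checkpoint after the batch completes successfully
```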
> - `fetch_answers()` - Extract answers from response data
> - `fetch_respondent()` - Extract or update respondent information
>
> The connector maintains clean state with `last_survey_sync` and `last_response_sync` timestamps, automatically advancing after each successful sync to ensure reliable incremental syncs without data duplication or gaps.
Suggested change:
```diff
-The connector maintains clean state with `last_survey_sync` and `last_response_sync` timestamps, automatically advancing after each successful sync to ensure reliable incremental syncs without data duplication or gaps.
+The connector maintains a clean state with `last_survey_sync` and `last_response_sync` timestamps, automatically advancing after each successful sync to ensure reliable incremental syncs without data duplication or gaps.
```
> The connector implements comprehensive error handling with multiple layers of protection:
>
> ### Configuration validation (`validate_configuration()`)
> - Validates required `api_key` field exists and is not empty
Suggested change:
```diff
-- Validates required `api_key` field exists and is not empty
+- Validates the required `api_key` field exists and is not empty
```
> ### Checkpoint recovery
> - Checkpoints every 1000 records during large syncs enable recovery from interruptions
> - State tracking allows sync to resume from last successful checkpoint
> - Final checkpoint only saved after complete successful sync
Suggested change:
```diff
 ### Checkpoint recovery
 - Checkpoints every 1000 records during large syncs enable recovery from interruptions
-- State tracking allows sync to resume from last successful checkpoint
-- Final checkpoint only saved after complete successful sync
+- State tracking allows sync to resume from the last successful checkpoint
+- Final checkpoint only saved after a complete successful sync
```
fivetran-sahilkhirwal left a comment:
Please address these comments as well as the Copilot comments :)
```python
    return questions_count


def fetch_responses(api_key: str, state: dict, last_sync_time: str) -> int:
```
This function is very long. Can you please break it into smaller functions to make it more readable and maintainable? :)
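One possible decomposition, sketched with assumed helper names, endpoint path, and response field names; the real function may differ in its details.

```python
import requests
from fivetran_connector_sdk import Operations as op


def fetch_responses_page(api_key: str, since: str, page: int) -> dict:
    """Fetch a single page of responses updated since the given timestamp."""
    resp = requests.get(
        "https://api.refiner.io/v1/responses",  # assumed endpoint path
        headers={"Authorization": f"Bearer {api_key}"},
        params={"page": page, "date_range_start": since},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def process_response_items(items: list, state: dict) -> int:
    """Upsert each response row and advance the stored cursor."""
    for item in items:
        op.upsert(table="responses", data=item)
        state["last_response_sync"] = item.get("last_data_reception_at", state.get("last_response_sync"))
    return len(items)


def fetch_responses(api_key: str, state: dict, last_sync_time: str) -> int:
    """Thin orchestrator: loop over pages and delegate the per-page work to helpers."""
    total = 0
    page = 1
    while True:
        payload = fetch_responses_page(api_key, last_sync_time, page)
        total += process_response_items(payload.get("items", []), state)  # "items" is assumed
        pagination = payload.get("pagination", {})
        if pagination.get("current_page", page) >= pagination.get("last_page", page):
            break
        page += 1
    return total
```

Splitting along the page-fetch / row-processing boundary also makes each piece unit-testable on its own.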
```python
# Create the connector object using the schema and update functions
connector = Connector(update=update, schema=schema)

# Check if the script is being run as the main module.
# This is Python's standard entry method allowing your script to be run directly from the command line or IDE 'run' button.
# This is useful for debugging while you write your code. Note this method is not called by Fivetran when executing your connector in production.
# Please test using the Fivetran debug command prior to finalizing and deploying your connector.
if __name__ == "__main__":
    # Test the connector locally
    connector.debug()
```
fivetran_connector_sdk/template_example_connector/connector.py, lines 153 to 166 in 49bfe57:
```python
# Create the connector object using the schema and update functions
connector = Connector(update=update, schema=schema)

# Check if the script is being run as the main module.
# This is Python's standard entry method allowing your script to be run directly from the command line or IDE 'run' button.
# This is useful for debugging while you write your code. Note this method is not called by Fivetran when executing your connector in production.
# Please test using the Fivetran debug command prior to finalizing and deploying your connector.
if __name__ == "__main__":
    # Open the configuration.json file and load its contents
    with open("configuration.json", "r") as f:
        configuration = json.load(f)

    # Test the connector locally
    connector.debug(configuration=configuration)
```
Please follow this template :)
```python
        if __USE_CURSOR_PAGINATION and next_page_cursor:
            page_cursor = next_page_cursor
        elif not __USE_CURSOR_PAGINATION:
            current_page = pagination.get("current_page", page)
            last_page = pagination.get("last_page", page)

            if current_page >= last_page:
                log.info(f"Reached last page of contacts: {last_page}")
                break
            page += 1
        else:
            log.info("No next page cursor, pagination complete")
            break
```
This logic is repeated in the code. We can create a method and reuse it wherever we need the page handling :)
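One way to factor the repeated block into a reusable helper, using the names visible in the quoted snippet; the exact signature is a suggestion, not a requirement.

```python
def advance_page(pagination: dict, page: int, next_page_cursor, use_cursor: bool):
    """Return (next_page, next_cursor, done) for cursor- or page-based pagination."""
    if use_cursor:
        if next_page_cursor:
            return page, next_page_cursor, False
        return page, None, True  # no next cursor, pagination complete
    current_page = pagination.get("current_page", page)
    last_page = pagination.get("last_page", page)
    if current_page >= last_page:
        return page, None, True  # reached the last page
    return page + 1, None, False
```

Each fetch loop could then call `page, page_cursor, done = advance_page(pagination, page, next_page_cursor, __USE_CURSOR_PAGINATION)` and `break` when `done` is true.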
Please link the example to the main README.
Jira ticket
Closes <ADD TICKET LINK HERE, EACH PR MUST BE LINKED TO A JIRA TICKET>

Description of Change
<MENTION A SHORT DESCRIPTION OF YOUR CHANGES HERE>

Testing
<MENTION ABOUT YOUR TESTING DETAILS HERE, ATTACH SCREENSHOTS IF NEEDED (WITHOUT PII)>

Checklist
Some tips and links to help validate your PR:
- Test your changes locally using the `fivetran debug` command.