diff --git a/docs/how_to/parameterization.md b/docs/how_to/parameterization.md index 2bd33025..2e0a91d2 100644 --- a/docs/how_to/parameterization.md +++ b/docs/how_to/parameterization.md @@ -151,6 +151,56 @@ semantic_model_binding: semantic_model_name: [,,...] ``` +### `semantic_model_refresh` + +The `semantic_model_refresh` parameter automatically refreshes semantic models after deployment. This is particularly useful for semantic models that have undergone destructive changes (such as schema modifications) or when you need to ensure data is current. The refresh uses the Power BI REST API and supports custom refresh payloads for advanced scenarios like partition-based or incremental refreshes. + +**⚠️ Feature Flag Required:** This feature requires the `enable_semantic_model_refresh` feature flag to be enabled. See [Optional Features](optional_feature.md) for information on enabling feature flags. + +**Basic usage with default full refresh:** + +```yaml +semantic_model_refresh: + # Required field: value must be a string or a list of strings + - semantic_model_name: + # OR + semantic_model_name: [,,...] +``` + +**Advanced usage with custom refresh payload:** + +```yaml +semantic_model_refresh: + # Single model with custom refresh payload + - semantic_model_name: "Sales Model" + refresh_payload: + type: "full" + objects: + - table: "Sales" + - table: "Products" + partition: "Products-2024" + commitMode: "transactional" + maxParallelism: 2 + # Additional models with default refresh + - semantic_model_name: ["Marketing Model", "Finance Model"] +``` + +**Enabling the feature:** + +```python +from fabric_cicd import append_feature_flag + +# Enable semantic model refresh +append_feature_flag("enable_semantic_model_refresh") +``` + +**Notes:** + +- If `refresh_payload` is not specified, a default full refresh (`{"type": "full"}`) is performed. 
+- The refresh is initiated asynchronously (returns HTTP 202), meaning the deployment will continue while the refresh runs in the background. +- For enhanced refresh features (partition-based refresh, custom commit modes, etc.), your workspace must be in a Premium capacity. +- For detailed information about refresh payload options, see the [Power BI REST API documentation](https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/refresh-dataset). + ## Advanced Find and Replace ### `find_value` Regex diff --git a/docs/how_to/semantic_model_deployment.md b/docs/how_to/semantic_model_deployment.md new file mode 100644 index 00000000..783bba9a --- /dev/null +++ b/docs/how_to/semantic_model_deployment.md @@ -0,0 +1,239 @@ +# Semantic Model Deployment + +## Overview + +Semantic model deployment in fabric-cicd supports advanced features including handling destructive changes and automatic refresh after deployment. This guide covers how to work with semantic models in your CI/CD pipelines. + +## Feature Flags + +The advanced semantic model features require specific feature flags to be enabled: + +- **`enable_semantic_model_destructive_change_detection`**: Enables detection and guidance for destructive changes +- **`enable_semantic_model_refresh`**: Enables automatic refresh after deployment + +```python +from fabric_cicd import append_feature_flag + +# Enable destructive change detection +append_feature_flag("enable_semantic_model_destructive_change_detection") + +# Enable semantic model refresh +append_feature_flag("enable_semantic_model_refresh") +``` + +## Handling Destructive Changes + +**⚠️ Requires Feature Flag:** This feature requires the `enable_semantic_model_destructive_change_detection` feature flag to be enabled. + +### What are Destructive Changes? + +Destructive changes are schema modifications to semantic models that can cause data loss or require the dataset to be dropped and fully reprocessed. 
Common examples include: + +- Removing or renaming columns +- Changing a column's data type +- Altering incremental refresh or partition definitions +- Removing or modifying hierarchies +- Disabling Auto date/time features + +### Error Detection + +When fabric-cicd detects a destructive change during deployment, it will: + +1. Log detailed warning messages explaining the issue +2. Provide guidance on resolution options +3. Fail the deployment (to prevent accidental data loss) + +The error will typically include: + +``` +WARNING: Semantic model 'MyModel' deployment failed due to destructive changes that require data purge. +WARNING: Destructive changes include operations like: removing columns, changing data types, + altering partition definitions, or removing hierarchies. +``` + +### Resolution Options + +When you encounter a destructive change error, you have three options: + +#### Option 1: Use XMLA Endpoint to Clear Values (Recommended) + +Before redeploying, connect to the semantic model via the XMLA endpoint and execute a TMSL command to clear values: + +**Using Tabular Editor or SSMS:** + +1. Connect to your Power BI workspace via XMLA endpoint: + - Endpoint URL: `powerbi://api.powerbi.com/v1.0/myorg/` +2. Execute the following TMSL command: + +```json +{ + "refresh": { + "type": "clearValues", + "objects": [ + { + "database": "YourSemanticModelName" + } + ] + } +} +``` + +3. 
After clearing values, redeploy using fabric-cicd + +**Using PowerShell:** + +```powershell +# Install SqlServer PowerShell module if not already installed +# This module provides the Invoke-ASCmd cmdlet for Analysis Services +Install-Module -Name SqlServer -AllowClobber + +# Define the XMLA endpoint and model +$workspaceName = "YourWorkspace" +$modelName = "YourSemanticModel" +$xmlaEndpoint = "powerbi://api.powerbi.com/v1.0/myorg/$workspaceName" + +# TMSL script to clear values +$tmslScript = @" +{ + "refresh": { + "type": "clearValues", + "objects": [ + { + "database": "$modelName" + } + ] + } +} +"@ + +# Execute the TMSL command +Invoke-ASCmd -Server $xmlaEndpoint -Query $tmslScript +``` + +#### Option 2: Delete and Recreate + +Manually delete the semantic model from the target workspace, then redeploy. This is the simplest option but results in temporary unavailability. + +#### Option 3: Revert Changes + +Review your schema changes and revert any incompatible modifications. Consider using additive changes (adding new columns/measures) instead of modifying existing ones. + +## Automatic Refresh After Deployment + +**⚠️ Requires Feature Flag:** This feature requires the `enable_semantic_model_refresh` feature flag to be enabled. + +fabric-cicd supports automatic refresh of semantic models after successful deployment using the `semantic_model_refresh` parameter. + +### Basic Usage + +First, enable the feature flag: + +```python +from fabric_cicd import append_feature_flag + +append_feature_flag("enable_semantic_model_refresh") +``` + +Then add the following to your `parameter.yml` file: + +```yaml +semantic_model_refresh: + - semantic_model_name: "Sales Model" + - semantic_model_name: ["Marketing Model", "Finance Model"] +``` + +This will perform a default full refresh on the specified models after deployment. 
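Because `semantic_model_name` accepts either a single string or a list of strings, a refresh entry can take several shapes. The following sketch shows how such entries can be flattened into one `(model, payload)` pair per refresh request, with the documented `{"type": "full"}` default applied when no payload is given. The helper name `normalize_refresh_config` is illustrative only and is not part of fabric-cicd's public API:

```python
def normalize_refresh_config(refresh_config):
    """Flatten a `semantic_model_refresh` value into (model_name, payload) pairs.

    Accepts a single mapping or a list of mappings, where
    `semantic_model_name` may be a string or a list of strings.
    """
    entries = refresh_config if isinstance(refresh_config, list) else [refresh_config]
    pairs = []
    for entry in entries:
        names = entry.get("semantic_model_name", [])
        if isinstance(names, str):
            names = [names]
        # Fall back to the documented default full refresh
        payload = entry.get("refresh_payload") or {"type": "full"}
        pairs.extend((name, payload) for name in names)
    return pairs


config = [
    {"semantic_model_name": "Sales Model"},
    {"semantic_model_name": ["Marketing Model", "Finance Model"]},
]
print(normalize_refresh_config(config))
# [('Sales Model', {'type': 'full'}), ('Marketing Model', {'type': 'full'}), ('Finance Model', {'type': 'full'})]
```

fabric-cicd performs an equivalent normalization internally before issuing one refresh call per model.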
+ +### Advanced Refresh Options + +For Premium capacities, you can specify custom refresh payloads to control exactly how the refresh is performed: + +```yaml +semantic_model_refresh: + - semantic_model_name: "Sales Model" + refresh_payload: + type: "full" + objects: + - table: "Sales" + - table: "Products" + partition: "Products-2024" + commitMode: "transactional" + maxParallelism: 2 + retryCount: 1 +``` + +**Supported refresh payload options:** + +- `type`: Type of refresh (`"full"`, `"calculate"`, `"dataOnly"`, `"automatic"`) +- `objects`: Array of tables/partitions to refresh (omit for full model refresh) +- `commitMode`: `"transactional"` or `"partialBatch"` +- `maxParallelism`: Number of parallel threads (Premium only) +- `retryCount`: Number of retry attempts on failure +- `timeout`: Timeout in format `"hh:mm:ss"` + +### Refresh Behavior + +- Refresh is initiated asynchronously (HTTP 202 response) +- Deployment continues while refresh runs in background +- If refresh fails, deployment does not fail (warning is logged) +- Use Power BI workspace or API to monitor refresh status + +### Complete Example + +Here's a complete `parameter.yml` example combining binding and refresh: + +```yaml +# Bind semantic models to connections +semantic_model_binding: + - connection_id: "abc123-guid-here" + semantic_model_name: "Sales Model" + +# Refresh models after deployment +semantic_model_refresh: + - semantic_model_name: "Sales Model" + refresh_payload: + type: "full" + commitMode: "transactional" +``` + +## Best Practices + +1. **Always test destructive changes** in a development environment first +2. **Use XMLA endpoint** for clearing values before deploying destructive changes to production +3. **Monitor refresh status** after deployment to ensure data is current +4. **Use incremental refresh** where possible to minimize processing time +5. **Document schema changes** in your commit messages or pull requests +6. 
**Consider using separate models** for frequently changing schemas to isolate impact + +## Prerequisites + +- **XMLA Endpoint Access**: Premium, Premium Per User, or Embedded capacity +- **Permissions**: Contributor or Admin role on the workspace +- **Authentication**: Azure AD credentials with appropriate permissions + +## Troubleshooting + +### "Cannot connect to XMLA endpoint" + +- Verify your workspace is in a Premium capacity +- Check that XMLA read-write is enabled (Workspace Settings → Premium) +- Ensure your credentials have appropriate permissions + +### "Refresh fails after deployment" + +- Check data source connectivity +- Verify credentials and connections are properly configured +- Review refresh errors in the Power BI workspace refresh history + +### "Still getting destructive change error after clearing values" + +- Ensure you cleared values for the correct semantic model +- Verify the TMSL command executed successfully +- Try using Option 2 (delete and recreate) instead + +## Additional Resources + +- [Power BI REST API - Refresh Dataset](https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/refresh-dataset) +- [TMSL Reference - Refresh Command](https://learn.microsoft.com/en-us/analysis-services/tmsl/refresh-command-tmsl) +- [XMLA Endpoint Documentation](https://learn.microsoft.com/en-us/power-bi/enterprise/service-premium-connect-tools) +- [Enhanced Refresh API](https://learn.microsoft.com/en-us/power-bi/connect-data/asynchronous-refresh) diff --git a/src/fabric_cicd/_items/_semanticmodel.py b/src/fabric_cicd/_items/_semanticmodel.py index 799b25d4..115fd36a 100644 --- a/src/fabric_cicd/_items/_semanticmodel.py +++ b/src/fabric_cicd/_items/_semanticmodel.py @@ -4,6 +4,7 @@ """Functions to process and deploy Semantic Model item.""" import logging +from typing import Optional from fabric_cicd import FabricWorkspace, constants @@ -19,13 +20,27 @@ def publish_semanticmodels(fabric_workspace_obj: FabricWorkspace) -> None: """ item_type = 
"SemanticModel" + # Check if destructive change detection is enabled + enable_destructive_change_detection = "enable_semantic_model_destructive_change_detection" in constants.FEATURE_FLAG + for item_name in fabric_workspace_obj.repository_items.get(item_type, {}): exclude_path = r".*\.pbi[/\\].*" - fabric_workspace_obj._publish_item(item_name=item_name, item_type=item_type, exclude_path=exclude_path) + if enable_destructive_change_detection: + _publish_semanticmodel_with_retry( + fabric_workspace_obj=fabric_workspace_obj, + item_name=item_name, + item_type=item_type, + exclude_path=exclude_path, + ) + else: + # Use standard publish without destructive change detection + fabric_workspace_obj._publish_item(item_name=item_name, item_type=item_type, exclude_path=exclude_path) model_with_binding_dict = fabric_workspace_obj.environment_parameter.get("semantic_model_binding", []) if not model_with_binding_dict: + # Check if semantic model refresh is configured + _refresh_semanticmodels_if_configured(fabric_workspace_obj) return # Build connection mapping from semantic_model_binding parameter @@ -48,6 +63,9 @@ def publish_semanticmodels(fabric_workspace_obj: FabricWorkspace) -> None: fabric_workspace_obj=fabric_workspace_obj, connections=connections, connection_details=binding_mapping ) + # Refresh semantic models after binding if configured + _refresh_semanticmodels_if_configured(fabric_workspace_obj) + def get_connections(fabric_workspace_obj: FabricWorkspace) -> dict: """ @@ -176,3 +194,215 @@ def build_request_body(body: dict) -> dict: }, } } + + +def _is_destructive_change_error(error_message: str, error_code: Optional[str] = None) -> bool: + """ + Check if an error indicates a destructive change that requires data purge. 
+ + Args: + error_message: The error message from the API response + error_code: Optional error code from the API response + + Returns: + True if the error indicates destructive changes, False otherwise + """ + # Check for known destructive change error codes + destructive_error_codes = [ + "Alm_InvalidRequest_PurgeRequired", + "PurgeRequired", + ] + + if error_code and error_code in destructive_error_codes: + return True + + # Check for destructive change keywords in error message + destructive_keywords = [ + "purge required", + "data deletion", + "destructive change", + "will cause loss of data", + "requires data to be dropped", + ] + + # Validate error_message is a string before processing + if not isinstance(error_message, str): + return False + + error_message_lower = error_message.lower() + return any(keyword in error_message_lower for keyword in destructive_keywords) + + +def _publish_semanticmodel_with_retry( + fabric_workspace_obj: FabricWorkspace, + item_name: str, + item_type: str, + exclude_path: str, +) -> None: + """ + Publishes a semantic model with detection of destructive changes. + + This function attempts to publish a semantic model. If it fails due to destructive + changes requiring data purge, it logs a detailed warning with guidance on how to + resolve the issue, then re-raises the error; no automatic retry is attempted. + + Args: + fabric_workspace_obj: The FabricWorkspace object + item_name: Name of the semantic model to publish + item_type: Type of the item (SemanticModel) + exclude_path: Regex string of paths to exclude + """ + try: + # Try to publish the semantic model normally + fabric_workspace_obj._publish_item(item_name=item_name, item_type=item_type, exclude_path=exclude_path) + except Exception as e: + error_message = str(e) + + # Check if this is a destructive change error + if _is_destructive_change_error(error_message): + logger.warning( + f"Semantic model '{item_name}' deployment failed due to destructive changes that require data purge." 
+ ) + logger.warning( + "Destructive changes include operations like: removing columns, changing data types, " + "altering partition definitions, or removing hierarchies." + ) + logger.warning( + "To resolve this issue, you have the following options:\n" + " 1. Use external tools to clear values before deployment:\n" + " - Connect via XMLA endpoint (e.g., using Tabular Editor or SSMS)\n" + " - Execute TMSL command: {'refresh': {'type': 'clearValues', 'objects': [...]}}\n" + " 2. Manually delete and recreate the semantic model in the target workspace\n" + " 3. Review the schema changes and revert incompatible modifications" + ) + logger.error(f"Full error details: {error_message}") + + # Re-raise the exception so deployment fails visibly + raise + + +def _refresh_semanticmodels_if_configured(fabric_workspace_obj: FabricWorkspace) -> None: + """ + Refresh semantic models if configured in parameter file. + + Checks for 'semantic_model_refresh' parameter and refreshes models accordingly. + Requires 'enable_semantic_model_refresh' feature flag to be enabled. 
+ + Args: + fabric_workspace_obj: The FabricWorkspace object + """ + # Check if semantic model refresh feature is enabled + if "enable_semantic_model_refresh" not in constants.FEATURE_FLAG: + return + + refresh_config = fabric_workspace_obj.environment_parameter.get("semantic_model_refresh") + + if not refresh_config: + return + + item_type = "SemanticModel" + + # Get list of semantic models to refresh + models_to_refresh = [] + + # Check if refresh_config is a list (multiple model configurations) + if isinstance(refresh_config, list): + for config in refresh_config: + model_names = config.get("semantic_model_name", []) + if isinstance(model_names, str): + model_names = [model_names] + + refresh_payload = config.get("refresh_payload") + + for model_name in model_names: + models_to_refresh.append({"name": model_name, "payload": refresh_payload}) + # Single model configuration as dict + elif isinstance(refresh_config, dict): + model_names = refresh_config.get("semantic_model_name", []) + if isinstance(model_names, str): + model_names = [model_names] + + refresh_payload = refresh_config.get("refresh_payload") + + for model_name in model_names: + models_to_refresh.append({"name": model_name, "payload": refresh_payload}) + + # Refresh each model + for model_config in models_to_refresh: + model_name = model_config["name"] + custom_payload = model_config.get("payload") + + # Check if this semantic model exists in the repository + if model_name not in fabric_workspace_obj.repository_items.get(item_type, {}): + logger.warning(f"Semantic model '{model_name}' not found in repository, skipping refresh") + continue + + # Get the semantic model object + item_obj = fabric_workspace_obj.repository_items[item_type][model_name] + model_id = item_obj.guid + + if not model_id: + logger.warning(f"Semantic model '{model_name}' has no GUID, skipping refresh") + continue + + _refresh_semanticmodel( + fabric_workspace_obj=fabric_workspace_obj, + model_name=model_name, + model_id=model_id, 
+ custom_payload=custom_payload, + ) + + +def _refresh_semanticmodel( + fabric_workspace_obj: FabricWorkspace, + model_name: str, + model_id: str, + custom_payload: Optional[dict] = None, +) -> None: + """ + Refresh a semantic model using Power BI REST API. + + Args: + fabric_workspace_obj: The FabricWorkspace object + model_name: Name of the semantic model + model_id: GUID of the semantic model + custom_payload: Optional custom refresh payload. If None, a default full refresh is performed. + """ + logger.info(f"Refreshing semantic model '{model_name}' (ID: {model_id})") + + # Build the refresh payload + if custom_payload: + refresh_body = custom_payload + logger.debug(f"Using custom refresh payload for '{model_name}': {refresh_body}") + else: + # Default to a full refresh when no custom payload is provided + refresh_body = {"type": "full"} + logger.debug(f"Using default full refresh for '{model_name}'") + + try: + # Use Power BI API for dataset refresh + # https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/refresh-dataset + refresh_url = ( + f"{constants.DEFAULT_API_ROOT_URL}/v1.0/myorg/groups/" + f"{fabric_workspace_obj.workspace_id}/datasets/{model_id}/refreshes" + ) + + refresh_response = fabric_workspace_obj.endpoint.invoke( + method="POST", + url=refresh_url, + body=refresh_body, + ) + + status_code = refresh_response.get("status_code") + + if status_code == 202: + logger.info(f"{constants.INDENT}Refresh initiated successfully for '{model_name}'") + # Note: 202 means the refresh has been accepted and is running asynchronously + elif status_code == 200: + logger.info(f"{constants.INDENT}Refresh completed for '{model_name}'") + else: + logger.warning(f"{constants.INDENT}Unexpected status code for refresh: {status_code}") + + except Exception as e: + logger.error(f"Failed to refresh semantic model '{model_name}': {e!s}") + # Don't re-raise - we want deployment to continue even if refresh fails diff --git a/tests/test_semanticmodel.py 
b/tests/test_semanticmodel.py new file mode 100644 index 00000000..b1decbf5 --- /dev/null +++ b/tests/test_semanticmodel.py @@ -0,0 +1,656 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. + +"""Tests for semantic model deployment with destructive changes and refresh functionality.""" + +import json +import tempfile +from pathlib import Path +from unittest.mock import MagicMock, patch + +import pytest + +from fabric_cicd import constants +from fabric_cicd._items._semanticmodel import ( + _is_destructive_change_error, + _publish_semanticmodel_with_retry, + _refresh_semanticmodel, + _refresh_semanticmodels_if_configured, + publish_semanticmodels, +) +from fabric_cicd.fabric_workspace import FabricWorkspace + + +@pytest.fixture +def mock_endpoint(): + """Mock FabricEndpoint to avoid real API calls.""" + mock = MagicMock() + + def mock_invoke(method, url, **_kwargs): + if method == "GET" and "workspaces" in url and not url.endswith("/items"): + return {"body": {"value": [], "capacityId": "test-capacity"}} + if method == "GET" and url.endswith("/items"): + return {"body": {"value": []}} + if method == "POST" and url.endswith("/folders"): + return {"body": {"id": "mock-folder-id"}} + if method == "POST" and url.endswith("/items"): + return {"body": {"id": "mock-item-id", "workspaceId": "mock-workspace-id"}} + if method == "POST" and "refreshes" in url: + return {"body": {}, "status_code": 202} + return {"body": {"value": [], "capacityId": "test-capacity"}} + + mock.invoke.side_effect = mock_invoke + mock.upn_auth = True + return mock + + +@pytest.fixture(autouse=True) +def clear_feature_flags(): + """Clear feature flags before and after each test.""" + original_flags = constants.FEATURE_FLAG.copy() + constants.FEATURE_FLAG.clear() + yield + constants.FEATURE_FLAG.clear() + constants.FEATURE_FLAG.update(original_flags) + + +def test_is_destructive_change_error_with_error_code(): + """Test detection of destructive change errors by error code.""" 
+ assert _is_destructive_change_error("Some error message", "Alm_InvalidRequest_PurgeRequired") + assert _is_destructive_change_error("Some error message", "PurgeRequired") + assert not _is_destructive_change_error("Some error message", "OtherErrorCode") + + +def test_is_destructive_change_error_with_keywords(): + """Test detection of destructive change errors by keywords in message.""" + assert _is_destructive_change_error("Dataset changes will cause loss of data and purge required") + assert _is_destructive_change_error("This operation requires data to be dropped") + assert _is_destructive_change_error("Destructive change detected") + assert _is_destructive_change_error("Data deletion is required") + assert not _is_destructive_change_error("Normal deployment error") + assert not _is_destructive_change_error("Invalid request") + + +def test_is_destructive_change_error_case_insensitive(): + """Test that destructive change detection is case-insensitive.""" + assert _is_destructive_change_error("PURGE REQUIRED for this update") + assert _is_destructive_change_error("Will Cause LOSS OF DATA") + + +def test_is_destructive_change_error_no_message(): + """Test handling of None or empty error messages.""" + assert not _is_destructive_change_error(None) + assert not _is_destructive_change_error("") + # Test non-string types + assert not _is_destructive_change_error(123) # type: ignore + assert not _is_destructive_change_error([]) # type: ignore + assert not _is_destructive_change_error({}) # type: ignore + + +def test_publish_semanticmodel_with_retry_success(mock_endpoint): + """Test successful semantic model deployment without retry.""" + with tempfile.TemporaryDirectory() as temp_dir: + temp_path = Path(temp_dir) + + # Create a semantic model item + model_dir = temp_path / "TestModel.SemanticModel" + model_dir.mkdir(parents=True, exist_ok=True) + + platform_file = model_dir / ".platform" + metadata = { + "metadata": { + "type": "SemanticModel", + "displayName": "Test 
Model", + "description": "Test semantic model", + }, + "config": {"logicalId": "test-model-id"}, + } + + with platform_file.open("w", encoding="utf-8") as f: + json.dump(metadata, f) + + with (model_dir / "model.bim").open("w", encoding="utf-8") as f: + f.write('{"name": "TestModel"}') + + with ( + patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint), + patch.object( + FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {}) + ), + patch.object( + FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, "deployed_folders", {}) + ), + ): + workspace = FabricWorkspace( + workspace_id="12345678-1234-5678-abcd-1234567890ab", + repository_directory=str(temp_path), + item_type_in_scope=["SemanticModel"], + ) + + # Refresh repository items to populate the workspace + workspace._refresh_repository_items() + + # Should not raise any exception + _publish_semanticmodel_with_retry( + fabric_workspace_obj=workspace, + item_name="Test Model", + item_type="SemanticModel", + exclude_path=r".*\.pbi[/\\].*", + ) + + +def test_publish_semanticmodel_with_retry_destructive_error(): + """Test semantic model deployment with destructive change error.""" + with tempfile.TemporaryDirectory() as temp_dir: + temp_path = Path(temp_dir) + + # Create a semantic model item + model_dir = temp_path / "TestModel.SemanticModel" + model_dir.mkdir(parents=True, exist_ok=True) + + platform_file = model_dir / ".platform" + metadata = { + "metadata": { + "type": "SemanticModel", + "displayName": "Test Model", + "description": "Test semantic model", + }, + "config": {"logicalId": "test-model-id"}, + } + + with platform_file.open("w", encoding="utf-8") as f: + json.dump(metadata, f) + + with (model_dir / "model.bim").open("w", encoding="utf-8") as f: + f.write('{"name": "TestModel"}') + + # Mock endpoint to raise destructive change error + mock_endpoint_with_error = MagicMock() + + def mock_invoke_with_error(method, url, 
**_kwargs): + if method == "POST" and "/items" in url and "/updateDefinition" not in url: + error_msg = "Alm_InvalidRequest_PurgeRequired - Dataset changes will cause loss of data" + raise Exception(error_msg) + if method == "GET" and "workspaces" in url and not url.endswith("/items"): + return {"body": {"value": [], "capacityId": "test-capacity"}} + if method == "GET" and url.endswith("/items"): + return {"body": {"value": []}} + return {"body": {"value": []}} + + mock_endpoint_with_error.invoke.side_effect = mock_invoke_with_error + mock_endpoint_with_error.upn_auth = True + + with ( + patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint_with_error), + patch.object( + FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {}) + ), + patch.object( + FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, "deployed_folders", {}) + ), + ): + workspace = FabricWorkspace( + workspace_id="12345678-1234-5678-abcd-1234567890ab", + repository_directory=str(temp_path), + item_type_in_scope=["SemanticModel"], + ) + + # Refresh repository items to populate the workspace + workspace._refresh_repository_items() + + # Should raise the exception after logging warnings + with pytest.raises(Exception, match="Alm_InvalidRequest_PurgeRequired"): + _publish_semanticmodel_with_retry( + fabric_workspace_obj=workspace, + item_name="Test Model", + item_type="SemanticModel", + exclude_path=r".*\.pbi[/\\].*", + ) + + +def test_refresh_semanticmodel_default_payload(mock_endpoint): + """Test semantic model refresh with default payload.""" + with tempfile.TemporaryDirectory() as temp_dir: + temp_path = Path(temp_dir) + + with ( + patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint), + patch.object( + FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {}) + ), + patch.object( + FabricWorkspace, "_refresh_deployed_folders", new=lambda self: 
setattr(self, "deployed_folders", {}) + ), + ): + workspace = FabricWorkspace( + workspace_id="12345678-1234-5678-abcd-1234567890ab", + repository_directory=str(temp_path), + item_type_in_scope=["SemanticModel"], + ) + + # Should not raise any exception + _refresh_semanticmodel( + fabric_workspace_obj=workspace, + model_name="Test Model", + model_id="test-model-guid", + custom_payload=None, + ) + + +def test_refresh_semanticmodel_custom_payload(mock_endpoint): + """Test semantic model refresh with custom payload.""" + with tempfile.TemporaryDirectory() as temp_dir: + temp_path = Path(temp_dir) + + custom_payload = { + "type": "full", + "objects": [{"table": "Sales"}, {"table": "Products", "partition": "Products-2024"}], + "commitMode": "transactional", + } + + with ( + patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint), + patch.object( + FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {}) + ), + patch.object( + FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, "deployed_folders", {}) + ), + ): + workspace = FabricWorkspace( + workspace_id="12345678-1234-5678-abcd-1234567890ab", + repository_directory=str(temp_path), + item_type_in_scope=["SemanticModel"], + ) + + # Should not raise any exception + _refresh_semanticmodel( + fabric_workspace_obj=workspace, + model_name="Test Model", + model_id="test-model-guid", + custom_payload=custom_payload, + ) + + +def test_refresh_semanticmodels_if_configured_no_config(mock_endpoint): + """Test that refresh is skipped when not configured.""" + with tempfile.TemporaryDirectory() as temp_dir: + temp_path = Path(temp_dir) + + with ( + patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint), + patch.object( + FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {}) + ), + patch.object( + FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, 
"deployed_folders", {})
+            ),
+        ):
+            workspace = FabricWorkspace(
+                workspace_id="12345678-1234-5678-abcd-1234567890ab",
+                repository_directory=str(temp_path),
+                item_type_in_scope=["SemanticModel"],
+            )
+
+            # No environment_parameter for refresh
+            workspace.environment_parameter = {}
+
+            # Should not raise any exception and should not attempt refresh
+            _refresh_semanticmodels_if_configured(workspace)
+
+
+def test_refresh_semanticmodels_if_configured_single_model(mock_endpoint):
+    """Test semantic model refresh with single model configuration."""
+    with tempfile.TemporaryDirectory() as temp_dir:
+        temp_path = Path(temp_dir)
+
+        # Create a semantic model item
+        model_dir = temp_path / "TestModel.SemanticModel"
+        model_dir.mkdir(parents=True, exist_ok=True)
+
+        platform_file = model_dir / ".platform"
+        metadata = {
+            "metadata": {
+                "type": "SemanticModel",
+                "displayName": "Test Model",
+                "description": "Test semantic model",
+            },
+            "config": {"logicalId": "test-model-id"},
+        }
+
+        with platform_file.open("w", encoding="utf-8") as f:
+            json.dump(metadata, f)
+
+        with (model_dir / "model.bim").open("w", encoding="utf-8") as f:
+            f.write('{"name": "TestModel"}')
+
+        with (
+            patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {})
+            ),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, "deployed_folders", {})
+            ),
+        ):
+            workspace = FabricWorkspace(
+                workspace_id="12345678-1234-5678-abcd-1234567890ab",
+                repository_directory=str(temp_path),
+                item_type_in_scope=["SemanticModel"],
+            )
+
+            # Set up repository items with a GUID
+            workspace._refresh_repository_items()
+            workspace.repository_items["SemanticModel"]["Test Model"].guid = "test-model-guid"
+
+            # Configure refresh
+            workspace.environment_parameter = {
+                "semantic_model_refresh": {"semantic_model_name": "Test Model", "refresh_payload": {"type": "full"}}
+            }
+
+            # Enable the refresh feature flag
+            constants.FEATURE_FLAG.add("enable_semantic_model_refresh")
+
+            # Should not raise any exception
+            _refresh_semanticmodels_if_configured(workspace)
+
+
+def test_refresh_semanticmodels_if_configured_multiple_models(mock_endpoint):
+    """Test semantic model refresh with multiple models configuration."""
+    with tempfile.TemporaryDirectory() as temp_dir:
+        temp_path = Path(temp_dir)
+
+        # Create two semantic model items
+        for model_name in ["Model1", "Model2"]:
+            model_dir = temp_path / f"{model_name}.SemanticModel"
+            model_dir.mkdir(parents=True, exist_ok=True)
+
+            platform_file = model_dir / ".platform"
+            metadata = {
+                "metadata": {
+                    "type": "SemanticModel",
+                    "displayName": model_name,
+                    "description": f"Test {model_name}",
+                },
+                "config": {"logicalId": f"{model_name.lower()}-id"},
+            }
+
+            with platform_file.open("w", encoding="utf-8") as f:
+                json.dump(metadata, f)
+
+            with (model_dir / "model.bim").open("w", encoding="utf-8") as f:
+                f.write(f'{{"name": "{model_name}"}}')
+
+        with (
+            patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {})
+            ),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, "deployed_folders", {})
+            ),
+        ):
+            workspace = FabricWorkspace(
+                workspace_id="12345678-1234-5678-abcd-1234567890ab",
+                repository_directory=str(temp_path),
+                item_type_in_scope=["SemanticModel"],
+            )
+
+            # Set up repository items with GUIDs
+            workspace._refresh_repository_items()
+            workspace.repository_items["SemanticModel"]["Model1"].guid = "model1-guid"
+            workspace.repository_items["SemanticModel"]["Model2"].guid = "model2-guid"
+
+            # Configure refresh for multiple models
+            workspace.environment_parameter = {
+                "semantic_model_refresh": [
+                    {"semantic_model_name": "Model1", "refresh_payload": {"type": "full"}},
+                    {"semantic_model_name": ["Model2"], "refresh_payload": None},
+                ]
+            }
+
+            # Enable the refresh feature flag
+            constants.FEATURE_FLAG.add("enable_semantic_model_refresh")
+
+            # Should not raise any exception
+            _refresh_semanticmodels_if_configured(workspace)
+
+
+def test_publish_semanticmodels_with_refresh(mock_endpoint):
+    """Test full semantic model publishing with refresh configuration."""
+    with tempfile.TemporaryDirectory() as temp_dir:
+        temp_path = Path(temp_dir)
+
+        # Create a semantic model item
+        model_dir = temp_path / "TestModel.SemanticModel"
+        model_dir.mkdir(parents=True, exist_ok=True)
+
+        platform_file = model_dir / ".platform"
+        metadata = {
+            "metadata": {
+                "type": "SemanticModel",
+                "displayName": "Test Model",
+                "description": "Test semantic model",
+            },
+            "config": {"logicalId": "test-model-id"},
+        }
+
+        with platform_file.open("w", encoding="utf-8") as f:
+            json.dump(metadata, f)
+
+        with (model_dir / "model.bim").open("w", encoding="utf-8") as f:
+            f.write('{"name": "TestModel"}')
+
+        with (
+            patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {})
+            ),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, "deployed_folders", {})
+            ),
+        ):
+            workspace = FabricWorkspace(
+                workspace_id="12345678-1234-5678-abcd-1234567890ab",
+                repository_directory=str(temp_path),
+                item_type_in_scope=["SemanticModel"],
+            )
+
+            # Set up repository items with a GUID
+            workspace._refresh_repository_items()
+            workspace.repository_items["SemanticModel"]["Test Model"].guid = "test-model-guid"
+
+            # Configure refresh
+            workspace.environment_parameter = {
+                "semantic_model_refresh": {"semantic_model_name": "Test Model", "refresh_payload": None}
+            }
+
+            # Enable the refresh feature flag
+            constants.FEATURE_FLAG.add("enable_semantic_model_refresh")
+
+            # Should not raise any exception
+            publish_semanticmodels(workspace)
+
+
+def test_refresh_disabled_without_feature_flag(mock_endpoint):
+    """Test that refresh is skipped when feature flag is not enabled."""
+    with tempfile.TemporaryDirectory() as temp_dir:
+        temp_path = Path(temp_dir)
+
+        # Create a semantic model item
+        model_dir = temp_path / "TestModel.SemanticModel"
+        model_dir.mkdir(parents=True, exist_ok=True)
+
+        platform_file = model_dir / ".platform"
+        metadata = {
+            "metadata": {
+                "type": "SemanticModel",
+                "displayName": "Test Model",
+                "description": "Test semantic model",
+            },
+            "config": {"logicalId": "test-model-id"},
+        }
+
+        with platform_file.open("w", encoding="utf-8") as f:
+            json.dump(metadata, f)
+
+        with (model_dir / "model.bim").open("w", encoding="utf-8") as f:
+            f.write('{"name": "TestModel"}')
+
+        with (
+            patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {})
+            ),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, "deployed_folders", {})
+            ),
+        ):
+            workspace = FabricWorkspace(
+                workspace_id="12345678-1234-5678-abcd-1234567890ab",
+                repository_directory=str(temp_path),
+                item_type_in_scope=["SemanticModel"],
+            )
+
+            workspace._refresh_repository_items()
+            workspace.repository_items["SemanticModel"]["Test Model"].guid = "test-model-guid"
+
+            # Configure refresh but DON'T enable the feature flag
+            workspace.environment_parameter = {
+                "semantic_model_refresh": {"semantic_model_name": "Test Model", "refresh_payload": None}
+            }
+
+            # Should not call refresh because feature flag is not enabled
+            _refresh_semanticmodels_if_configured(workspace)
+
+            # Verify no refresh was attempted (would have logged if attempted)
+
+
+def test_destructive_change_detection_with_feature_flag():
+    """Test destructive change detection when feature flag is enabled."""
+    with tempfile.TemporaryDirectory() as temp_dir:
+        temp_path = Path(temp_dir)
+
+        # Create a semantic model item
+        model_dir = temp_path / "TestModel.SemanticModel"
+        model_dir.mkdir(parents=True, exist_ok=True)
+
+        platform_file = model_dir / ".platform"
+        metadata = {
+            "metadata": {
+                "type": "SemanticModel",
+                "displayName": "Test Model",
+                "description": "Test semantic model",
+            },
+            "config": {"logicalId": "test-model-id"},
+        }
+
+        with platform_file.open("w", encoding="utf-8") as f:
+            json.dump(metadata, f)
+
+        with (model_dir / "model.bim").open("w", encoding="utf-8") as f:
+            f.write('{"name": "TestModel"}')
+
+        # Mock endpoint to raise destructive change error
+        mock_endpoint_with_error = MagicMock()
+
+        def mock_invoke_with_error(method, url, **_kwargs):
+            if method == "POST" and "/items" in url and "/updateDefinition" not in url:
+                error_msg = "Alm_InvalidRequest_PurgeRequired - Dataset changes will cause loss of data"
+                raise Exception(error_msg)
+            if method == "GET" and "workspaces" in url and not url.endswith("/items"):
+                return {"body": {"value": [], "capacityId": "test-capacity"}}
+            if method == "GET" and url.endswith("/items"):
+                return {"body": {"value": []}}
+            return {"body": {"value": []}}
+
+        mock_endpoint_with_error.invoke.side_effect = mock_invoke_with_error
+        mock_endpoint_with_error.upn_auth = True
+
+        with (
+            patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint_with_error),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {})
+            ),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, "deployed_folders", {})
+            ),
+        ):
+            workspace = FabricWorkspace(
+                workspace_id="12345678-1234-5678-abcd-1234567890ab",
+                repository_directory=str(temp_path),
+                item_type_in_scope=["SemanticModel"],
+            )
+
+            workspace._refresh_repository_items()
+
+            # Enable the destructive change detection feature flag
+            constants.FEATURE_FLAG.add("enable_semantic_model_destructive_change_detection")
+
+            # Should raise with guidance when feature flag is enabled
+            with pytest.raises(Exception, match="Alm_InvalidRequest_PurgeRequired"):
+                publish_semanticmodels(workspace)
+
+
+def test_destructive_change_detection_without_feature_flag():
+    """Test that destructive change detection is skipped when feature flag is not enabled."""
+    with tempfile.TemporaryDirectory() as temp_dir:
+        temp_path = Path(temp_dir)
+
+        # Create a semantic model item
+        model_dir = temp_path / "TestModel.SemanticModel"
+        model_dir.mkdir(parents=True, exist_ok=True)
+
+        platform_file = model_dir / ".platform"
+        metadata = {
+            "metadata": {
+                "type": "SemanticModel",
+                "displayName": "Test Model",
+                "description": "Test semantic model",
+            },
+            "config": {"logicalId": "test-model-id"},
+        }
+
+        with platform_file.open("w", encoding="utf-8") as f:
+            json.dump(metadata, f)
+
+        with (model_dir / "model.bim").open("w", encoding="utf-8") as f:
+            f.write('{"name": "TestModel"}')
+
+        # Mock endpoint to raise destructive change error
+        mock_endpoint_with_error = MagicMock()
+
+        def mock_invoke_with_error(method, url, **_kwargs):
+            if method == "POST" and "/items" in url and "/updateDefinition" not in url:
+                error_msg = "Alm_InvalidRequest_PurgeRequired - Dataset changes will cause loss of data"
+                raise Exception(error_msg)
+            if method == "GET" and "workspaces" in url and not url.endswith("/items"):
+                return {"body": {"value": [], "capacityId": "test-capacity"}}
+            if method == "GET" and url.endswith("/items"):
+                return {"body": {"value": []}}
+            return {"body": {"value": []}}
+
+        mock_endpoint_with_error.invoke.side_effect = mock_invoke_with_error
+        mock_endpoint_with_error.upn_auth = True
+
+        with (
+            patch("fabric_cicd.fabric_workspace.FabricEndpoint", return_value=mock_endpoint_with_error),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_items", new=lambda self: setattr(self, "deployed_items", {})
+            ),
+            patch.object(
+                FabricWorkspace, "_refresh_deployed_folders", new=lambda self: setattr(self, "deployed_folders", {})
+            ),
+        ):
+            workspace = FabricWorkspace(
+                workspace_id="12345678-1234-5678-abcd-1234567890ab",
+                repository_directory=str(temp_path),
+                item_type_in_scope=["SemanticModel"],
+            )
+
+            workspace._refresh_repository_items()
+
+            # DON'T enable the feature flag - should use standard deployment
+            # Should raise exception but WITHOUT guidance messages
+            with pytest.raises(Exception, match="Alm_InvalidRequest_PurgeRequired"):
+                publish_semanticmodels(workspace)