Description:
I'm encountering an issue where some connection parameters available in the standard Snowflake JDBC (and Python connector) are missing in the spark-snowflake connector. In particular, the `host` parameter is not available.

Current Behavior:
When configuring the connection with the spark-snowflake connector, the options available are as follows:
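A minimal PySpark sketch of how I configure the connection today (placeholder values; the option names are the connector's documented sfOptions) looks roughly like this:

```python
# Minimal sketch of the current spark-snowflake configuration (placeholder values).
# The endpoint comes entirely from sfURL; there is no separate "host" option.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-read").getOrCreate()

sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "my_user",
    "sfPassword": "my_password",
    "sfDatabase": "MY_DB",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "MY_WH",
}

df = (
    spark.read.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "MY_TABLE")
    .load()
)
```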
In contrast, the standard JDBC (Python) connection allows for a `host` parameter:
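For comparison, a rough sketch with the Python connector, which accepts `host` directly in `connect()` (placeholder values):

```python
# Sketch using the Snowflake Python connector, where an explicit host can be
# supplied alongside the account (placeholder values).
import snowflake.connector

conn = snowflake.connector.connect(
    host="my-internal-host.snowflakecomputing.com",  # explicit host override
    account="myaccount",
    user="my_user",
    password="my_password",
    warehouse="MY_WH",
    database="MY_DB",
    schema="PUBLIC",
)
```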
Impact:
I am running my Spark application inside a Snowpark container. With the standard spark-snowflake connector, my connection uses a public IP address to access Snowflake, which is blocked by our network policy. I would prefer for my container to be recognized as originating from Snowflake's network, something that appears possible when the `host` parameter is provided, as with the standard JDBC connector. Since the spark-snowflake connector uses the Snowflake JDBC under the hood, it seems that exposing this parameter would be a straightforward enhancement.

Expected Behavior:
It would be ideal if the spark-snowflake connector allowed the `host` parameter (and any similar missing parameters) to be passed through to the JDBC connector, similar to how it is done in the Python connector.
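For illustration only, a hypothetical pass-through could look like the snippet below; the `host` option shown there does not exist in the connector today and is exactly what this issue is asking for:

```python
# Hypothetical: pass the host through to the underlying JDBC driver.
# The "host" option below does NOT exist in spark-snowflake today; it is the
# enhancement this issue is requesting.
df = (
    spark.read.format("net.snowflake.spark.snowflake")
    .options(**sf_options)                                     # options as above
    .option("host", "my-internal-host.snowflakecomputing.com")
    .option("dbtable", "MY_TABLE")
    .load()
)
```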
Additional Information:
Relevant documentation (for `host` support): [Snowflake Python Connector Documentation](https://docs.snowflake.com/en/developer-guide/python-connector/python-connector-connect)

Could you please consider exposing this parameter in the connector configuration? This would greatly simplify our network configuration without requiring changes to our network policies.
I could create a PR myself. Please let me know if you are okay with this approach.