diff --git a/.gitignore b/.gitignore
index 9619e05675..13a9966441 100644
--- a/.gitignore
+++ b/.gitignore
@@ -17,3 +17,4 @@ src/ui/templates
 src/ui/builder
 src/ui/client/builder/*.json
 src/ui/client/builder/*.txt
+.vscode/settings.json
diff --git a/src/common/core/letsencrypt/README.md b/src/common/core/letsencrypt/README.md
index 053a202cec..9258e98436 100644
--- a/src/common/core/letsencrypt/README.md
+++ b/src/common/core/letsencrypt/README.md
@@ -1,31 +1,42 @@
-The Let's Encrypt plugin simplifies SSL/TLS certificate management by automating the creation, renewal, and configuration of free certificates from Let's Encrypt. This feature enables secure HTTPS connections for your websites without the complexity of manual certificate management, reducing both cost and administrative overhead.
+# Let's Encrypt Plugin
+
+The Let's Encrypt plugin simplifies SSL/TLS certificate management by automating the creation, renewal, and configuration of free certificates from multiple certificate authorities. This feature enables secure HTTPS connections for your websites without the complexity of manual certificate management, reducing both cost and administrative overhead.
 
 **How it works:**
 
 1. When enabled, BunkerWeb automatically detects the domains configured for your website.
-2. BunkerWeb requests free SSL/TLS certificates from Let's Encrypt's certificate authority.
+2. BunkerWeb requests free SSL/TLS certificates from supported certificate authorities (Let's Encrypt, ZeroSSL).
 3. Domain ownership is verified through either HTTP challenges (proving you control the website) or DNS challenges (proving you control your domain's DNS).
 4. Certificates are automatically installed and configured for your domains.
 5. BunkerWeb handles certificate renewals in the background before expiration, ensuring continuous HTTPS availability.
-6. The entire process is fully automated, requiring minimal intervention after the initial setup.
+6. The entire process is fully automated with intelligent retry mechanisms, requiring minimal intervention after the initial setup.
 
 !!! info "Prerequisites"
 
     To use this feature, ensure that proper DNS **A records** are configured for each domain, pointing to the public IP(s) where BunkerWeb is accessible. Without correct DNS configuration, the domain verification process will fail.
 
-### How to Use
+## How to Use
 
 Follow these steps to configure and use the Let's Encrypt feature:
 
 1. **Enable the feature:** Set the `AUTO_LETS_ENCRYPT` setting to `yes` to enable automatic certificate issuance and renewal.
-2. **Provide contact email:** Enter your email address using the `EMAIL_LETS_ENCRYPT` setting to receive important notifications about your certificates.
-3. **Choose challenge type:** Select either `http` or `dns` verification with the `LETS_ENCRYPT_CHALLENGE` setting.
-4. **Configure DNS provider:** If using DNS challenges, specify your DNS provider and credentials.
-5. **Select certificate profile:** Choose your preferred certificate profile using the `LETS_ENCRYPT_PROFILE` setting (classic, tlsserver, or shortlived).
-6. **Let BunkerWeb handle the rest:** Once configured, certificates are automatically issued, installed, and renewed as needed.
+2. **Choose certificate authority:** Select your preferred CA using `ACME_SSL_CA_PROVIDER` (`letsencrypt` or `zerossl`).
+3. **Provide contact email:** Enter your email address using the `EMAIL_LETS_ENCRYPT` setting to receive important notifications about your certificates.
+4. **Choose challenge type:** Select either `http` or `dns` verification with the `LETS_ENCRYPT_CHALLENGE` setting.
+5. **Configure DNS provider:** If using DNS challenges, specify your DNS provider and credentials.
+6. **Select certificate profile:** Choose your preferred certificate profile using the `LETS_ENCRYPT_PROFILE` setting (classic, tlsserver, or shortlived).
+7. **Configure API keys:** For ZeroSSL, provide your API key using `ACME_ZEROSSL_API_KEY`.
+8. **Let BunkerWeb handle the rest:** Once configured, certificates are automatically issued, installed, and renewed as needed with intelligent retry mechanisms.
+
+!!! tip "Certificate Authorities"
+    The plugin supports multiple certificate authorities:
+
+    - **Let's Encrypt**: Free, widely trusted, 90-day certificates
+    - **ZeroSSL**: Free alternative with competitive rate limits, supports EAB (External Account Binding)
+
+    ZeroSSL requires an API key for automated EAB credential generation. Without an API key, you can manually provide EAB credentials using `ACME_ZEROSSL_EAB_KID` and `ACME_ZEROSSL_EAB_HMAC_KEY`.
 
 !!! tip "Certificate Profiles"
-    Let's Encrypt provides different certificate profiles for different use cases:
+    Let's Encrypt and ZeroSSL provide different certificate profiles for different use cases:
 
-    - **classic**: General-purpose certificates with 90-day validity (default)
+    - **classic**: General-purpose certificates with 90-day validity (default, widest compatibility)
     - **tlsserver**: Optimized for TLS server authentication with 90-day validity and smaller payload
     - **shortlived**: Enhanced security with 7-day validity for automated environments
    - **custom**: If your ACME server supports a different profile, set it using `LETS_ENCRYPT_CUSTOM_PROFILE`.
@@ -33,29 +44,49 @@ Follow these steps to configure and use the Let's Encrypt feature:
 
 !!! info "Profile Availability"
     Note that the `tlsserver` and `shortlived` profiles may not be available in all environments or with all ACME clients at this time. The `classic` profile has the widest compatibility and is recommended for most users. If a selected profile is not available, the system will automatically fall back to the `classic` profile.
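The retry behaviour mentioned above follows a capped exponential backoff (the schedule of 30s, 60s, 120s up to a 300s cap is taken from the `LETS_ENCRYPT_MAX_RETRIES` setting description in this README). The sketch below is illustrative only — the function name and exact doubling rule are our assumptions, not the plugin's internal code:

```python
def backoff_delay(attempt: int, base: int = 30, cap: int = 300) -> int:
    # Delay in seconds before retry number `attempt` (1-based):
    # 30, 60, 120, ... doubling each time, capped at `cap` seconds.
    return min(base * 2 ** (attempt - 1), cap)


if __name__ == "__main__":
    # With LETS_ENCRYPT_MAX_RETRIES=3, three retries would wait 30s, 60s, 120s.
    print([backoff_delay(n) for n in range(1, 7)])
```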
-### Configuration Settings
+## Advanced Security Features
+
+The plugin includes several advanced security and validation features:
+
+- **Public Suffix List (PSL) Validation**: Automatically prevents certificate requests for domains on the PSL (controlled by `LETS_ENCRYPT_DISABLE_PUBLIC_SUFFIXES`)
+- **CAA Record Validation**: Checks DNS CAA records to ensure the selected certificate authority is authorized for your domains
+- **IP Address Validation**: For HTTP challenges, validates that domain DNS records point to your server's public IP addresses
+- **Retry Mechanisms**: Intelligent retry with exponential backoff for failed certificate generation attempts
+- **Certificate Key Types**: Supports both RSA and ECDSA keys with provider-specific optimizations
+
+## Configuration Settings
 
 | Setting | Default | Context | Multiple | Description |
 | ------- | ------- | ------- | -------- | ----------- |
 | `AUTO_LETS_ENCRYPT` | `no` | multisite | no | **Enable Let's Encrypt:** Set to `yes` to enable automatic certificate issuance and renewal. |
+| `ACME_SSL_CA_PROVIDER` | `letsencrypt` | multisite | no | **Certificate Authority:** Select certificate authority (`letsencrypt` or `zerossl`). |
+| `ACME_ZEROSSL_API_KEY` | | multisite | no | **ZeroSSL API Key:** Required for automated ZeroSSL certificate generation and EAB credential setup. |
+| `ACME_ZEROSSL_EAB_KID` | | multisite | no | **ZeroSSL EAB Key ID:** Manual EAB key identifier (alternative to API key). Used with `ACME_ZEROSSL_EAB_HMAC_KEY`. |
+| `ACME_ZEROSSL_EAB_HMAC_KEY` | | multisite | no | **ZeroSSL EAB HMAC Key:** Manual EAB HMAC key (alternative to API key). Used with `ACME_ZEROSSL_EAB_KID`. |
 | `LETS_ENCRYPT_PASSTHROUGH` | `no` | multisite | no | **Pass Through Let's Encrypt:** Set to `yes` to pass through Let's Encrypt requests to the web server. This is useful when BunkerWeb is behind another reverse proxy handling SSL. |
-| `EMAIL_LETS_ENCRYPT` | `contact@{FIRST_SERVER}` | multisite | no | **Contact Email:** Email address that is used for Let's Encrypt notifications and is included in certificates. |
+| `EMAIL_LETS_ENCRYPT` | `contact@{FIRST_SERVER}` | multisite | no | **Contact Email:** Email address that is used for certificate notifications and is included in certificates. |
 | `LETS_ENCRYPT_CHALLENGE` | `http` | multisite | no | **Challenge Type:** Method used to verify domain ownership. Options: `http` or `dns`. |
 | `LETS_ENCRYPT_DNS_PROVIDER` | | multisite | no | **DNS Provider:** When using DNS challenges, the DNS provider to use (e.g., cloudflare, route53, digitalocean). |
 | `LETS_ENCRYPT_DNS_PROPAGATION` | `default` | multisite | no | **DNS Propagation:** The time to wait for DNS propagation in seconds. If no value is provided, the provider's default propagation time is used. |
 | `LETS_ENCRYPT_DNS_CREDENTIAL_ITEM` | | multisite | yes | **Credential Item:** Configuration items for DNS provider authentication (e.g., `cloudflare_api_token 123456`). Values can be raw text, base64 encoded, or a JSON object. |
 | `USE_LETS_ENCRYPT_WILDCARD` | `no` | multisite | no | **Wildcard Certificates:** When set to `yes`, creates wildcard certificates for all domains. Only available with DNS challenges. |
-| `USE_LETS_ENCRYPT_STAGING` | `no` | multisite | no | **Use Staging:** When set to `yes`, uses Let's Encrypt's staging environment for testing. Staging has higher rate limits but produces certificates that are not trusted by browsers. |
+| `USE_LETS_ENCRYPT_STAGING` | `no` | multisite | no | **Use Staging:** When set to `yes`, uses the CA's staging environment for testing. Staging has higher rate limits but produces certificates that are not trusted by browsers. |
 | `LETS_ENCRYPT_CLEAR_OLD_CERTS` | `no` | global | no | **Clear Old Certificates:** When set to `yes`, removes old certificates that are no longer needed during renewal. |
 | `LETS_ENCRYPT_PROFILE` | `classic` | multisite | no | **Certificate Profile:** Select the certificate profile to use. Options: `classic` (general-purpose), `tlsserver` (optimized for TLS servers), or `shortlived` (7-day certificates). |
 | `LETS_ENCRYPT_CUSTOM_PROFILE` | | multisite | no | **Custom Certificate Profile:** Enter a custom certificate profile if your ACME server supports non-standard profiles. This overrides `LETS_ENCRYPT_PROFILE` if set. |
-| `LETS_ENCRYPT_MAX_RETRIES` | `3` | multisite | no | **Maximum Retries:** Number of times to retry certificate generation on failure. Set to `0` to disable retries. Useful for handling temporary network issues or API rate limits. |
+| `LETS_ENCRYPT_DISABLE_PUBLIC_SUFFIXES` | `yes` | multisite | no | **Disable Public Suffixes:** When set to `yes`, prevents certificate requests for domains matching the Public Suffix List (recommended for security). |
+| `LETS_ENCRYPT_MAX_RETRIES` | `0` | multisite | no | **Maximum Retries:** Number of times to retry certificate generation on failure. Set to `0` to disable retries. Uses exponential backoff (30s, 60s, 120s, etc. up to 300s max). |
+| `ACME_SKIP_CAA_CHECK` | `no` | global | no | **Skip CAA Validation:** Set to `yes` to skip DNS CAA record validation. Use with caution as CAA records provide important security controls. |
+| `ACME_HTTP_STRICT_IP_CHECK` | `no` | global | no | **Strict IP Check:** When set to `yes`, rejects HTTP challenge certificates if the domain's IP doesn't match the server's IP. Useful for preventing misconfigurations. |
 
 !!! info "Information and behavior"
     - The `LETS_ENCRYPT_DNS_CREDENTIAL_ITEM` setting is a multiple setting and can be used to set multiple items for the DNS provider. The items will be saved as a cache file, and Certbot will read the credentials from it.
     - If no `LETS_ENCRYPT_DNS_PROPAGATION` setting is provided, the provider's default propagation time is used.
     - Full Let's Encrypt automation using the `http` challenge works in stream mode as long as you open the `80/tcp` port from the outside. Use the `LISTEN_STREAM_PORT_SSL` setting to choose your listening SSL/TLS port.
-    - If `LETS_ENCRYPT_PASSTHROUGH` is set to `yes`, BunkerWeb will not handle the ACME challenge requests itself but will pass them to the backend web server. This is useful in scenarios where BunkerWeb is acting as a reverse proxy in front of another server that is configured to handle Let's Encrypt challenges
+    - If `LETS_ENCRYPT_PASSTHROUGH` is set to `yes`, BunkerWeb will not handle the ACME challenge requests itself but will pass them to the backend web server. This is useful in scenarios where BunkerWeb is acting as a reverse proxy in front of another server that is configured to handle Let's Encrypt challenges.
+    - The plugin automatically validates external IP addresses for HTTP challenges and can optionally enforce strict IP matching.
+    - CAA record validation ensures only authorized certificate authorities can issue certificates for your domains.
+    - Public Suffix List validation blocks certificate requests for bare public suffixes such as `.com` or `.co.uk`, for which certificates can never be issued.
 
 !!! tip "HTTP vs. DNS Challenges"
 
     **HTTP Challenges** are easier to set up and work well for most websites:
@@ -63,6 +94,7 @@ Follow these steps to configure and use the Let's Encrypt feature:
     - Requires your website to be publicly accessible on port 80
     - Automatically configured by BunkerWeb
     - Cannot be used for wildcard certificates
+    - Includes IP address validation for additional security
 
     **DNS Challenges** offer more flexibility and are required for wildcard certificates:
@@ -70,42 +102,52 @@ Follow these steps to configure and use the Let's Encrypt feature:
     - Requires DNS provider API credentials
    - Required for wildcard certificates (e.g., *.example.com)
     - Useful when port 80 is blocked or unavailable
+    - Supports advanced wildcard domain grouping
 
 !!! warning "Wildcard certificates"
     Wildcard certificates are only available with DNS challenges. If you want to use them, you must set the `USE_LETS_ENCRYPT_WILDCARD` setting to `yes` and properly configure your DNS provider credentials.
 
 !!! warning "Rate Limits"
-    Let's Encrypt imposes rate limits on certificate issuance. When testing configurations, use the staging environment by setting `USE_LETS_ENCRYPT_STAGING` to `yes` to avoid hitting production rate limits. Staging certificates are not trusted by browsers but are useful for validating your setup.
-
-### Supported DNS Providers
+    Certificate authorities impose rate limits on certificate issuance. When testing configurations, use the staging environment by setting `USE_LETS_ENCRYPT_STAGING` to `yes` to avoid hitting production rate limits. Staging certificates are not trusted by browsers but are useful for validating your setup.
+
+## Supported DNS Providers
 
 The Let's Encrypt plugin supports a wide range of DNS providers for DNS challenges. Each provider requires specific credentials that must be provided using the `LETS_ENCRYPT_DNS_CREDENTIAL_ITEM` setting.
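As noted in the settings table, a `LETS_ENCRYPT_DNS_CREDENTIAL_ITEM` value may be raw text, base64 encoded, or a JSON object. A minimal sketch of how such a value could be normalized — the function name and the try-JSON-then-base64-then-raw order are our assumptions for illustration, not the plugin's actual parser:

```python
import json
from base64 import b64decode


def parse_credential_value(value: str) -> str:
    # Keep JSON objects/values as-is so Certbot-style credential files
    # can embed structured data.
    try:
        json.loads(value)
        return value
    except ValueError:
        pass
    # Try strict base64; binascii.Error is a ValueError subclass.
    try:
        return b64decode(value, validate=True).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        # Fall back to raw text, e.g. "cloudflare_api_token 123456".
        return value


if __name__ == "__main__":
    print(parse_credential_value("dG9rZW4gMTIzNDU2"))  # base64 of "token 123456"
```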
 | Provider | Description | Mandatory Settings | Optional Settings | Documentation |
 | -------- | ----------- | ------------------ | ----------------- | ------------- |
-| `cloudflare` | Cloudflare | either `api_token`<br>or `email` and `api_key` | | [Documentation](https://certbot-dns-cloudflare.readthedocs.io/en/stable/) |
+| `cloudflare` | Cloudflare | either `api_token` or `email` and `api_key` | | [Documentation](https://certbot-dns-cloudflare.readthedocs.io/en/stable/) |
 | `desec` | deSEC | `token` | | [Documentation](https://github.com/desec-io/certbot-dns-desec/blob/main/README.md) |
 | `digitalocean` | DigitalOcean | `token` | | [Documentation](https://certbot-dns-digitalocean.readthedocs.io/en/stable/) |
 | `dnsimple` | DNSimple | `token` | | [Documentation](https://certbot-dns-dnsimple.readthedocs.io/en/stable/) |
-| `dnsmadeeasy` | DNS Made Easy | `api_key`<br>`secret_key` | | [Documentation](https://certbot-dns-dnsmadeeasy.readthedocs.io/en/stable/) |
-| `gehirn` | Gehirn DNS | `api_token`<br>`api_secret` | | [Documentation](https://certbot-dns-gehirn.readthedocs.io/en/stable/) |
-| `google` | Google Cloud | `project_id`<br>`private_key_id`<br>`private_key`<br>`client_email`<br>`client_id`<br>`client_x509_cert_url` | `type` (default: `service_account`)<br>`auth_uri` (default: `https://accounts.google.com/o/oauth2/auth`)<br>`token_uri` (default: `https://accounts.google.com/o/oauth2/token`)<br>`auth_provider_x509_cert_url` (default: `https://www.googleapis.com/oauth2/v1/certs`) | [Documentation](https://certbot-dns-google.readthedocs.io/en/stable/) |
+| `dnsmadeeasy` | DNS Made Easy | `api_key` `secret_key` | | [Documentation](https://certbot-dns-dnsmadeeasy.readthedocs.io/en/stable/) |
+| `gehirn` | Gehirn DNS | `api_token` `api_secret` | | [Documentation](https://certbot-dns-gehirn.readthedocs.io/en/stable/) |
+| `google` | Google Cloud | `project_id` `private_key_id` `private_key` `client_email` `client_id` `client_x509_cert_url` | `type` (default: `service_account`) `auth_uri` (default: `https://accounts.google.com/o/oauth2/auth`) `token_uri` (default: `https://accounts.google.com/o/oauth2/token`) `auth_provider_x509_cert_url` (default: `https://www.googleapis.com/oauth2/v1/certs`) | [Documentation](https://certbot-dns-google.readthedocs.io/en/stable/) |
 | `infomaniak` | Infomaniak | `token` | | [Documentation](https://github.com/infomaniak/certbot-dns-infomaniak) |
-| `ionos` | IONOS | `prefix`<br>`secret` | `endpoint` (default: `https://api.hosting.ionos.com`) | [Documentation](https://github.com/helgeerbe/certbot-dns-ionos/blob/master/README.md) |
+| `ionos` | IONOS | `prefix` `secret` | `endpoint` (default: `https://api.hosting.ionos.com`) | [Documentation](https://github.com/helgeerbe/certbot-dns-ionos/blob/master/README.md) |
 | `linode` | Linode | `key` | | [Documentation](https://certbot-dns-linode.readthedocs.io/en/stable/) |
-| `luadns` | LuaDNS | `email`<br>`token` | | [Documentation](https://certbot-dns-luadns.readthedocs.io/en/stable/) |
+| `luadns` | LuaDNS | `email` `token` | | [Documentation](https://certbot-dns-luadns.readthedocs.io/en/stable/) |
 | `njalla` | Njalla | `token` | | [Documentation](https://github.com/chaptergy/certbot-dns-njalla) |
 | `nsone` | NS1 | `api_key` | | [Documentation](https://certbot-dns-nsone.readthedocs.io/en/stable/) |
-| `ovh` | OVH | `application_key`<br>`application_secret`<br>`consumer_key` | `endpoint` (default: `ovh-eu`) | [Documentation](https://certbot-dns-ovh.readthedocs.io/en/stable/) |
-| `rfc2136` | RFC 2136 | `server`<br>`name`<br>`secret` | `port` (default: `53`)<br>`algorithm` (default: `HMAC-SHA512`)<br>`sign_query` (default: `false`) | [Documentation](https://certbot-dns-rfc2136.readthedocs.io/en/stable/) |
-| `route53` | Amazon Route 53 | `access_key_id`<br>`secret_access_key` | | [Documentation](https://certbot-dns-route53.readthedocs.io/en/stable/) |
-| `sakuracloud` | Sakura Cloud | `api_token`<br>`api_secret` | | [Documentation](https://certbot-dns-sakuracloud.readthedocs.io/en/stable/) |
+| `ovh` | OVH | `application_key` `application_secret` `consumer_key` | `endpoint` (default: `ovh-eu`) | [Documentation](https://certbot-dns-ovh.readthedocs.io/en/stable/) |
+| `rfc2136` | RFC 2136 | `server` `name` `secret` | `port` (default: `53`) `algorithm` (default: `HMAC-SHA512`) `sign_query` (default: `false`) | [Documentation](https://certbot-dns-rfc2136.readthedocs.io/en/stable/) |
+| `route53` | Amazon Route 53 | `access_key_id` `secret_access_key` | | [Documentation](https://certbot-dns-route53.readthedocs.io/en/stable/) |
+| `sakuracloud` | Sakura Cloud | `api_token` `api_secret` | | [Documentation](https://certbot-dns-sakuracloud.readthedocs.io/en/stable/) |
 | `scaleway` | Scaleway | `application_token` | | [Documentation](https://github.com/vanonox/certbot-dns-scaleway/blob/main/README.rst) |
 
-### Example Configurations
+## Certificate Key Types and Optimization
+
+The plugin automatically selects optimal certificate key types based on the certificate authority and DNS provider:
+
+- **ECDSA Keys**: Used by default for most providers
+  - Let's Encrypt: P-256 curve (secp256r1) for optimal performance
+  - ZeroSSL: P-384 curve (secp384r1) for enhanced security
+- **RSA Keys**: Used for specific providers that require them
+  - Infomaniak and IONOS: RSA-4096 for compatibility
 
-=== "Basic HTTP Challenge"
+## Example Configurations
+
+=== "Basic HTTP Challenge with Let's Encrypt"
 
     Simple configuration using HTTP challenges for a single domain:
@@ -113,6 +155,32 @@ The Let's Encrypt plugin supports a wide range of DNS providers for DNS challeng
     AUTO_LETS_ENCRYPT: "yes"
     EMAIL_LETS_ENCRYPT: "admin@example.com"
     LETS_ENCRYPT_CHALLENGE: "http"
+    ACME_SSL_CA_PROVIDER: "letsencrypt"
+    ```
+
+=== "ZeroSSL with API Key"
+
+    Configuration using ZeroSSL with automated EAB setup:
+
+    ```yaml
+    AUTO_LETS_ENCRYPT: "yes"
+    EMAIL_LETS_ENCRYPT: "admin@example.com"
+    LETS_ENCRYPT_CHALLENGE: "http"
+    ACME_SSL_CA_PROVIDER: "zerossl"
+    ACME_ZEROSSL_API_KEY: "your-zerossl-api-key"
+    ```
+
+=== "ZeroSSL with Manual EAB Credentials"
+
+    Configuration using ZeroSSL with manually provided EAB credentials:
+
+    ```yaml
+    AUTO_LETS_ENCRYPT: "yes"
+    EMAIL_LETS_ENCRYPT: "admin@example.com"
+    LETS_ENCRYPT_CHALLENGE: "http"
+    ACME_SSL_CA_PROVIDER: "zerossl"
+    ACME_ZEROSSL_EAB_KID: "your-eab-kid"
+    ACME_ZEROSSL_EAB_HMAC_KEY: "your-eab-hmac-key"
     ```
 
 === "Cloudflare DNS with Wildcard"
@@ -141,9 +209,24 @@ The Let's Encrypt plugin supports a wide range of DNS providers for DNS challeng
     LETS_ENCRYPT_DNS_CREDENTIAL_ITEM_2: "aws_secret_access_key YOUR_SECRET_KEY"
     ```
 
-=== "Testing with Staging Environment and Retries"
+=== "Production with Enhanced Security"
+
+    Configuration with all security features enabled and retry mechanisms:
+
+    ```yaml
+    AUTO_LETS_ENCRYPT: "yes"
+    EMAIL_LETS_ENCRYPT: "admin@example.com"
+    LETS_ENCRYPT_CHALLENGE: "http"
+    ACME_SSL_CA_PROVIDER: "letsencrypt"
+    LETS_ENCRYPT_DISABLE_PUBLIC_SUFFIXES: "yes"
+    ACME_HTTP_STRICT_IP_CHECK: "yes"
+    LETS_ENCRYPT_MAX_RETRIES: "3"
+    LETS_ENCRYPT_PROFILE: "tlsserver"
+    ```
+
+=== "Testing with Staging Environment"
 
-    Configuration for testing setup with the staging environment and enhanced retry settings:
+    Configuration for testing your setup with the staging environment:
 
     ```yaml
     AUTO_LETS_ENCRYPT: "yes"
@@ -182,3 +265,41 @@ The Let's Encrypt plugin supports a wide range of DNS providers for DNS challeng
     LETS_ENCRYPT_DNS_CREDENTIAL_ITEM_5: "client_id your-client-id"
     LETS_ENCRYPT_DNS_CREDENTIAL_ITEM_6: "client_x509_cert_url your-cert-url"
     ```
+
+## Troubleshooting
+
+**Common Issues and Solutions:**
+
+1. **Certificate generation fails with rate limits**
+   - Use staging environment: `USE_LETS_ENCRYPT_STAGING: "yes"`
+   - Enable retries: `LETS_ENCRYPT_MAX_RETRIES: "3"`
+
+2.
+   **HTTP challenge fails**
+   - Verify domain DNS points to your server IP
+   - Enable strict IP checking: `ACME_HTTP_STRICT_IP_CHECK: "yes"`
+   - Check firewall allows port 80 access
+
+3. **DNS challenge fails**
+   - Verify DNS provider credentials are correct
+   - Increase propagation time: `LETS_ENCRYPT_DNS_PROPAGATION: "300"`
+   - Check DNS provider API rate limits
+
+4. **CAA validation errors**
+   - Update CAA records to authorize your chosen certificate authority
+   - Temporarily skip CAA checking: `ACME_SKIP_CAA_CHECK: "yes"`
+
+5. **ZeroSSL EAB issues**
+   - Ensure API key is valid and has correct permissions
+   - Try manual EAB credentials if API setup fails
+   - Check ZeroSSL account has ACME access enabled
+
+**Debug Information:**
+
+Enable debug logging by setting `LOG_LEVEL: "DEBUG"` to get detailed information about:
+
+- Certificate generation process
+- DNS validation steps
+- HTTP challenge deployment
+- CAA record checking
+- IP address validation
+- Retry attempts and backoff timing
diff --git a/src/common/core/letsencrypt/jobs/certbot-auth.py b/src/common/core/letsencrypt/jobs/certbot-auth.py
index d1483b3208..be0f8532c9 100644
--- a/src/common/core/letsencrypt/jobs/certbot-auth.py
+++ b/src/common/core/letsencrypt/jobs/certbot-auth.py
@@ -6,7 +6,9 @@
 from sys import exit as sys_exit, path as sys_path
 from traceback import format_exc
 
-for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("api",), ("db",))]:
+for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths)
+                  for paths in (("deps", "python"), ("utils",), ("api",),
+                                ("db",))]:
     if deps_path not in sys_path:
         sys_path.append(deps_path)
 
@@ -18,41 +20,191 @@
 LOGGER = setup_logger("Lets-encrypt.auth")
 status = 0
 
+
+def debug_log(logger, message):
+    # Log debug messages only when LOG_LEVEL environment variable is set to
+    # "debug"
+    if getenv("LOG_LEVEL") == "debug":
+        logger.debug(f"[DEBUG] {message}")
+
+
 try:
-    # Get env vars
+    # Get environment variables for ACME HTTP challenge
+    # CERTBOT_TOKEN: The filename for the challenge file
+    # CERTBOT_VALIDATION: The content to write to the challenge file
     token = getenv("CERTBOT_TOKEN", "")
     validation = getenv("CERTBOT_VALIDATION", "")
+
+    debug_log(LOGGER, "ACME HTTP challenge authentication started")
+    debug_log(LOGGER, f"Token: {token[:10] if token else 'None'}...")
+    debug_log(LOGGER,
+              f"Validation length: {len(validation) if validation else 0} chars")
+    debug_log(LOGGER, "Checking for required environment variables")
+    debug_log(LOGGER, f"CERTBOT_TOKEN exists: {bool(token)}")
+    debug_log(LOGGER, f"CERTBOT_VALIDATION exists: {bool(validation)}")
+
+    # Detect the current BunkerWeb integration type
+    # This determines how we handle the challenge deployment
     integration = get_integration()
+
+    debug_log(LOGGER, f"Integration detection completed: {integration}")
+    debug_log(LOGGER,
+              "Determining challenge deployment method based on integration")
     LOGGER.info(f"Detected {integration} integration")
 
-    # Cluster case
+    # Cluster case: Docker, Swarm, Kubernetes, Autoconf
+    # For cluster deployments, we need to distribute the challenge
+    # to all instances via the BunkerWeb API
     if integration in ("Docker", "Swarm", "Kubernetes", "Autoconf"):
+        debug_log(LOGGER,
+                  "Cluster integration detected, initializing database connection")
+        debug_log(LOGGER,
+                  "Will distribute challenge to all cluster instances via API")
+
+        # Initialize database connection to get list of instances
         db = Database(LOGGER, sqlalchemy_string=getenv("DATABASE_URI"))
+        # Get all active BunkerWeb instances from the database
         instances = db.get_instances()
+
+        debug_log(LOGGER, f"Retrieved {len(instances)} instances from database")
+        debug_log(LOGGER, "Instance details:")
+        for i, instance in enumerate(instances):
+            debug_log(LOGGER,
+                      f"  Instance {i+1}: {instance['hostname']}:"
+                      f"{instance['port']} (server: "
+                      f"{instance.get('server_name', 'N/A')})")
+        debug_log(LOGGER,
+                  "Preparing to send challenge data to each instance")
         LOGGER.info(f"Sending challenge to {len(instances)} instances")
+
+        # Send challenge to each instance via API
         for instance in instances:
-            api = API(f"http://{instance['hostname']}:{instance['port']}", host=instance["server_name"])
-            sent, err, status, resp = api.request("POST", "/lets-encrypt/challenge", data={"token": token, "validation": validation})
+            debug_log(LOGGER,
+                      f"Processing instance: {instance['hostname']}:"
+                      f"{instance['port']}")
+            debug_log(LOGGER,
+                      f"Server name: {instance.get('server_name', 'N/A')}")
+            debug_log(LOGGER, "Creating API client for this instance")
+
+            # Create API client for this instance
+            api = API(
+                f"http://{instance['hostname']}:{instance['port']}",
+                host=instance["server_name"]
+            )
+
+            debug_log(LOGGER, f"API endpoint: {api.endpoint}")
+            debug_log(LOGGER, "Preparing challenge data payload")
+            debug_log(LOGGER, f"Token: {token[:10]}... (truncated)")
+            debug_log(LOGGER,
+                      f"Validation length: {len(validation)} characters")
+
+            # Send POST request to deploy the challenge
+            sent, err, status_code, resp = api.request(
+                "POST",
+                "/lets-encrypt/challenge",
+                data={"token": token, "validation": validation}
+            )
+
             if not sent:
                 status = 1
-                LOGGER.error(f"Can't send API request to {api.endpoint}/lets-encrypt/challenge : {err}")
-            elif status != 200:
+                LOGGER.error(
+                    f"Can't send API request to "
+                    f"{api.endpoint}/lets-encrypt/challenge: {err}"
+                )
+                debug_log(LOGGER, f"API request failed with error: {err}")
+                debug_log(LOGGER,
+                          "This instance will not receive the challenge")
+            elif status_code != 200:
                 status = 1
-                LOGGER.error(f"Error while sending API request to {api.endpoint}/lets-encrypt/challenge : status = {resp['status']}, msg = {resp['msg']}")
+                LOGGER.error(
+                    f"Error while sending API request to "
+                    f"{api.endpoint}/lets-encrypt/challenge: "
+                    f"status = {resp['status']}, msg = {resp['msg']}"
+                )
+                debug_log(LOGGER, f"HTTP status code: {status_code}")
+                debug_log(LOGGER, f"Response details: {resp}")
+                debug_log(LOGGER,
+                          "Challenge deployment failed for this instance")
             else:
-                LOGGER.info(f"Successfully sent API request to {api.endpoint}/lets-encrypt/challenge")
+                LOGGER.info(
+                    f"Successfully sent API request to "
+                    f"{api.endpoint}/lets-encrypt/challenge"
+                )
+                debug_log(LOGGER, f"HTTP status code: {status_code}")
+                debug_log(LOGGER,
+                          "Challenge successfully deployed to instance")
+                debug_log(LOGGER,
+                          f"Instance can now serve challenge at "
+                          f"/.well-known/acme-challenge/{token}")
 
-    # Linux case
+    # Linux case: Standalone installation
+    # For standalone Linux installations, we write the challenge
+    # file directly to the local filesystem
     else:
-        root_dir = Path(sep, "var", "tmp", "bunkerweb", "lets-encrypt", ".well-known", "acme-challenge")
+        debug_log(LOGGER, "Standalone Linux integration detected")
+        debug_log(LOGGER,
+                  "Writing challenge file directly to local filesystem")
+        debug_log(LOGGER,
+                  "No API distribution needed for standalone mode")
+
+        # Create the ACME challenge directory structure
+        # This follows the standard .well-known/acme-challenge path
+        root_dir = Path(
+            sep, "var", "tmp", "bunkerweb", "lets-encrypt",
+            ".well-known", "acme-challenge"
+        )
+
+        debug_log(LOGGER, f"Challenge directory path: {root_dir}")
+        debug_log(LOGGER,
+                  "Creating directory structure if it doesn't exist")
+        debug_log(LOGGER,
+                  "Directory will be created with parents=True, exist_ok=True")
+
+        # Create directory structure with appropriate permissions
         root_dir.mkdir(parents=True, exist_ok=True)
-        root_dir.joinpath(token).write_text(validation, encoding="utf-8")
+
+        debug_log(LOGGER, "Directory structure created successfully")
+        debug_log(LOGGER, f"Directory exists: {root_dir.exists()}")
+        debug_log(LOGGER, f"Directory is writable: {root_dir.is_dir()}")
+
+        # Write the challenge validation content to the token file
+        challenge_file = root_dir.joinpath(token)
+
+        debug_log(LOGGER, f"Challenge file path: {challenge_file}")
+        debug_log(LOGGER, f"Token filename: {token}")
+        debug_log(LOGGER,
+                  f"Validation content length: {len(validation)} bytes")
+        debug_log(LOGGER, "Writing validation content to challenge file")
+
+        challenge_file.write_text(validation, encoding="utf-8")
+
+        debug_log(LOGGER, "Challenge file written successfully")
+        debug_log(LOGGER, f"File exists: {challenge_file.exists()}")
+        debug_log(LOGGER,
+                  f"File size: {challenge_file.stat().st_size} bytes")
+        debug_log(LOGGER, "Let's Encrypt can now access the challenge file")
+
+        LOGGER.info(f"Challenge file created at {challenge_file}")
+
 except BaseException as e:
     status = 1
     LOGGER.debug(format_exc())
-    LOGGER.error(f"Exception while running certbot-auth.py :\n{e}")
+    LOGGER.error(f"Exception while running certbot-auth.py:\n{e}")
+
+    debug_log(LOGGER, "Full exception traceback logged above")
+    debug_log(LOGGER, "Authentication process failed due to exception")
+    debug_log(LOGGER, "Let's Encrypt challenge will not be available")
+
+debug_log(LOGGER,
+          f"ACME HTTP challenge authentication completed with status: {status}")
+if status == 0:
+    debug_log(LOGGER, "Authentication completed successfully")
+    debug_log(LOGGER, "Let's Encrypt can now access the challenge")
+else:
+    debug_log(LOGGER,
+              "Authentication failed - challenge may not be accessible")
 
-sys_exit(status)
+sys_exit(status)
\ No newline at end of file
diff --git a/src/common/core/letsencrypt/jobs/certbot-cleanup.py b/src/common/core/letsencrypt/jobs/certbot-cleanup.py
index 3e896b14a4..6add17f139 100644
--- a/src/common/core/letsencrypt/jobs/certbot-cleanup.py
+++ b/src/common/core/letsencrypt/jobs/certbot-cleanup.py
@@ -6,7 +6,9 @@
 from sys import exit as sys_exit, path as sys_path
 from traceback import format_exc
 
-for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("api",), ("db",))]:
+for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths)
+                  for paths in (("deps", "python"), ("utils",), ("api",),
+                                ("db",))]:
     if deps_path not in sys_path:
         sys_path.append(deps_path)
 
@@ -18,36 +20,175 @@
 LOGGER = setup_logger("Lets-encrypt.cleanup")
 status = 0
 
+
+def debug_log(logger, message):
+    # Log debug messages only when LOG_LEVEL environment variable is set to
+    # "debug"
+    if getenv("LOG_LEVEL") == "debug":
+        logger.debug(f"[DEBUG] {message}")
+
+
 try:
-    # Get env vars
+    # Get environment variables for ACME HTTP challenge cleanup
+    # CERTBOT_TOKEN: The filename of the challenge file to remove
     token = getenv("CERTBOT_TOKEN", "")
+
+    debug_log(LOGGER, "ACME HTTP challenge cleanup started")
+    debug_log(LOGGER, f"Token to clean: {token[:10] if token else 'None'}...")
+    debug_log(LOGGER,
+              "Starting cleanup process for Let's Encrypt challenge")
+    debug_log(LOGGER,
+              "This will remove challenge files from all instances")
+
+    # Detect the current BunkerWeb integration type
+    # This determines how we handle the challenge cleanup process
     integration = get_integration()
+
+    debug_log(LOGGER, f"Integration detection completed: {integration}")
+    debug_log(LOGGER,
+              "Determining cleanup method based on integration type")
     LOGGER.info(f"Detected {integration} integration")
 
-    # Cluster case
+    # Cluster case: Docker, Swarm, Kubernetes, Autoconf
+    # For cluster deployments, we need to remove the challenge
+    # from all instances via the BunkerWeb API
     if integration in ("Docker", "Swarm", "Kubernetes", "Autoconf"):
+        debug_log(LOGGER, "Cluster integration detected")
+        debug_log(LOGGER,
+                  "Will remove challenge from all cluster instances via API")
+        debug_log(LOGGER,
+                  "Initializing database connection to get instance list")
+
+        # Initialize database connection to get list of instances
         db = Database(LOGGER, sqlalchemy_string=getenv("DATABASE_URI"))
+
+        # Get all active BunkerWeb instances from the database
         instances = db.get_instances()
+
+        debug_log(LOGGER, f"Retrieved {len(instances)} instances from database")
+        debug_log(LOGGER, "Instance details for cleanup:")
+        for i, instance in enumerate(instances):
+            debug_log(LOGGER,
+                      f"  Instance {i+1}: {instance['hostname']}:"
+                      f"{instance['port']} (server: "
+                      f"{instance.get('server_name', 'N/A')})")
+        debug_log(LOGGER,
+                  "Preparing to send DELETE requests to each instance")
         LOGGER.info(f"Cleaning challenge from {len(instances)} instances")
+
+        # Remove challenge from each instance via API
         for instance in instances:
-            api = API(f"http://{instance['hostname']}:{instance['port']}", host=instance["server_name"])
-            sent, err, status, resp = api.request("DELETE", "/lets-encrypt/challenge", data={"token": token})
+            debug_log(LOGGER,
+                      f"Processing cleanup for instance: "
+                      f"{instance['hostname']}:{instance['port']}")
+            debug_log(LOGGER,
+                      f"Server name: {instance.get('server_name', 'N/A')}")
+            debug_log(LOGGER, "Creating API client for cleanup request")
+
+            # Create API client for this instance
+            api = API(
+                f"http://{instance['hostname']}:{instance['port']}",
+                host=instance["server_name"]
+            )
+
+            debug_log(LOGGER, f"API endpoint: {api.endpoint}")
+            debug_log(LOGGER, "Preparing DELETE request for challenge cleanup")
+            debug_log(LOGGER, f"Token to delete: {token}")
+
+            # Send DELETE request to remove the challenge
+            sent, err, status_code, resp = api.request(
+                "DELETE",
+                "/lets-encrypt/challenge",
+                data={"token": token}
+            )
+
             if not sent:
                 status = 1
-                LOGGER.error(f"Can't send API request to {api.endpoint}/lets-encrypt/challenge : {err}")
-            elif status != 200:
+                LOGGER.error(
+                    f"Can't send API request to "
+                    f"{api.endpoint}/lets-encrypt/challenge: {err}"
+                )
+                debug_log(LOGGER, f"DELETE request failed with error: {err}")
+                debug_log(LOGGER,
+                          "Challenge file may remain on this instance")
+            elif status_code != 200:
                 status = 1
-                LOGGER.error(f"Error while sending API request to {api.endpoint}/lets-encrypt/challenge : status = {resp['status']}, msg = {resp['msg']}")
+                LOGGER.error(
+                    f"Error while sending API request to "
+                    f"{api.endpoint}/lets-encrypt/challenge: "
+                    f"status = {resp['status']}, msg 
= {resp['msg']}" + ) + debug_log(LOGGER, f"HTTP status code: {status_code}") + debug_log(LOGGER, f"Response details: {resp}") + debug_log(LOGGER, + "Challenge cleanup failed for this instance") else: - LOGGER.info(f"Successfully sent API request to {api.endpoint}/lets-encrypt/challenge") - # Linux case + LOGGER.info( + f"Successfully sent API request to " + f"{api.endpoint}/lets-encrypt/challenge" + ) + debug_log(LOGGER, f"HTTP status code: {status_code}") + debug_log(LOGGER, + "Challenge successfully removed from instance") + debug_log(LOGGER, f"Token {token} has been cleaned up") + + # Linux case: Standalone installation + # For standalone Linux installations, we remove the challenge + # file directly from the local filesystem else: - Path(sep, "var", "tmp", "bunkerweb", "lets-encrypt", ".well-known", "acme-challenge", token).unlink(missing_ok=True) + debug_log(LOGGER, "Standalone Linux integration detected") + debug_log(LOGGER, + "Removing challenge file directly from local filesystem") + debug_log(LOGGER, "No API cleanup needed for standalone mode") + + # Construct path to the ACME challenge file + # This follows the standard .well-known/acme-challenge path + challenge_file = Path( + sep, "var", "tmp", "bunkerweb", "lets-encrypt", + ".well-known", "acme-challenge", token + ) + + debug_log(LOGGER, f"Challenge file path: {challenge_file}") + debug_log(LOGGER, f"Token filename: {token}") + debug_log(LOGGER, + "Checking if challenge file exists before cleanup") + debug_log(LOGGER, + f"File exists before cleanup: {challenge_file.exists()}") + if challenge_file.exists(): + debug_log(LOGGER, + f"File size: {challenge_file.stat().st_size} bytes") + + # Remove the challenge file if it exists + # missing_ok=True prevents errors if file doesn't exist + challenge_file.unlink(missing_ok=True) + + debug_log(LOGGER, "Challenge file unlink operation completed") + debug_log(LOGGER, + f"File exists after cleanup: {challenge_file.exists()}") + debug_log(LOGGER, "Local challenge 
file cleanup completed successfully") + debug_log(LOGGER, + "Challenge is no longer accessible to Let's Encrypt") + + LOGGER.info(f"Challenge file removed: {challenge_file}") + except BaseException as e: status = 1 LOGGER.debug(format_exc()) - LOGGER.error(f"Exception while running certbot-cleanup.py :\n{e}") + LOGGER.error(f"Exception while running certbot-cleanup.py:\n{e}") + + debug_log(LOGGER, "Full exception traceback logged above") + debug_log(LOGGER, "Cleanup process failed due to exception") + debug_log(LOGGER, "Some challenge files may remain uncleaned") + +debug_log(LOGGER, + f"ACME HTTP challenge cleanup completed with status: {status}") +if status == 0: + debug_log(LOGGER, "Cleanup completed successfully") + debug_log(LOGGER, "All challenge files have been removed") +else: + debug_log(LOGGER, + "Cleanup encountered errors - some files may remain") -sys_exit(status) +sys_exit(status) \ No newline at end of file diff --git a/src/common/core/letsencrypt/jobs/certbot-deploy.py b/src/common/core/letsencrypt/jobs/certbot-deploy.py index 2112511685..9f6915fae5 100644 --- a/src/common/core/letsencrypt/jobs/certbot-deploy.py +++ b/src/common/core/letsencrypt/jobs/certbot-deploy.py @@ -3,11 +3,14 @@ from io import BytesIO from os import getenv, sep from os.path import join +from pathlib import Path from sys import exit as sys_exit, path as sys_path from tarfile import open as tar_open from traceback import format_exc -for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("api",), ("db",))]: +for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) + for paths in (("deps", "python"), ("utils",), ("api",), + ("db",))]: if deps_path not in sys_path: sys_path.append(deps_path) @@ -18,25 +21,97 @@ LOGGER = setup_logger("Lets-encrypt.deploy") status = 0 + +def debug_log(logger, message): + # Log debug messages only when LOG_LEVEL environment variable is set to + # "debug" + if getenv("LOG_LEVEL") == 
"debug": + logger.debug(f"[DEBUG] {message}") + + try: - # Get env vars + # Get environment variables for certificate deployment + # CERTBOT_TOKEN: Token from certbot (currently unused but preserved) + # RENEWED_DOMAINS: Domains that were successfully renewed token = getenv("CERTBOT_TOKEN", "") + renewed_domains = getenv("RENEWED_DOMAINS", "") + + debug_log(LOGGER, "Certificate deployment started") + debug_log(LOGGER, f"Token: {token[:10] if token else 'None'}...") + debug_log(LOGGER, f"Renewed domains: {renewed_domains}") + debug_log(LOGGER, + "Starting certificate distribution to cluster instances") + debug_log(LOGGER, + "This process will update certificates on all instances") - LOGGER.info(f"Certificates renewal for {getenv('RENEWED_DOMAINS')} successful") + LOGGER.info(f"Certificates renewal for {renewed_domains} successful") - # Create tarball of /var/cache/bunkerweb/letsencrypt + # Create tarball of certificate directory for distribution + # This packages all certificate files into a compressed archive + # for efficient transfer to cluster instances + debug_log(LOGGER, "Creating certificate archive for distribution") + debug_log(LOGGER, + "Packaging all Let's Encrypt certificates into tarball") + debug_log(LOGGER, + "Archive will contain fullchain.pem and privkey.pem files") + tgz = BytesIO() + cert_source_path = join(sep, "var", "cache", "bunkerweb", "letsencrypt", + "etc") + + debug_log(LOGGER, f"Source certificate path: {cert_source_path}") + debug_log(LOGGER, "Checking if certificate directory exists") + cert_path_exists = Path(cert_source_path).exists() + debug_log(LOGGER, f"Certificate directory exists: {cert_path_exists}") + # Create compressed tarball containing certificate files + # compresslevel=3 provides good compression with reasonable performance with tar_open(mode="w:gz", fileobj=tgz, compresslevel=3) as tf: - tf.add(join(sep, "var", "cache", "bunkerweb", "letsencrypt", "etc"), arcname="etc") + tf.add(cert_source_path, arcname="etc") + + # 
Reset buffer position for reading tgz.seek(0, 0) files = {"archive.tar.gz": tgz} + + archive_size = tgz.getbuffer().nbytes + debug_log(LOGGER, "Certificate archive created successfully") + debug_log(LOGGER, f"Archive size: {archive_size} bytes") + debug_log(LOGGER, "Compression level: 3 (balanced speed/size)") + debug_log(LOGGER, "Archive ready for distribution to instances") + # Initialize database connection to get cluster instances + debug_log(LOGGER, "Initializing database connection") + debug_log(LOGGER, + "Need to get list of active BunkerWeb instances") + db = Database(LOGGER, sqlalchemy_string=getenv("DATABASE_URI")) + # Get all active BunkerWeb instances for certificate distribution instances = db.get_instances() - services = db.get_non_default_settings(global_only=True, methods=False, with_drafts=True, filtered_settings=("SERVER_NAME",))["SERVER_NAME"].split(" ") + + # Get server names to calculate appropriate reload timeout + # More services require longer timeout for configuration reload + services = db.get_non_default_settings( + global_only=True, + methods=False, + with_drafts=True, + filtered_settings=("SERVER_NAME",) + )["SERVER_NAME"].split(" ") + + debug_log(LOGGER, f"Retrieved {len(instances)} instances from database") + debug_log(LOGGER, + f"Found {len(services)} configured services: {services}") + debug_log(LOGGER, "Instance details for certificate deployment:") + for i, instance in enumerate(instances): + debug_log(LOGGER, + f" Instance {i+1}: {instance['hostname']}:" + f"{instance['port']} (server: " + f"{instance.get('server_name', 'N/A')})") + debug_log(LOGGER, + "Will deploy certificates and trigger reload on each instance") + # Configure reload timeout based on environment and service count + # Minimum timeout prevents premature timeouts on slow systems reload_min_timeout = getenv("RELOAD_MIN_TIMEOUT", "5") if not reload_min_timeout.isdigit(): @@ -44,39 +119,155 @@ reload_min_timeout = 5 reload_min_timeout = int(reload_min_timeout) + + # 
Calculate actual timeout: minimum timeout or 3 seconds per service + calculated_timeout = max(reload_min_timeout, 3 * len(services)) + + debug_log(LOGGER, "Reload timeout configuration:") + debug_log(LOGGER, f" Minimum timeout setting: {reload_min_timeout}s") + debug_log(LOGGER, f" Number of services: {len(services)}") + debug_log(LOGGER, + f" Calculated timeout (max of min or 3s per service): " + f"{calculated_timeout}s") + debug_log(LOGGER, + "Timeout ensures all services have time to reload certificates") - for instance in instances: + LOGGER.info(f"Deploying certificates to {len(instances)} instances") + + # Deploy certificates to each cluster instance + for i, instance in enumerate(instances): + debug_log(LOGGER, f"Processing instance {i+1}/{len(instances)}") + debug_log(LOGGER, + f"Current instance: {instance['hostname']}:{instance['port']}") + + # Construct API endpoint for this instance endpoint = f"http://{instance['hostname']}:{instance['port']}" host = instance["server_name"] + + debug_log(LOGGER, f"API endpoint: {endpoint}") + debug_log(LOGGER, f"Host header: {host}") + debug_log(LOGGER, "Creating API client for certificate upload") + api = API(endpoint, host=host) - sent, err, status, resp = api.request("POST", "/lets-encrypt/certificates", files=files) + # Upload certificate archive to the instance + debug_log(LOGGER, f"Uploading certificate archive to {endpoint}") + debug_log(LOGGER, "Sending POST request with certificate tarball") + debug_log(LOGGER, + "This will extract certificates to instance filesystem") + + sent, err, status_code, resp = api.request( + "POST", + "/lets-encrypt/certificates", + files=files + ) + if not sent: status = 1 - LOGGER.error(f"Can't send API request to {api.endpoint}/lets-encrypt/certificates : {err}") - elif status != 200: + LOGGER.error( + f"Can't send API request to " + f"{api.endpoint}/lets-encrypt/certificates: {err}" + ) + debug_log(LOGGER, f"Certificate upload failed with error: {err}") + debug_log(LOGGER, + 
"Skipping reload for this instance due to upload failure") + continue + elif status_code != 200: status = 1 - LOGGER.error(f"Error while sending API request to {api.endpoint}/lets-encrypt/certificates : status = {resp['status']}, msg = {resp['msg']}") + LOGGER.error( + f"Error while sending API request to " + f"{api.endpoint}/lets-encrypt/certificates: " + f"status = {resp['status']}, msg = {resp['msg']}" + ) + debug_log(LOGGER, f"HTTP status code: {status_code}") + debug_log(LOGGER, f"Error response: {resp}") + debug_log(LOGGER, "Certificate upload failed, skipping reload") + continue else: LOGGER.info( - f"Successfully sent API request to {api.endpoint}/lets-encrypt/certificates", + f"Successfully sent API request to " + f"{api.endpoint}/lets-encrypt/certificates" ) - sent, err, status, resp = api.request( + debug_log(LOGGER, "Certificate archive uploaded successfully") + debug_log(LOGGER, + "Certificates are now available on instance filesystem") + debug_log(LOGGER, "Proceeding to trigger configuration reload") + + # Trigger configuration reload on the instance + # Configuration testing can be disabled via environment variable + disable_testing = getenv("DISABLE_CONFIGURATION_TESTING", + "no").lower() + test_config = "no" if disable_testing == "yes" else "yes" + + debug_log(LOGGER, f"Triggering configuration reload on {endpoint}") + debug_log(LOGGER, f"Configuration testing enabled: {test_config}") + debug_log(LOGGER, f"Reload timeout: {calculated_timeout}s") + debug_log(LOGGER, "This will reload nginx with new certificates") + + sent, err, status_code, resp = api.request( "POST", - f"/reload?test={'no' if getenv('DISABLE_CONFIGURATION_TESTING', 'no').lower() == 'yes' else 'yes'}", - timeout=max(reload_min_timeout, 3 * len(services)), + f"/reload?test={test_config}", + timeout=calculated_timeout, ) + if not sent: status = 1 - LOGGER.error(f"Can't send API request to {api.endpoint}/reload : {err}") - elif status != 200: + LOGGER.error( + f"Can't send API request 
to {api.endpoint}/reload: {err}" + ) + debug_log(LOGGER, f"Reload request failed with error: {err}") + debug_log(LOGGER, + "Instance may not have reloaded new certificates") + elif status_code != 200: status = 1 - LOGGER.error(f"Error while sending API request to {api.endpoint}/reload : status = {resp['status']}, msg = {resp['msg']}") + LOGGER.error( + f"Error while sending API request to {api.endpoint}/reload: " + f"status = {resp['status']}, msg = {resp['msg']}" + ) + debug_log(LOGGER, f"HTTP status code: {status_code}") + debug_log(LOGGER, f"Reload response: {resp}") + debug_log(LOGGER, + "Configuration reload failed on this instance") else: - LOGGER.info(f"Successfully sent API request to {api.endpoint}/reload") + LOGGER.info( + f"Successfully sent API request to {api.endpoint}/reload") + debug_log(LOGGER, "Configuration reload completed successfully") + debug_log(LOGGER, + "New certificates are now active on this instance") + debug_log(LOGGER, f"Instance {endpoint} fully updated") + + # Reset file pointer for next instance + debug_log(LOGGER, + "Resetting archive buffer position for next instance") + debug_log(LOGGER, + "Archive will be re-read from beginning for next upload") + tgz.seek(0, 0) + + debug_log(LOGGER, "Certificate deployment process completed") + debug_log(LOGGER, "All instances have been processed") + if status == 0: + debug_log(LOGGER, "All deployments successful") + else: + debug_log(LOGGER, + "Some deployments failed - check individual instance logs") + except BaseException as e: status = 1 LOGGER.debug(format_exc()) - LOGGER.error(f"Exception while running certbot-deploy.py :\n{e}") + LOGGER.error(f"Exception while running certbot-deploy.py:\n{e}") + + debug_log(LOGGER, "Full exception traceback logged above") + debug_log(LOGGER, "Certificate deployment failed due to exception") + debug_log(LOGGER, + "Some instances may not have received updated certificates") + +debug_log(LOGGER, + f"Certificate deployment completed with final status: 
{status}") +if status == 0: + debug_log(LOGGER, + "All certificates deployed successfully across cluster") +else: + debug_log(LOGGER, + "Deployment completed with errors - manual intervention may be needed") -sys_exit(status) +sys_exit(status) \ No newline at end of file diff --git a/src/common/core/letsencrypt/jobs/certbot-new.py b/src/common/core/letsencrypt/jobs/certbot-new.py index f49e841b2c..c2e76f1c74 100644 --- a/src/common/core/letsencrypt/jobs/certbot-new.py +++ b/src/common/core/letsencrypt/jobs/certbot-new.py @@ -11,13 +11,18 @@ from pathlib import Path from re import MULTILINE, match, search from select import select +from shutil import which +from socket import getaddrinfo, gaierror, AF_INET, AF_INET6 from subprocess import DEVNULL, PIPE, STDOUT, Popen, run from sys import exit as sys_exit, path as sys_path from time import sleep from traceback import format_exc -from typing import Dict, Literal, Type, Union +from typing import ( + Any, Dict, List, Literal, Optional, Set, Tuple, cast, Union +) -for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("db",))]: +for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) + for paths in (("deps", "python"), ("utils",), ("db",))]: if deps_path not in sys_path: sys_path.append(deps_path) @@ -50,116 +55,1114 @@ from jobs import Job # type: ignore from logger import setup_logger # type: ignore -LOGGER = setup_logger("LETS-ENCRYPT.new") -CERTBOT_BIN = join(sep, "usr", "share", "bunkerweb", "deps", "python", "bin", "certbot") -DEPS_PATH = join(sep, "usr", "share", "bunkerweb", "deps", "python") - -LOGGER_CERTBOT = setup_logger("LETS-ENCRYPT.new.certbot") -status = 0 +LOGGER: Any = setup_logger("LETS-ENCRYPT.new") +CERTBOT_BIN: str = join(sep, "usr", "share", "bunkerweb", "deps", "python", + "bin", "certbot") +DEPS_PATH: str = join(sep, "usr", "share", "bunkerweb", "deps", "python") + +LOGGER_CERTBOT: Any = setup_logger("LETS-ENCRYPT.new.certbot") 
+status: int = 0
+
+PLUGIN_PATH: Any = Path(sep, "usr", "share", "bunkerweb", "core",
+                        "letsencrypt")
+JOBS_PATH: Any = PLUGIN_PATH.joinpath("jobs")
+CACHE_PATH: Any = Path(sep, "var", "cache", "bunkerweb", "letsencrypt")
+DATA_PATH: Any = CACHE_PATH.joinpath("etc")
+WORK_DIR: str = join(sep, "var", "lib", "bunkerweb", "letsencrypt")
+LOGS_DIR: str = join(sep, "var", "log", "bunkerweb", "letsencrypt")
+
+PSL_URL: str = "https://publicsuffix.org/list/public_suffix_list.dat"
+PSL_STATIC_FILE: Any = Path("public_suffix_list.dat")
+
+# ZeroSSL Configuration
+ZEROSSL_ACME_SERVER: str = "https://acme.zerossl.com/v2/DV90"
+# ZeroSSL does not provide a separate staging environment, so the staging
+# constant intentionally points at the production endpoint.
+ZEROSSL_STAGING_SERVER: str = "https://acme.zerossl.com/v2/DV90"
+LETSENCRYPT_ACME_SERVER: str = (
+    "https://acme-v02.api.letsencrypt.org/directory"
+)
+LETSENCRYPT_STAGING_SERVER: str = (
+    "https://acme-staging-v02.api.letsencrypt.org/directory"
+)
-PLUGIN_PATH = Path(sep, "usr", "share", "bunkerweb", "core", "letsencrypt")
-JOBS_PATH = PLUGIN_PATH.joinpath("jobs")
-CACHE_PATH = Path(sep, "var", "cache", "bunkerweb", "letsencrypt")
-DATA_PATH = CACHE_PATH.joinpath("etc")
-WORK_DIR = join(sep, "var", "lib", "bunkerweb", "letsencrypt")
-LOGS_DIR = join(sep, "var", "log", "bunkerweb", "letsencrypt")
-PSL_URL = "https://publicsuffix.org/list/public_suffix_list.dat"
-PSL_STATIC_FILE = "public_suffix_list.dat"
+def debug_log(logger: Any, message: str) -> None:
+    # Log debug messages only when LOG_LEVEL environment variable is set to
+    # "debug"
+    if getenv("LOG_LEVEL") == "debug":
+        logger.debug(f"[DEBUG] {message}")
-def load_public_suffix_list(job):
-    job_cache = job.get_cache(PSL_STATIC_FILE, with_info=True, with_data=True)
+def load_public_suffix_list(job: Any) -> List[str]:
+    # Load and cache the public suffix list for domain validation.
+    # Fetches the PSL from the official source and caches it locally.
+    # Returns cached version if available and fresh (less than 1 day old).
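As an aside, the freshness rule described in the function comment above (reuse the cached copy while it is under a day old, otherwise re-download) reduces to a timestamp comparison against a cutoff. A minimal sketch in isolation, with a hypothetical `is_cache_fresh` helper that is not part of this diff:

```python
from datetime import datetime, timedelta


def is_cache_fresh(last_update_ts: float,
                   max_age: timedelta = timedelta(days=1)) -> bool:
    # Fresh means the cache was updated more recently than (now - max_age);
    # fresh timestamps are therefore *greater* than the cutoff, so the
    # comparison direction matters.
    cutoff = (datetime.now().astimezone() - max_age).timestamp()
    return last_update_ts > cutoff


now = datetime.now().astimezone()
print(is_cache_fresh((now - timedelta(hours=2)).timestamp()))  # True
print(is_cache_fresh((now - timedelta(days=3)).timestamp()))   # False
```

Keeping the comparison direction explicit avoids the inverted failure mode where a stale list is served while a fresh one is thrown away.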
+    debug_log(LOGGER, f"Loading public suffix list from cache or {PSL_URL}")
+    debug_log(LOGGER, "Checking if cached PSL is available and fresh")
+
+    job_cache: Union[Dict[str, Any], bool] = job.get_cache(
+        PSL_STATIC_FILE.name, with_info=True, with_data=True
+    )
     if (
         isinstance(job_cache, dict)
-        and job_cache.get("last_update")
-        and job_cache["last_update"] < (datetime.now().astimezone() - timedelta(days=1)).timestamp()
+        and "last_update" in job_cache
+        and job_cache["last_update"] > (
+            datetime.now().astimezone() - timedelta(days=1)
+        ).timestamp()
     ):
-        return job_cache["data"].decode("utf-8").splitlines()
+        debug_log(LOGGER, "Using cached public suffix list")
+        cache_last_update: float = float(job_cache['last_update'])
+        cache_age_hours: float = (
+            (datetime.now().astimezone().timestamp() -
+             cache_last_update) / 3600
+        )
+        debug_log(LOGGER, f"Cache age: {cache_age_hours:.1f} hours")
+        cache_data: bytes = cast(bytes, job_cache["data"])
+        return cache_data.decode("utf-8").splitlines()

     try:
+        debug_log(LOGGER, f"Downloading fresh PSL from {PSL_URL}")
+        debug_log(LOGGER, "Cached PSL is missing or older than 1 day")
+
         resp = get(PSL_URL, timeout=5)
         resp.raise_for_status()
-        content = resp.text
-        cached, err = JOB.cache_file(PSL_STATIC_FILE, content.encode("utf-8"))
+        content: str = resp.text
+
+        debug_log(LOGGER,
+                  f"Downloaded PSL successfully, {len(content)} bytes")
+        debug_log(LOGGER,
+                  f"PSL contains {len(content.splitlines())} lines")
+
+        cached: Any
+        err: Any
+        cached, err = JOB.cache_file(PSL_STATIC_FILE.name,
+                                     content.encode("utf-8"))
         if not cached:
-            LOGGER.error(f"Error while saving public suffix list to cache : {err}")
+            LOGGER.error(f"Error while saving public suffix list to cache: "
+                         f"{err}")
+        else:
+            debug_log(LOGGER, "PSL successfully cached for future use")
+
         return content.splitlines()
     except BaseException as e:
         LOGGER.debug(format_exc())
-        LOGGER.error(f"Error while downloading public suffix list : {e}")
+        LOGGER.error(f"Error while 
downloading public suffix list: {e}") + + debug_log(LOGGER, + "Download failed, checking for existing static file") + if PSL_STATIC_FILE.exists(): + debug_log(LOGGER, + f"Using existing static PSL file: {PSL_STATIC_FILE}") with PSL_STATIC_FILE.open("r", encoding="utf-8") as f: return f.read().splitlines() + + debug_log(LOGGER, "No PSL data available - returning empty list") return [] -def parse_psl(psl_lines): - # Parse PSL lines into rules and exceptions sets - rules = set() - exceptions = set() +def parse_psl(psl_lines: List[str]) -> Dict[str, Set[str]]: + # Parse PSL lines into rules and exceptions sets. + # Processes the public suffix list format, handling comments, + # exceptions (lines starting with !), and regular rules. + debug_log(LOGGER, f"Parsing {len(psl_lines)} PSL lines") + debug_log(LOGGER, "Processing rules, exceptions, and filtering comments") + + rules: Set[str] = set() + exceptions: Set[str] = set() + comments_skipped: int = 0 + empty_lines_skipped: int = 0 + for line in psl_lines: line = line.strip() - if not line or line.startswith("//"): - continue # Ignore empty lines and comments + if not line: + empty_lines_skipped += 1 + continue + if line.startswith("//"): + comments_skipped += 1 + continue if line.startswith("!"): - exceptions.add(line[1:]) # Exception rule + exceptions.add(line[1:]) continue - rules.add(line) # Normal or wildcard rule + rules.add(line) + + debug_log(LOGGER, f"Parsed {len(rules)} rules and {len(exceptions)} " + f"exceptions") + debug_log(LOGGER, f"Skipped {comments_skipped} comments and " + f"{empty_lines_skipped} empty lines") + debug_log(LOGGER, "PSL parsing completed successfully") + return {"rules": rules, "exceptions": exceptions} -def is_domain_blacklisted(domain, psl): - # Returns True if the domain is forbidden by PSL rules +def is_domain_blacklisted(domain: str, psl: Dict[str, Set[str]]) -> bool: + # Check if domain is forbidden by PSL rules. 
+ # Validates whether a domain would be blacklisted according to the + # Public Suffix List rules and exceptions. domain = domain.lower().strip(".") - labels = domain.split(".") + labels: List[str] = domain.split(".") + + debug_log(LOGGER, f"Checking domain {domain} against PSL rules") + debug_log(LOGGER, f"Domain has {len(labels)} labels: {labels}") + debug_log(LOGGER, f"PSL contains {len(psl['rules'])} rules and " + f"{len(psl['exceptions'])} exceptions") + for i in range(len(labels)): - candidate = ".".join(labels[i:]) - # Allow if candidate is an exception + candidate: str = ".".join(str(label) for label in labels[i:]) + + debug_log(LOGGER, f"Checking candidate: {candidate}") + if candidate in psl["exceptions"]: + debug_log(LOGGER, f"Domain {domain} allowed by PSL exception " + f"{candidate}") return False - # Block if candidate matches a PSL rule + if candidate in psl["rules"]: + debug_log(LOGGER, f"Found PSL rule match: {candidate}") + debug_log(LOGGER, f"Checking blacklist conditions for i={i}") + if i == 0: - return True # Block exact match + debug_log(LOGGER, f"Domain {domain} blacklisted - exact PSL " + f"rule match") + return True if i == 0 and domain.startswith("*."): - return True # Block wildcard for the rule itself + debug_log(LOGGER, f"Wildcard domain {domain} blacklisted - " + f"exact PSL rule match") + return True if i == 0 or (i == 1 and labels[0] == "*"): - return True # Block *.domain.tld + debug_log(LOGGER, f"Domain {domain} blacklisted - PSL rule " + f"violation") + return True if len(labels[i:]) == len(labels): - return True # Block domain.tld - # Allow subdomains - # Block if candidate matches a PSL wildcard rule - if f"*.{candidate}" in psl["rules"]: + debug_log(LOGGER, f"Domain {domain} blacklisted - full label " + f"match") + return True + + wildcard_candidate: str = f"*.{candidate}" + if wildcard_candidate in psl["rules"]: + debug_log(LOGGER, f"Found PSL wildcard rule match: " + f"{wildcard_candidate}") + if len(labels[i:]) == 2: - 
return True # Block foo.bar and *.foo.bar + debug_log(LOGGER, f"Domain {domain} blacklisted - wildcard " + f"PSL rule match") + return True + + debug_log(LOGGER, f"Domain {domain} not blacklisted by PSL") return False +def get_certificate_authority_config( + ca_provider: str, + staging: bool = False +) -> Dict[str, str]: + # Get ACME server configuration for the specified CA provider. + # Returns the appropriate ACME server URL and name for the given + # certificate authority and environment (staging/production). + debug_log(LOGGER, f"Getting CA config for {ca_provider}, " + f"staging={staging}") + + config: Dict[str, str] + if ca_provider.lower() == "zerossl": + config = { + "server": (ZEROSSL_STAGING_SERVER if staging + else ZEROSSL_ACME_SERVER), + "name": "ZeroSSL" + } + else: # Default to Let's Encrypt + config = { + "server": (LETSENCRYPT_STAGING_SERVER if staging + else LETSENCRYPT_ACME_SERVER), + "name": "Let's Encrypt" + } + + debug_log(LOGGER, f"CA config: {config}") + + return config + + +def setup_zerossl_eab_credentials( + email: str, + api_key: Optional[str] = None +) -> Tuple[Optional[str], Optional[str]]: + # Setup External Account Binding (EAB) credentials for ZeroSSL. + # Contacts the ZeroSSL API to obtain EAB credentials required for + # ACME certificate issuance with ZeroSSL. 
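Once EAB credentials are obtained, they are typically handed to certbot through its `--eab-kid` and `--eab-hmac-key` flags alongside the CA's ACME directory URL. A minimal sketch of assembling that argument list; the `build_acme_args` helper is illustrative and not part of this diff:

```python
from typing import List, Optional


def build_acme_args(server: str,
                    eab_kid: Optional[str],
                    eab_hmac_key: Optional[str]) -> List[str]:
    # The ACME directory URL is always required; the EAB flags are only
    # appended when the CA (e.g. ZeroSSL) mandates external account binding.
    args = ["--server", server]
    if eab_kid and eab_hmac_key:
        args += ["--eab-kid", eab_kid, "--eab-hmac-key", eab_hmac_key]
    return args


print(build_acme_args("https://acme.zerossl.com/v2/DV90",
                      "kid123", "hmac456"))
```

For Let's Encrypt, which needs no EAB, passing `None` for both credentials yields just the `--server` pair.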
+    LOGGER.info(f"Setting up ZeroSSL EAB credentials for email: {email}")
+
+    if not api_key:
+        LOGGER.error("❌ ZeroSSL API key not provided")
+        LOGGER.error(
+            "Cannot request EAB credentials without an API key - set "
+            "ACME_ZEROSSL_API_KEY or provide EAB credentials directly"
+        )
+        return None, None
+
+    debug_log(LOGGER, "Making request to ZeroSSL API for EAB credentials")
+    debug_log(LOGGER, f"Email: {email}")
+    debug_log(LOGGER, f"API key provided: {bool(api_key)}")
+
+    LOGGER.info("Making request to ZeroSSL API for EAB credentials")
+
+    try:
+        # Primary ZeroSSL endpoint for EAB credentials
+        debug_log(LOGGER, "Attempting primary ZeroSSL EAB endpoint")
+
+        response = get(
+            "https://api.zerossl.com/acme/eab-credentials",
+            headers={"Authorization": f"Bearer {api_key}"},
+            timeout=30
+        )
+
+        debug_log(LOGGER, f"ZeroSSL API response status: "
+                  f"{response.status_code}")
+        debug_log(LOGGER, f"Response headers: {dict(response.headers)}")
+        LOGGER.info(f"ZeroSSL API response status: {response.status_code}")
+
+        if response.status_code == 200:
+            eab_data: Dict[str, Any] = response.json()
+
+            debug_log(LOGGER, f"ZeroSSL API response data: {eab_data}")
+            LOGGER.info(f"ZeroSSL API response data: {eab_data}")
+
+            # ZeroSSL typically returns eab_kid and eab_hmac_key directly
+            if "eab_kid" in eab_data and "eab_hmac_key" in eab_data:
+                eab_kid: Optional[str] = eab_data.get("eab_kid")
+                eab_hmac_key: Optional[str] = eab_data.get("eab_hmac_key")
+                LOGGER.info("✓ Successfully obtained EAB credentials from "
+                            "ZeroSSL")
+                kid_display: str = (f"{eab_kid[:10]}..." if eab_kid
+                                    else "None")
+                hmac_display: str = (f"{eab_hmac_key[:10]}..."
+                                     if eab_hmac_key else "None")
+                LOGGER.info(f"EAB Kid: {kid_display}")
+                LOGGER.info(f"EAB HMAC Key: {hmac_display}")
+                return eab_kid, eab_hmac_key
+            else:
+                LOGGER.error(f"❌ Invalid ZeroSSL API response format: "
+                             f"{eab_data}")
+                return None, None
+        else:
+            # Try the alternative endpoint if the first one fails
+            LOGGER.warning(
+                f"Primary endpoint failed with {response.status_code}, "
+                "trying alternative"
+            )
+            response_text: str = response.text
+
+            debug_log(LOGGER, f"Primary endpoint response: {response_text}")
+            LOGGER.info(f"Primary endpoint response: {response_text}")
+
+            # The alternative endpoint takes the account email as a parameter
+            debug_log(LOGGER, "Attempting alternative ZeroSSL EAB endpoint")
+
+            response = get(
+                "https://api.zerossl.com/acme/eab-credentials-email",
+                params={"email": email},
+                headers={"Authorization": f"Bearer {api_key}"},
+                timeout=30
+            )
+
+            alt_status: int = response.status_code
+            debug_log(LOGGER, f"Alternative ZeroSSL API response status: "
+                      f"{alt_status}")
+            LOGGER.info(f"Alternative ZeroSSL API response status: "
+                        f"{alt_status}")
+            response.raise_for_status()
+            eab_data = response.json()
+
+            debug_log(LOGGER, f"Alternative ZeroSSL API response data: "
+                      f"{eab_data}")
+            LOGGER.info(f"Alternative ZeroSSL API response data: {eab_data}")
+
+            if eab_data.get("success"):
+                eab_kid = eab_data.get("eab_kid")
+                eab_hmac_key = eab_data.get("eab_hmac_key")
+                LOGGER.info(
+                    "✓ Successfully obtained EAB credentials from ZeroSSL "
+                    "(alternative endpoint)"
+                )
+                kid_display = f"{eab_kid[:10]}..." if eab_kid else "None"
+                hmac_display = (f"{eab_hmac_key[:10]}..."
+                                if eab_hmac_key else "None")
+                LOGGER.info(f"EAB Kid: {kid_display}")
+                LOGGER.info(f"EAB HMAC Key: {hmac_display}")
+                return eab_kid, eab_hmac_key
+            else:
+                LOGGER.error(f"❌ ZeroSSL EAB registration failed: "
+                             f"{eab_data}")
+                return None, None
+
+    except Exception as e:
+        LOGGER.debug(format_exc())
+        LOGGER.error(f"❌ Error setting up ZeroSSL EAB credentials: {e}")
+
+        debug_log(LOGGER, "ZeroSSL EAB setup failed with exception")
+
+        # Additional troubleshooting info
+        LOGGER.error("Troubleshooting steps:")
+        LOGGER.error("1. Verify your ZeroSSL API key is valid")
+        LOGGER.error("2. Check your ZeroSSL account has ACME access enabled")
+        LOGGER.error("3. Ensure the API key has the correct permissions")
+        LOGGER.error("4. Try regenerating your ZeroSSL API key")
+
+        return None, None
+
+
+def get_caa_records(domain: str) -> Optional[List[Dict[str, str]]]:
+    # Get CAA records for a domain using the dig command.
+    # Queries DNS CAA records to check certificate authority authorization.
+    # Returns None if the dig command is not available.
+ + # Check if dig command is available + if not which("dig"): + debug_log(LOGGER, + "dig command not available for CAA record checking") + LOGGER.info("dig command not available for CAA record checking") + return None + + try: + # Use dig to query CAA records + debug_log(LOGGER, f"Querying CAA records for domain: {domain}") + debug_log(LOGGER, "Using dig command with +short flag") + LOGGER.info(f"Querying CAA records for domain: {domain}") + + result = run( + ["dig", "+short", domain, "CAA"], + capture_output=True, + text=True, + timeout=10 + ) + + debug_log(LOGGER, f"dig command return code: {result.returncode}") + debug_log(LOGGER, f"dig stdout: {result.stdout}") + debug_log(LOGGER, f"dig stderr: {result.stderr}") + + if result.returncode == 0 and result.stdout.strip(): + LOGGER.info(f"Found CAA records for domain {domain}") + caa_records: List[Dict[str, str]] = [] + raw_lines: List[str] = result.stdout.strip().split('\n') + + debug_log(LOGGER, f"Processing {len(raw_lines)} CAA record lines") + + for line in raw_lines: + line = line.strip() + if line: + debug_log(LOGGER, f"Parsing CAA record line: {line}") + + # CAA record format: flags tag "value" + # Example: 0 issue "letsencrypt.org" + parts: List[str] = line.split(' ', 2) + if len(parts) >= 3: + flags: str = parts[0] + tag: str = parts[1] + value: str = parts[2].strip('"') + caa_records.append({ + 'flags': flags, + 'tag': tag, + 'value': value + }) + + debug_log(LOGGER, + f"Parsed CAA record: flags={flags}, " + f"tag={tag}, value={value}") + + record_count: int = len(caa_records) + debug_log(LOGGER, + f"Successfully parsed {record_count} CAA records " + f"for domain {domain}") + LOGGER.info(f"Parsed {len(caa_records)} CAA records for domain " + f"{domain}") + return caa_records + else: + debug_log(LOGGER, + f"No CAA records found for domain {domain} " + f"(dig return code: {result.returncode})") + LOGGER.info( + f"No CAA records found for domain {domain} " + f"(dig return code: {result.returncode})" + ) + 
return []
+
+    except Exception as e:
+        debug_log(LOGGER, f"Error querying CAA records for {domain}: {e}")
+        LOGGER.info(f"Error querying CAA records for {domain}: {e}")
+        return None
+
+
+def check_caa_authorization(
+    domain: str,
+    ca_provider: str,
+    is_wildcard: bool = False
+) -> bool:
+    # Check if the CA provider is authorized by CAA records.
+    # Validates whether the certificate authority is permitted to issue
+    # certificates for the domain according to CAA DNS records.
+    debug_log(LOGGER,
+              f"Checking CAA authorization for domain: {domain}, "
+              f"CA: {ca_provider}, wildcard: {is_wildcard}")
+
+    LOGGER.info(
+        f"Checking CAA authorization for domain: {domain}, "
+        f"CA: {ca_provider}, wildcard: {is_wildcard}"
+    )
+
+    # Map CA providers to their CAA identifiers
+    ca_identifiers: Dict[str, List[str]] = {
+        "letsencrypt": ["letsencrypt.org"],
+        "zerossl": ["sectigo.com", "zerossl.com"]  # ZeroSSL uses Sectigo
+    }
+
+    allowed_identifiers: List[str] = ca_identifiers.get(
+        ca_provider.lower(), []
+    )
+    if not allowed_identifiers:
+        LOGGER.warning(f"Unknown CA provider for CAA check: {ca_provider}")
+        debug_log(LOGGER, "Returning True for unknown CA provider "
+                  "(conservative approach)")
+        return True  # Allow unknown providers (conservative approach)
+
+    debug_log(LOGGER, f"CA identifiers for {ca_provider}: "
+              f"{allowed_identifiers}")
+
+    # Check CAA records for the domain and its parent domains
+    # (removeprefix strips only a literal "*." prefix, unlike lstrip,
+    # which strips any leading "*" and "." characters)
+    check_domain: str = domain.removeprefix("*.")
+    domain_parts: List[str] = check_domain.split(".")
+
+    debug_log(LOGGER, f"Will check CAA records for domain chain: "
+              f"{check_domain}")
+    debug_log(LOGGER, f"Domain parts: {domain_parts}")
+    LOGGER.info(f"Will check CAA records for domain chain: {check_domain}")
+
+    for i in range(len(domain_parts)):
+        current_domain: str = ".".join(domain_parts[i:])
+
+        debug_log(LOGGER, f"Checking CAA records for: {current_domain}")
+        LOGGER.info(f"Checking CAA records for: {current_domain}")
+        caa_records: 
Optional[List[Dict[str, str]]] = get_caa_records( + current_domain + ) + + if caa_records is None: + # dig not available, skip CAA check + LOGGER.info("CAA record checking skipped (dig command not " + "available)") + debug_log(LOGGER, "Returning True due to unavailable dig command") + return True + + if caa_records: + LOGGER.info(f"Found CAA records for {current_domain}") + + # Check relevant CAA records + issue_records: List[str] = [] + issuewild_records: List[str] = [] + + for record in caa_records: + if record['tag'] == 'issue': + issue_records.append(record['value']) + elif record['tag'] == 'issuewild': + issuewild_records.append(record['value']) + + # Log found records + if issue_records: + debug_log(LOGGER, f"CAA issue records: " + f"{', '.join(str(record) for record in issue_records)}") + LOGGER.info(f"CAA issue records: " + f"{', '.join(str(record) for record in issue_records)}") + if issuewild_records: + debug_log(LOGGER, f"CAA issuewild records: " + f"{', '.join(str(record) for record in issuewild_records)}") + LOGGER.info(f"CAA issuewild records: " + f"{', '.join(str(record) for record in issuewild_records)}") + + # Check authorization based on certificate type + check_records: List[str] + record_type: str + if is_wildcard: + # For wildcard certificates, check issuewild first, + # then fall back to issue + check_records = (issuewild_records if issuewild_records + else issue_records) + record_type = ("issuewild" if issuewild_records + else "issue") + else: + # For regular certificates, check issue records + check_records = issue_records + record_type = "issue" + + debug_log(LOGGER, f"Using CAA {record_type} records for " + f"authorization check") + debug_log(LOGGER, f"Records to check: {check_records}") + LOGGER.info(f"Using CAA {record_type} records for authorization " + f"check") + + if not check_records: + debug_log(LOGGER, + f"No relevant CAA {record_type} records found for " + f"{current_domain}") + LOGGER.info( + f"No relevant CAA {record_type} 
records found for " + f"{current_domain}" + ) + continue + + # Check if any of our CA identifiers are authorized + authorized: bool = False + + identifier_list: str = ', '.join( + str(id) for id in allowed_identifiers + ) + debug_log(LOGGER, + f"Checking authorization for CA identifiers: " + f"{identifier_list}") + LOGGER.info( + f"Checking authorization for CA identifiers: " + f"{identifier_list}" + ) + for identifier in allowed_identifiers: + for record in check_records: + debug_log(LOGGER, + f"Comparing identifier '{identifier}' " + f"with record '{record}'") + + # Handle explicit deny (empty value or semicolon) + + if record == ";" or record.strip() == "": + LOGGER.warning( + f"CAA {record_type} record explicitly denies " + f"all CAs" + ) + debug_log(LOGGER, "Found explicit deny record - " + "authorization failed") + return False + + # Check for CA authorization + if identifier in record: + authorized = True + LOGGER.info( + f"✓ CA {ca_provider} ({identifier}) authorized " + f"by CAA {record_type} record" + ) + debug_log(LOGGER, + f"Authorization found: {identifier} " + f"in {record}") + break + if authorized: + break + + if not authorized: + LOGGER.error( + f"❌ CA {ca_provider} is NOT authorized by " + f"CAA {record_type} records" + ) + allowed_list: str = ', '.join( + str(record) for record in check_records + ) + identifier_list = ', '.join( + str(id) for id in allowed_identifiers + ) + LOGGER.error( + f"Domain {current_domain} CAA {record_type} allows: " + f"{allowed_list}" + ) + LOGGER.error( + f"But {ca_provider} uses: {identifier_list}" + ) + debug_log(LOGGER, "CAA authorization failed - no matching " + "identifiers") + return False + + # If we found CAA records and we're authorized, we can stop + # checking parent domains + LOGGER.info(f"✓ CAA authorization successful for {domain}") + debug_log(LOGGER, + "CAA authorization successful - stopping parent " + "domain checks") + return True + + # No CAA records found in the entire chain + LOGGER.info( + f"No 
CAA records found for {check_domain} or parent domains - " + f"any CA allowed" + ) + debug_log(LOGGER, "No CAA records found in entire domain chain - " + "allowing any CA") + return True + + +def validate_domains_for_http_challenge( + domains_list: List[str], + ca_provider: str = "letsencrypt", + is_wildcard: bool = False +) -> bool: + # Validate that all domains have valid A/AAAA records and CAA + # authorization for HTTP challenge. + # Checks DNS resolution and certificate authority authorization for each + # domain in the list to ensure HTTP challenge will succeed. + domain_count: int = len(domains_list) + domain_list: str = ', '.join(str(domain) for domain in domains_list) + debug_log(LOGGER, + f"Validating {domain_count} domains for HTTP challenge: " + f"{domain_list}") + debug_log(LOGGER, + f"CA provider: {ca_provider}, wildcard: {is_wildcard}") + LOGGER.info( + f"Validating {len(domains_list)} domains for HTTP challenge: " + f"{domain_list}" + ) + invalid_domains: List[str] = [] + caa_blocked_domains: List[str] = [] + + # Check if CAA validation should be skipped + skip_caa_check: bool = getenv("ACME_SKIP_CAA_CHECK", "no") == "yes" + + caa_status: str = 'skipped' if skip_caa_check else 'performed' + debug_log(LOGGER, f"CAA check will be {caa_status}") + + # Get external IPs once for all domain checks + external_ips: Optional[Dict[str, Optional[str]]] = get_external_ip() + if external_ips: + if external_ips.get("ipv4"): + LOGGER.info(f"Server external IPv4 address: " + f"{external_ips['ipv4']}") + if external_ips.get("ipv6"): + LOGGER.info(f"Server external IPv6 address: " + f"{external_ips['ipv6']}") + else: + LOGGER.warning( + "Could not determine server external IP - skipping IP match " + "validation" + ) + + validation_passed: int = 0 + validation_failed: int = 0 + + for domain in domains_list: + debug_log(LOGGER, f"Validating domain: {domain}") + + # Check DNS A/AAAA records with retry mechanism + if not check_domain_a_record(domain, external_ips): + 
invalid_domains.append(domain)
+            validation_failed += 1
+            debug_log(LOGGER, f"DNS validation failed for {domain}")
+            continue
+
+        # Check CAA authorization
+        if not skip_caa_check:
+            if not check_caa_authorization(domain, ca_provider, is_wildcard):
+                caa_blocked_domains.append(domain)
+                validation_failed += 1
+                debug_log(LOGGER, f"CAA authorization failed for {domain}")
+                # Do not count a CAA-blocked domain as passed
+                continue
+        else:
+            debug_log(LOGGER, f"CAA check skipped for {domain} "
+                      f"(ACME_SKIP_CAA_CHECK=yes)")
+            LOGGER.info(f"CAA check skipped for {domain} "
+                        f"(ACME_SKIP_CAA_CHECK=yes)")
+
+        validation_passed += 1
+        debug_log(LOGGER, f"Validation passed for {domain}")
+
+    debug_log(LOGGER, f"Validation summary: {validation_passed} passed, "
+              f"{validation_failed} failed")
+
+    # Report results
+    if invalid_domains:
+        invalid_list: str = ", ".join(invalid_domains)
+        LOGGER.error(
+            f"The following domains do not have valid A/AAAA records and "
+            f"cannot be used for HTTP challenge: {invalid_list}"
+        )
+        LOGGER.error(
+            "Please ensure domains resolve to this server before requesting "
+            "certificates"
+        )
+        return False
+
+    if caa_blocked_domains:
+        blocked_list: str = ", ".join(caa_blocked_domains)
+        LOGGER.error(
+            f"The following domains have CAA records that block "
+            f"{ca_provider}: {blocked_list}"
+        )
+        LOGGER.error(
+            "Please update CAA records to authorize the certificate "
+            "authority or use a different CA"
+        )
+        LOGGER.info("You can skip CAA checking by setting "
+                    "ACME_SKIP_CAA_CHECK=yes")
+        return False
+
+    valid_list: str = ", ".join(domains_list)
+    LOGGER.info(
+        f"All domains have valid DNS records and CAA authorization for "
+        f"HTTP challenge: {valid_list}"
+    )
+    return True
+
+
+def get_external_ip() -> Optional[Dict[str, Optional[str]]]:
+    # Get the external/public IP addresses of this server (both IPv4
+    # and IPv6).
+ # Queries multiple external services to determine the server's public + # IP addresses for DNS validation purposes. + debug_log(LOGGER, "Getting external IP addresses for server") + LOGGER.info("Getting external IP addresses for server") + + ipv4_services: List[str] = [ + "https://ipv4.icanhazip.com", + "https://api.ipify.org", + "https://checkip.amazonaws.com", + "https://ipv4.jsonip.com" + ] + + ipv6_services: List[str] = [ + "https://ipv6.icanhazip.com", + "https://api6.ipify.org", + "https://ipv6.jsonip.com" + ] + + external_ips: Dict[str, Optional[str]] = {"ipv4": None, "ipv6": None} + + # Try to get IPv4 address + debug_log(LOGGER, "Attempting to get external IPv4 address") + debug_log(LOGGER, f"Trying {len(ipv4_services)} IPv4 services") + LOGGER.info("Attempting to get external IPv4 address") + + for i, service in enumerate(ipv4_services): + try: + service_num: str = f"{i+1}/{len(ipv4_services)}" + debug_log(LOGGER, + f"Trying IPv4 service {service_num}: {service}") + + if "jsonip.com" in service: + # This service returns JSON format + response = get(service, timeout=5) + response.raise_for_status() + json_data: Dict[str, Any] = response.json() + ip_str: str = json_data.get("ip", "").strip() + else: + # These services return plain text IP + response = get(service, timeout=5) + response.raise_for_status() + ip_str = response.text.strip() + + debug_log(LOGGER, f"Service returned: {ip_str}") + + # Basic IPv4 validation + if ip_str and "." 
in ip_str and len(ip_str.split(".")) == 4:
+                try:
+                    # Validate it's a proper IPv4 address
+                    getaddrinfo(ip_str, None, AF_INET)
+                    ipv4_addr: str = str(ip_str)
+                    external_ips["ipv4"] = ipv4_addr
+
+                    debug_log(LOGGER,
+                              f"Successfully obtained external IPv4 "
+                              f"address: {ipv4_addr}")
+                    LOGGER.info(f"Successfully obtained external IPv4 "
+                                f"address: {ipv4_addr}")
+                    break
+                except gaierror:
+                    debug_log(LOGGER,
+                              f"Invalid IPv4 address returned: {ip_str}")
+                    continue
+        except Exception as e:
+            debug_log(LOGGER,
+                      f"Failed to get IPv4 address from {service}: {e}")
+            LOGGER.info(f"Failed to get IPv4 address from {service}: {e}")
+            continue
+
+    # Try to get IPv6 address
+    debug_log(LOGGER, "Attempting to get external IPv6 address")
+    debug_log(LOGGER, f"Trying {len(ipv6_services)} IPv6 services")
+    LOGGER.info("Attempting to get external IPv6 address")
+
+    for i, service in enumerate(ipv6_services):
+        try:
+            service_num = f"{i+1}/{len(ipv6_services)}"
+            debug_log(LOGGER,
+                      f"Trying IPv6 service {service_num}: {service}")
+
+            if "jsonip.com" in service:
+                # This service returns JSON format
+                response = get(service, timeout=5)
+                response.raise_for_status()
+                json_data = response.json()
+                ip_str = json_data.get("ip", "").strip()
+            else:
+                # These services return a plain text IP
+                response = get(service, timeout=5)
+                response.raise_for_status()
+                ip_str = response.text.strip()
+
+            debug_log(LOGGER, f"Service returned: {ip_str}")
+
+            # Basic IPv6 validation
+            if ip_str and ":" in ip_str:
+                try:
+                    # Validate it's a proper IPv6 address
+                    getaddrinfo(ip_str, None, AF_INET6)
+                    ipv6_addr: str = str(ip_str)
+                    external_ips["ipv6"] = ipv6_addr
+
+                    debug_log(LOGGER,
+                              f"Successfully obtained external IPv6 "
+                              f"address: {ipv6_addr}")
+                    LOGGER.info(f"Successfully obtained external IPv6 "
+                                f"address: {ipv6_addr}")
+                    break
+                except gaierror:
+                    debug_log(LOGGER,
+                              f"Invalid IPv6 address returned: {ip_str}")
+                    continue
+        except Exception as e:
+            debug_log(LOGGER,
+                      f"Failed to get IPv6 address from {service}: {e}")
+            LOGGER.info(f"Failed to get IPv6 address from {service}: {e}")
+            continue
+
+    if not external_ips["ipv4"] and not external_ips["ipv6"]:
+        LOGGER.warning(
+            "Could not determine external IP address (IPv4 or IPv6) from "
+            "any service"
+        )
+        debug_log(LOGGER, "All external IP services failed")
+        return None
+
+    ipv4_status: str = external_ips["ipv4"] or "not found"
+    ipv6_status: str = external_ips["ipv6"] or "not found"
+    LOGGER.info(
+        f"External IP detection completed - "
+        f"IPv4: {ipv4_status}, IPv6: {ipv6_status}"
+    )
+    return external_ips
+
+
+def check_domain_a_record(
+    domain: str,
+    external_ips: Optional[Dict[str, Optional[str]]] = None
+) -> bool:
+    # Check if the domain has valid A/AAAA records for the HTTP challenge.
+    # Validates DNS resolution and optionally checks if the domain's
+    # IP addresses match the server's external IPs.
+    debug_log(LOGGER, f"Checking DNS A/AAAA records for domain: {domain}")
+    LOGGER.info(f"Checking DNS A/AAAA records for domain: {domain}")
+
+    # Remove the wildcard prefix if present (removeprefix strips only a
+    # literal "*." prefix, unlike lstrip)
+    check_domain: str = domain.removeprefix("*.")
+
+    try:
+        debug_log(LOGGER, f"Checking domain after wildcard removal: "
+                  f"{check_domain}")
+
+        # Attempt to resolve the domain to IP addresses
+        result: List[Tuple[Any, ...]] = getaddrinfo(check_domain, None)
+        if result:
+            ipv4_addresses: List[str] = [
+                addr[4][0] for addr in result if addr[0] == AF_INET
+            ]
+            ipv6_addresses: List[str] = [
+                addr[4][0] for addr in result if addr[0] == AF_INET6
+            ]
+
+            debug_log(LOGGER, "DNS resolution results:")
+            debug_log(LOGGER, f"  IPv4 addresses: {ipv4_addresses}")
+            debug_log(LOGGER, f"  IPv6 addresses: {ipv6_addresses}")
+
+            if not ipv4_addresses and not ipv6_addresses:
+                LOGGER.warning(f"Domain {check_domain} has no A or AAAA "
+                               f"records")
+                debug_log(LOGGER, "No valid IP addresses found in DNS "
+                          "resolution")
+                return False
+
+            # Log found addresses
+            if ipv4_addresses:
+                ipv4_display: str = ', '.join(
+                    str(addr) for addr in 
ipv4_addresses[:3] + ) + debug_log(LOGGER, + f"Domain {check_domain} IPv4 A records: " + f"{ipv4_display}") + LOGGER.info( + f"Domain {check_domain} IPv4 A records: {ipv4_display}" + ) + if ipv6_addresses: + ipv6_display: str = ', '.join( + str(addr) for addr in ipv6_addresses[:3] + ) + debug_log(LOGGER, + f"Domain {check_domain} IPv6 AAAA records: " + f"{ipv6_display}") + LOGGER.info( + f"Domain {check_domain} IPv6 AAAA records: " + f"{ipv6_display}" + ) + + # Check if any record matches the external IPs + if external_ips: + ipv4_match: bool = False + ipv6_match: bool = False + + debug_log(LOGGER, + "Checking IP address matches with server " + "external IPs") + + # Check IPv4 match + if external_ips.get("ipv4") and ipv4_addresses: + if external_ips["ipv4"] in ipv4_addresses: + external_ipv4: str = external_ips['ipv4'] + LOGGER.info( + f"✓ Domain {check_domain} IPv4 A record matches " + f"server external IP ({external_ipv4})" + ) + ipv4_match = True + else: + LOGGER.warning( + f"⚠ Domain {check_domain} IPv4 A record does " + "not match server external IP" + ) + ipv4_list: str = ', '.join( + str(addr) for addr in ipv4_addresses + ) + LOGGER.warning(f" Domain IPv4: {ipv4_list}") + LOGGER.warning(f" Server IPv4: " + f"{external_ips['ipv4']}") + + # Check IPv6 match + if external_ips.get("ipv6") and ipv6_addresses: + if external_ips["ipv6"] in ipv6_addresses: + external_ipv6: str = external_ips['ipv6'] + LOGGER.info( + f"✓ Domain {check_domain} IPv6 AAAA record " + f"matches server external IP ({external_ipv6})" + ) + ipv6_match = True + else: + LOGGER.warning( + f"⚠ Domain {check_domain} IPv6 AAAA record does " + "not match server external IP" + ) + ipv6_list: str = ', '.join( + str(addr) for addr in ipv6_addresses + ) + LOGGER.warning(f" Domain IPv6: {ipv6_list}") + LOGGER.warning(f" Server IPv6: " + f"{external_ips['ipv6']}") + + # Determine if we have any matching records + has_any_match: bool = ipv4_match or ipv6_match + has_external_ip: bool = bool( + 
external_ips.get("ipv4") or external_ips.get("ipv6")
+                )
+
+                debug_log(LOGGER, f"IP match results: IPv4={ipv4_match}, "
+                          f"IPv6={ipv6_match}")
+                debug_log(LOGGER, f"Has external IP: {has_external_ip}, "
+                          f"Has match: {has_any_match}")
+
+                if has_external_ip and not has_any_match:
+                    LOGGER.warning(
+                        f"⚠ Domain {check_domain} records do not match "
+                        "any server external IP"
+                    )
+                    LOGGER.warning(
+                        "  HTTP challenge may fail - ensure domain points "
+                        "to this server"
+                    )
+
+                    # Check if we should treat this as an error
+                    strict_ip_check: bool = (
+                        getenv("ACME_HTTP_STRICT_IP_CHECK", "no") == "yes"
+                    )
+                    if strict_ip_check:
+                        LOGGER.error(
+                            f"Strict IP check enabled - rejecting "
+                            f"certificate request for {check_domain}"
+                        )
+                        debug_log(LOGGER, "Strict IP check failed - "
+                                  "returning False")
+                        return False
+
+            LOGGER.info(f"✓ Domain {check_domain} DNS validation passed")
+            debug_log(LOGGER, "DNS validation completed successfully")
+            return True
+        else:
+            debug_log(LOGGER,
+                      f"Domain {check_domain} validation failed - no DNS "
+                      f"resolution")
+            LOGGER.info(f"Domain {check_domain} validation failed - no DNS "
+                        f"resolution")
+            LOGGER.warning(f"Domain {check_domain} does not resolve")
+            return False
+
+    except gaierror as e:
+        debug_log(LOGGER, f"Domain {check_domain} DNS resolution failed "
+                  f"(gaierror): {e}")
+        LOGGER.info(f"Domain {check_domain} DNS resolution failed "
+                    f"(gaierror): {e}")
+        LOGGER.warning(f"DNS resolution failed for domain {check_domain}: "
+                       f"{e}")
+        return False
+    except Exception as e:
+        LOGGER.info(format_exc())
+        LOGGER.error(f"Error checking DNS records for domain "
+                     f"{check_domain}: {e}")
+        debug_log(LOGGER, "DNS check failed with unexpected exception")
+        return False
+
+
 def certbot_new_with_retry(
     challenge_type: Literal["dns", "http"],
     domains: str,
     email: str,
-    provider: str = None,
-    credentials_path: Union[str, Path] = None,
+    provider: Optional[str] = None,
+    credentials_path: Optional[Union[str, Path]] = None,
     propagation: str 
= "default", profile: str = "classic", staging: bool = False, force: bool = False, - cmd_env: Dict[str, str] = None, + cmd_env: Optional[Dict[str, str]] = None, max_retries: int = 0, + ca_provider: str = "letsencrypt", + api_key: Optional[str] = None, + server_name: Optional[str] = None, ) -> int: - """Execute certbot with retry mechanism.""" - attempt = 1 - while attempt <= max_retries + 1: # +1 for the initial attempt + # Execute certbot with retry mechanism. + # Wrapper around certbot_new that implements automatic retries with + # exponential backoff for failed certificate generation attempts. + debug_log(LOGGER, f"Starting certbot with retry for domains: {domains}") + debug_log(LOGGER, f"Max retries: {max_retries}, CA: {ca_provider}") + debug_log(LOGGER, f"Challenge: {challenge_type}, Provider: {provider}") + + attempt: int = 1 + result: int = 1 # Initialize result + while attempt <= max_retries + 1: if attempt > 1: - LOGGER.warning(f"Certificate generation failed, retrying... (attempt {attempt}/{max_retries + 1})") - # Wait before retrying (exponential backoff: 30s, 60s, 120s...) - wait_time = min(30 * (2 ** (attempt - 2)), 300) # Cap at 5 minutes + LOGGER.warning( + f"Certificate generation failed, retrying... 
" + f"(attempt {attempt}/{max_retries + 1})" + ) + wait_time: int = min(30 * (2 ** (attempt - 2)), 300) + + debug_log(LOGGER, + f"Waiting {wait_time} seconds before retry...") + debug_log(LOGGER, f"Exponential backoff: base=30s, " + f"attempt={attempt}") LOGGER.info(f"Waiting {wait_time} seconds before retry...") sleep(wait_time) - result = certbot_new( + debug_log(LOGGER, f"Executing certbot attempt {attempt}") + + certbot_result: int = certbot_new( challenge_type, domains, email, @@ -169,18 +1172,28 @@ def certbot_new_with_retry( profile, staging, force, - cmd_env, + cmd_env or {}, + ca_provider, + api_key, + server_name, ) - if result == 0: + if certbot_result == 0: if attempt > 1: - LOGGER.info(f"Certificate generation succeeded on attempt {attempt}") - return result + LOGGER.info(f"Certificate generation succeeded on attempt " + f"{attempt}") + debug_log(LOGGER, "Certbot completed successfully") + return certbot_result if attempt >= max_retries + 1: - LOGGER.error(f"Certificate generation failed after {max_retries + 1} attempts") - return result + LOGGER.error(f"Certificate generation failed after " + f"{max_retries + 1} attempts") + debug_log(LOGGER, "Maximum retries reached - giving up") + return certbot_result + + result = certbot_result # Update the outer result + debug_log(LOGGER, f"Attempt {attempt} failed, will retry") attempt += 1 return result @@ -190,23 +1203,37 @@ def certbot_new( challenge_type: Literal["dns", "http"], domains: str, email: str, - provider: str = None, - credentials_path: Union[str, Path] = None, + provider: Optional[str] = None, + credentials_path: Optional[Any] = None, propagation: str = "default", profile: str = "classic", staging: bool = False, force: bool = False, - cmd_env: Dict[str, str] = None, + cmd_env: Optional[Dict[str, str]] = None, + ca_provider: str = "letsencrypt", + api_key: Optional[str] = None, + server_name: Optional[str] = None, ) -> int: + # Generate new certificate using certbot. 
+ # Main function to request SSL/TLS certificates from a certificate + # authority using the ACME protocol via certbot. if isinstance(credentials_path, str): credentials_path = Path(credentials_path) - # * Building the certbot command - command = [ + ca_config: Dict[str, str] = get_certificate_authority_config( + ca_provider, staging + ) + + debug_log(LOGGER, f"Building certbot command for {domains}") + debug_log(LOGGER, f"CA config: {ca_config}") + debug_log(LOGGER, f"Challenge type: {challenge_type}") + debug_log(LOGGER, f"Profile: {profile}") + + command: List[str] = [ CERTBOT_BIN, "certonly", "--config-dir", - DATA_PATH.as_posix(), + str(DATA_PATH), "--work-dir", WORK_DIR, "--logs-dir", @@ -219,146 +1246,308 @@ def certbot_new( "--agree-tos", "--expand", f"--preferred-profile={profile}", + "--server", + ca_config["server"], ] - if not cmd_env: + # Ensure we have a valid environment dictionary to work with + if cmd_env is None: cmd_env = {} + + # Create a properly typed working environment dictionary + working_env: Dict[str, str] = {} + working_env.update(cmd_env) # Copy existing values if any + + # Handle certificate key type based on DNS provider and CA + if challenge_type == "dns" and provider in ("infomaniak", "ionos"): + # Infomaniak and IONOS require RSA certificates with 4096-bit keys + command.extend(["--rsa-key-size", "4096"]) + + debug_log(LOGGER, f"Using RSA-4096 for {provider} provider with " + f"{domains}") + LOGGER.info(f"Using RSA-4096 for {provider} provider with {domains}") + else: + # Use elliptic curve certificates for all other providers + if ca_provider.lower() == "zerossl": + # Use P-384 elliptic curve for ZeroSSL certificates + command.extend(["--elliptic-curve", "secp384r1"]) + + debug_log(LOGGER, f"Using ZeroSSL P-384 curve for {domains}") + LOGGER.info(f"Using ZeroSSL P-384 curve for {domains}") + else: + # Use P-256 elliptic curve for Let's Encrypt certificates + command.extend(["--elliptic-curve", "secp256r1"]) + + debug_log(LOGGER, + 
f"Using Let's Encrypt P-256 curve for {domains}") + LOGGER.info(f"Using Let's Encrypt P-256 curve for {domains}") + + # Handle ZeroSSL EAB credentials + if ca_provider.lower() == "zerossl": + debug_log(LOGGER, f"ZeroSSL detected as CA provider for {domains}") + LOGGER.info(f"ZeroSSL detected as CA provider for {domains}") + + # Check for manually provided EAB credentials first + eab_kid_env: str = ( + getenv("ACME_ZEROSSL_EAB_KID", "") or + (getenv(f"{server_name}_ACME_ZEROSSL_EAB_KID", "") + if server_name else "") + ) + eab_hmac_env: str = ( + getenv("ACME_ZEROSSL_EAB_HMAC_KEY", "") or + (getenv(f"{server_name}_ACME_ZEROSSL_EAB_HMAC_KEY", "") + if server_name else "") + ) + + debug_log(LOGGER, "Manual EAB credentials check:") + debug_log(LOGGER, f" EAB KID provided: {bool(eab_kid_env)}") + debug_log(LOGGER, f" EAB HMAC provided: {bool(eab_hmac_env)}") + + if eab_kid_env and eab_hmac_env: + LOGGER.info("✓ Using manually provided ZeroSSL EAB credentials " + "from environment") + command.extend(["--eab-kid", eab_kid_env, "--eab-hmac-key", + eab_hmac_env]) + LOGGER.info(f"✓ Using ZeroSSL EAB credentials for {domains}") + LOGGER.info(f"EAB Kid: {eab_kid_env[:10]}...") + elif api_key: + debug_log(LOGGER, f"ZeroSSL API key provided, setting up EAB " + f"credentials") + LOGGER.info(f"ZeroSSL API key provided, setting up EAB " + f"credentials") + eab_kid: Optional[str] + eab_hmac: Optional[str] + eab_kid, eab_hmac = setup_zerossl_eab_credentials(email, api_key) + if eab_kid and eab_hmac: + command.extend(["--eab-kid", eab_kid, "--eab-hmac-key", + eab_hmac]) + LOGGER.info(f"✓ Using ZeroSSL EAB credentials for {domains}") + LOGGER.info(f"EAB Kid: {eab_kid[:10]}...") + else: + LOGGER.error("❌ Failed to obtain ZeroSSL EAB credentials") + LOGGER.error( + "Alternative: Set ACME_ZEROSSL_EAB_KID and " + "ACME_ZEROSSL_EAB_HMAC_KEY environment variables" + ) + LOGGER.warning("Proceeding without EAB - this will likely " + "fail") + else: + LOGGER.error("❌ No ZeroSSL API key 
provided!") + LOGGER.error("Set ACME_ZEROSSL_API_KEY environment variable") + LOGGER.error( + "Or set ACME_ZEROSSL_EAB_KID and ACME_ZEROSSL_EAB_HMAC_KEY " + "directly" + ) + LOGGER.warning("Proceeding without EAB - this will likely fail") if challenge_type == "dns": - # * Adding DNS challenge hooks command.append("--preferred-challenges=dns") - # * Adding the propagation time to the command + debug_log(LOGGER, "DNS challenge configuration:") + debug_log(LOGGER, f" Provider: {provider}") + debug_log(LOGGER, f" Propagation: {propagation}") + debug_log(LOGGER, f" Credentials path: {credentials_path}") + if propagation != "default": if not propagation.isdigit(): - LOGGER.warning(f"Invalid propagation time : {propagation}, using provider's default...") + LOGGER.warning( + f"Invalid propagation time: {propagation}, " + "using provider's default..." + ) else: - command.extend([f"--dns-{provider}-propagation-seconds", propagation]) + command.extend([f"--dns-{provider}-propagation-seconds", + propagation]) + debug_log(LOGGER, f"Set DNS propagation time to " + f"{propagation} seconds") - # * Adding the credentials to the command if provider == "route53": - # ? 
Route53 credentials are different from the others, we need to add them to the environment - with credentials_path.open("r") as file: - for line in file: - key, value = line.strip().split("=", 1) - cmd_env[key] = value + debug_log(LOGGER, "Route53 provider - setting environment " + "variables") + if credentials_path: + with open(credentials_path, "r") as file: + for line in file: + if '=' in line: + key, value = line.strip().split("=", 1) + # Explicit type-safe assignment + env_key: str = str(key) + env_value: str = str(value) + working_env[env_key] = env_value else: - command.extend([f"--dns-{provider}-credentials", credentials_path.as_posix()]) - - # * Adding the RSA key size argument like in the infomaniak plugin documentation - if provider in ("infomaniak", "ionos"): - command.extend(["--rsa-key-size", "4096"]) + if credentials_path: + command.extend([f"--dns-{provider}-credentials", + str(credentials_path)]) - # * Adding plugin argument - if provider in ("desec", "infomaniak", "ionos", "njalla", "scaleway"): - # ? 
Desec, Infomaniak, IONOS, Njalla and Scaleway plugins use different arguments
+        if provider in ("desec", "infomaniak", "ionos", "njalla",
+                        "scaleway"):
             command.extend(["--authenticator", f"dns-{provider}"])
+            debug_log(LOGGER, f"Using explicit authenticator for {provider}")
         else:
             command.append(f"--dns-{provider}")
     elif challenge_type == "http":
-        # * Adding HTTP challenge hooks
+        auth_hook: Any = JOBS_PATH.joinpath('certbot-auth.py')
+        cleanup_hook: Any = JOBS_PATH.joinpath('certbot-cleanup.py')
+        debug_log(LOGGER, "HTTP challenge configuration:")
+        debug_log(LOGGER, f"  Auth hook: {auth_hook}")
+        debug_log(LOGGER, f"  Cleanup hook: {cleanup_hook}")
+
         command.extend(
             [
                 "--manual",
                 "--preferred-challenges=http",
                 "--manual-auth-hook",
-                JOBS_PATH.joinpath("certbot-auth.py").as_posix(),
+                str(auth_hook),
                 "--manual-cleanup-hook",
-                JOBS_PATH.joinpath("certbot-cleanup.py").as_posix(),
+                str(cleanup_hook),
             ]
         )

-    if staging:
-        command.append("--staging")
-
     if force:
         command.append("--force-renewal")
+        debug_log(LOGGER, "Force renewal enabled")

-    if getenv("CUSTOM_LOG_LEVEL", getenv("LOG_LEVEL", "INFO")).upper() == "DEBUG":
+    log_level: str = getenv("CUSTOM_LOG_LEVEL",
+                            getenv("LOG_LEVEL", "INFO"))
+    if log_level.upper() == "DEBUG":
         command.append("-v")
+        debug_log(LOGGER, "Verbose mode enabled for certbot")
+
+    LOGGER.info(f"Executing certbot command for {domains}")
+    # Show command but mask sensitive EAB values for security
+    safe_command: List[str] = []
+    mask_next: bool = False
+    for item in command:
+        if mask_next:
+            safe_command.append("***MASKED***")
+            mask_next = False
+        elif item in ["--eab-kid", "--eab-hmac-key"]:
+            safe_command.append(item)
+            mask_next = True
+        else:
+            safe_command.append(item)
+
+    debug_log(LOGGER, f"Command: {' '.join(safe_command)}")
+    debug_log(LOGGER, f"Environment variables: {len(working_env)} items")
+    for key in working_env.keys():
+        is_sensitive: bool = any(
+            sensitive in key.lower()
+            for sensitive in ['key', 'secret', 'token']
+        )
+        value_display: str = '***MASKED***' if is_sensitive else 'set'
+        debug_log(LOGGER, f"  {key}: {value_display}")
+    LOGGER.info(f"Command: {' '.join(safe_command)}")
+
+    current_date: datetime = datetime.now()
+    debug_log(LOGGER, "Starting certbot process")
+
+    process: Popen[str] = Popen(
+        command, stdin=DEVNULL, stderr=PIPE,
+        universal_newlines=True, env=working_env
+    )

-    current_date = datetime.now()
-    process = Popen(command, stdin=DEVNULL, stderr=PIPE, universal_newlines=True, env=cmd_env)
-
+    lines_processed: int = 0
     while process.poll() is None:
         if process.stderr:
             rlist, _, _ = select([process.stderr], [], [], 2)
             if rlist:
                 for line in process.stderr:
                     LOGGER_CERTBOT.info(line.strip())
+                    lines_processed += 1
                     break

         if datetime.now() - current_date > timedelta(seconds=5):
+            challenge_info: str = (
+                " (this may take a while depending on the provider)"
+                if challenge_type == "dns" else ""
+            )
             LOGGER.info(
-                "⏳ Still generating certificate(s)" + (" (this may take a while depending on the provider)" if challenge_type == "dns" else "") + "..."
+                f"⏳ Still generating {ca_config['name']} certificate(s)"
+                f"{challenge_info}..."
             )
             current_date = datetime.now()
+
+            debug_log(LOGGER, f"Certbot still running, processed "
+                      f"{lines_processed} output lines")

-    return process.returncode
+    final_return_code: Optional[int] = process.returncode
+    if final_return_code is None:
+        final_return_code = 1
+
+    debug_log(LOGGER, f"Certbot process completed with return code: "
+              f"{final_return_code}")
+    debug_log(LOGGER, f"Total output lines processed: {lines_processed}")
+    return final_return_code


-IS_MULTISITE = getenv("MULTISITE", "no") == "yes"
+
+# Global configuration and setup
+IS_MULTISITE: bool = getenv("MULTISITE", "no") == "yes"

 try:
-    servers = getenv("SERVER_NAME", "www.example.com").lower() or []
+    # Main execution block for certificate generation
+    servers_env: str = getenv("SERVER_NAME", "www.example.com").lower() or ""
+    servers: List[str] = servers_env.split(" ") if servers_env else []

-    if isinstance(servers, str):
-        servers = servers.split(" ")
+    debug_log(LOGGER, "Server configuration detected:")
+    debug_log(LOGGER, f"  Multisite mode: {IS_MULTISITE}")
+    debug_log(LOGGER, f"  Server count: {len(servers)}")
+    debug_log(LOGGER, f"  Servers: {servers}")

     if not servers:
         LOGGER.warning("There are no server names, skipping generation...")
         sys_exit(0)

-    use_letsencrypt = False
-    use_letsencrypt_dns = False
+    use_letsencrypt: bool = False
+    use_letsencrypt_dns: bool = False
+    domains_server_names: Dict[str, str]

     if not IS_MULTISITE:
         use_letsencrypt = getenv("AUTO_LETS_ENCRYPT", "no") == "yes"
-        use_letsencrypt_dns = getenv("LETS_ENCRYPT_CHALLENGE", "http") == "dns"
-        domains_server_names = {servers[0]: " ".join(servers).lower()}
+        use_letsencrypt_dns = (
+            getenv("LETS_ENCRYPT_CHALLENGE", "http") == "dns"
+        )
+        all_servers: str = " ".join(servers).lower()
+        domains_server_names = {servers[0]: all_servers}
+
+        debug_log(LOGGER, "Single-site configuration:")
+        debug_log(LOGGER, f"  Let's Encrypt enabled: {use_letsencrypt}")
+        debug_log(LOGGER, f"  DNS challenge: {use_letsencrypt_dns}")
     else:
         domains_server_names = {}

         for first_server in servers:
-            if first_server and getenv(f"{first_server}_AUTO_LETS_ENCRYPT", "no") == "yes":
+            auto_le_env: str = f"{first_server}_AUTO_LETS_ENCRYPT"
+            if (first_server and
+                    getenv(auto_le_env, "no") == "yes"):
                 use_letsencrypt = True

-            if first_server and getenv(f"{first_server}_LETS_ENCRYPT_CHALLENGE", "http") == "dns":
+            challenge_env: str = f"{first_server}_LETS_ENCRYPT_CHALLENGE"
+            if (first_server and
+                    getenv(challenge_env, "http") == "dns"):
                 use_letsencrypt_dns = True

-            domains_server_names[first_server] = getenv(f"{first_server}_SERVER_NAME", first_server).lower()
+            server_name_env: str = f"{first_server}_SERVER_NAME"
+            domains_server_names[first_server] = getenv(
+                server_name_env, first_server
+            ).lower()
+
+        debug_log(LOGGER, "Multi-site configuration:")
+        debug_log(LOGGER, f"  Let's Encrypt enabled anywhere: "
+                  f"{use_letsencrypt}")
+        debug_log(LOGGER, f"  DNS challenge used anywhere: "
+                  f"{use_letsencrypt_dns}")
+        debug_log(LOGGER, f"  Domain mappings: {domains_server_names}")

     if not use_letsencrypt:
         LOGGER.info("Let's Encrypt is not activated, skipping generation...")
         sys_exit(0)

-    provider_classes = {}
+    provider_classes: Dict[str, Any] = {}

     if use_letsencrypt_dns:
-        provider_classes: Dict[
-            str,
-            Union[
-                Type[CloudflareProvider],
-                Type[DesecProvider],
-                Type[DigitalOceanProvider],
-                Type[DnsimpleProvider],
-                Type[DnsMadeEasyProvider],
-                Type[GehirnProvider],
-                Type[GoogleProvider],
-                Type[InfomaniakProvider],
-                Type[IonosProvider],
-                Type[LinodeProvider],
-                Type[LuaDnsProvider],
-                Type[NjallaProvider],
-                Type[NSOneProvider],
-                Type[OvhProvider],
-                Type[Rfc2136Provider],
-                Type[Route53Provider],
-                Type[SakuraCloudProvider],
-                Type[ScalewayProvider],
-            ],
-        ] = {
+        debug_log(LOGGER, "DNS challenge detected - loading provider classes")
+
+        provider_classes = {
             "cloudflare": CloudflareProvider,
             "desec": DesecProvider,
             "digitalocean": DigitalOceanProvider,
@@ -379,27 +1568,40 @@ def certbot_new(
             "scaleway": ScalewayProvider,
         }

-    JOB = Job(LOGGER, __file__)
+    JOB: Any = Job(LOGGER, __file__)

-    # ? Restore data from db cache of certbot-renew job
+    # Restore data from db cache of certbot-renew job
+    debug_log(LOGGER, "Restoring certificate data from database cache")
     JOB.restore_cache(job_name="certbot-renew")

-    env = {
-        "PATH": getenv("PATH", ""),
-        "PYTHONPATH": getenv("PYTHONPATH", ""),
-        "RELOAD_MIN_TIMEOUT": getenv("RELOAD_MIN_TIMEOUT", "5"),
-        "DISABLE_CONFIGURATION_TESTING": getenv("DISABLE_CONFIGURATION_TESTING", "no").lower(),
+    # Initialize environment variables for certbot execution
+    env: Dict[str, str] = {
+        "PATH": getenv("PATH") or "",
+        "PYTHONPATH": getenv("PYTHONPATH") or "",
+        "RELOAD_MIN_TIMEOUT": getenv("RELOAD_MIN_TIMEOUT") or "5",
+        "DISABLE_CONFIGURATION_TESTING": (
+            getenv("DISABLE_CONFIGURATION_TESTING") or "no"
+        ).lower(),
     }
-    env["PYTHONPATH"] = env["PYTHONPATH"] + (f":{DEPS_PATH}" if DEPS_PATH not in env["PYTHONPATH"] else "")
-    if getenv("DATABASE_URI"):
-        env["DATABASE_URI"] = getenv("DATABASE_URI")
-
+
+    env["PYTHONPATH"] = env["PYTHONPATH"] + (
+        f":{DEPS_PATH}" if DEPS_PATH not in env["PYTHONPATH"] else ""
+    )
+    database_uri: Optional[str] = getenv("DATABASE_URI")
+    if database_uri:  # Only assign if not None and not empty
+        # Explicit assignment with type safety
+        env_key: str = "DATABASE_URI"
+        env_value: str = str(database_uri)
+        env[env_key] = env_value
+
+    debug_log(LOGGER, "Checking existing certificates")
+
     proc = run(
         [
             CERTBOT_BIN,
             "certificates",
             "--config-dir",
-            DATA_PATH.as_posix(),
+            str(DATA_PATH),
             "--work-dir",
             WORK_DIR,
             "--logs-dir",
@@ -412,39 +1614,72 @@ def certbot_new(
         env=env,
         check=False,
     )
-    stdout = proc.stdout
+    stdout: str = proc.stdout or ""

-    WILDCARD_GENERATOR = WildcardGenerator()
-    credential_paths = set()
-    generated_domains = set()
-    domains_to_ask = {}
-    active_cert_names = set()  # Track ALL active certificate names, not just processed ones
+    WILDCARD_GENERATOR: Any = WildcardGenerator()
+    credential_paths: Set[Any] = set()
+    generated_domains: Set[str] = set()
+    domains_to_ask: Dict[str, int] = {}
+    active_cert_names: Set[str] = set()

     if proc.returncode != 0:
-        LOGGER.error(f"Error while checking certificates :\n{proc.stdout}")
+        LOGGER.error(f"Error while checking certificates:\n{proc.stdout}")
+        debug_log(LOGGER, "Certificate listing failed - proceeding anyway")
     else:
-        certificate_blocks = stdout.split("Certificate Name: ")[1:]
+        debug_log(LOGGER, "Certificate listing successful - analyzing "
                  "existing certificates")
+
+        certificate_blocks: List[str] = stdout.split("Certificate Name: ")[1:]
+
+        debug_log(LOGGER, f"Found {len(certificate_blocks)} existing "
+                  f"certificates")
+
         for first_server, domains in domains_server_names.items():
-            if (getenv(f"{first_server}_AUTO_LETS_ENCRYPT", "no") if IS_MULTISITE else getenv("AUTO_LETS_ENCRYPT", "no")) != "yes":
+            auto_le_check: str = (
+                getenv(f"{first_server}_AUTO_LETS_ENCRYPT", "no")
+                if IS_MULTISITE
+                else getenv("AUTO_LETS_ENCRYPT", "no")
+            )
+            if auto_le_check != "yes":
                 continue

-            letsencrypt_challenge = getenv(f"{first_server}_LETS_ENCRYPT_CHALLENGE", "http") if IS_MULTISITE else getenv("LETS_ENCRYPT_CHALLENGE", "http")
-            original_first_server = deepcopy(first_server)
+            challenge_check: str = (
+                getenv(f"{first_server}_LETS_ENCRYPT_CHALLENGE", "http")
+                if IS_MULTISITE
+                else getenv("LETS_ENCRYPT_CHALLENGE", "http")
+            )
+            original_first_server: str = deepcopy(first_server)
+
+            debug_log(LOGGER, f"Processing server: {first_server}")
+            debug_log(LOGGER, f"  Challenge: {challenge_check}")
+            debug_log(LOGGER, f"  Domains: {domains}")

-            if (
-                letsencrypt_challenge == "dns"
-                and (getenv(f"{original_first_server}_USE_LETS_ENCRYPT_WILDCARD", "no") if IS_MULTISITE else getenv("USE_LETS_ENCRYPT_WILDCARD", "no")) == "yes"
-            ):
-                wildcards = WILDCARD_GENERATOR.extract_wildcards_from_domains((first_server,))
-                first_server = wildcards[0].lstrip("*.")
-                domains = set(wildcards)
+            wildcard_check: str = (
+                getenv(f"{original_first_server}_USE_LETS_ENCRYPT_WILDCARD",
+                       "no")
+                if IS_MULTISITE
+                else getenv("USE_LETS_ENCRYPT_WILDCARD", "no")
+            )
+
+            domains_set: Set[str]
+            if (challenge_check == "dns" and wildcard_check == "yes"):
+                debug_log(LOGGER, f"Using wildcard mode for {first_server}")
+
+                wildcard_domains: List[str] = (
+                    WILDCARD_GENERATOR.extract_wildcards_from_domains(
+                        (first_server,)
+                    )
+                )
+                first_server = wildcard_domains[0].lstrip("*.")
+                domains_set = set(wildcard_domains)
             else:
-                domains = set(domains.split(" "))
+                domains_set = set(str(domains).split(" "))

-            # Add the certificate name to our active set regardless if we're generating it or not
+            # Add the certificate name to our active set regardless
+            # if we're generating it or not
             active_cert_names.add(first_server)

-            certificate_block = None
+            certificate_block: Optional[str] = None
             for block in certificate_blocks:
                 if block.startswith(f"{first_server}\n"):
                     certificate_block = block
@@ -452,50 +1687,153 @@ def certbot_new(

             if not certificate_block:
                 domains_to_ask[first_server] = 1
-                LOGGER.warning(f"[{original_first_server}] Certificate block for {first_server} not found, asking new certificate...")
+                LOGGER.warning(
+                    f"[{original_first_server}] Certificate block for "
+                    f"{first_server} not found, asking new certificate..."
+                )
                 continue

+            # Validating the credentials
             try:
-                cert_domains = search(r"Domains: (?P<domains>.*)\n\s*Expiry Date: (?P<expiry_date>.*)\n", certificate_block, MULTILINE)
+                cert_domains_match = search(
+                    r"Domains: (?P<domains>.*)\n\s*Expiry Date: "
+                    r"(?P<expiry_date>.*)\n",
+                    certificate_block,
+                    MULTILINE
+                )
             except BaseException as e:
                 LOGGER.debug(format_exc())
-                LOGGER.error(f"[{original_first_server}] Error while parsing certificate block: {e}")
+                LOGGER.error(
+                    f"[{original_first_server}] Error while parsing "
+                    f"certificate block: {e}"
+                )
                 continue

-            if not cert_domains:
-                LOGGER.error(f"[{original_first_server}] Failed to parse domains and expiry date from certificate block.")
+            if not cert_domains_match:
+                LOGGER.error(
+                    f"[{original_first_server}] Failed to parse domains "
+                    "and expiry date from certificate block."
+                )
                 continue

-            cert_domains_list = cert_domains.group("domains").strip().split()
-            cert_domains_set = set(cert_domains_list)
-            desired_domains_set = set(domains) if isinstance(domains, (list, set)) else set(domains.split())
+            cert_domains_list: List[str] = (
+                cert_domains_match.group("domains").strip().split()
+            )
+            cert_domains_set: Set[str] = set(cert_domains_list)
+            desired_domains_set: Set[str] = (
+                set(domains_set) if isinstance(domains_set, (list, set))
+                else set(str(domains_set).split())
+            )
+
+            debug_log(LOGGER, f"Certificate domain comparison for "
+                      f"{first_server}:")
+            debug_log(LOGGER,
+                      f"  Existing: {sorted(str(d) for d in cert_domains_set)}")
+            debug_log(LOGGER,
+                      f"  Desired: {sorted(str(d) for d in desired_domains_set)}")

             if cert_domains_set != desired_domains_set:
                 domains_to_ask[first_server] = 2
+                existing_sorted: List[str] = sorted(
+                    str(d) for d in cert_domains_set
+                )
+                desired_sorted: List[str] = sorted(
+                    str(d) for d in desired_domains_set
+                )
                 LOGGER.warning(
-                    f"[{original_first_server}] Domains for {first_server} differ from desired set (existing: {sorted(cert_domains_set)}, desired: {sorted(desired_domains_set)}), asking new certificate..."
+                    f"[{original_first_server}] Domains for {first_server} "
+                    f"differ from desired set (existing: {existing_sorted}, "
+                    f"desired: {desired_sorted}), asking new certificate..."
                 )
                 continue

-            use_letsencrypt_staging = (
-                getenv(f"{original_first_server}_USE_LETS_ENCRYPT_STAGING", "no") if IS_MULTISITE else getenv("USE_LETS_ENCRYPT_STAGING", "no")
-            ) == "yes"
-            is_test_cert = "TEST_CERT" in cert_domains.group("expiry_date")
+            # Check if CA provider has changed
+            ca_provider_env: str = (
+                f"{original_first_server}_ACME_SSL_CA_PROVIDER"
+                if IS_MULTISITE
+                else "ACME_SSL_CA_PROVIDER"
+            )
+            ca_provider: str = getenv(ca_provider_env, "letsencrypt")
+
+            renewal_file: Any = DATA_PATH.joinpath("renewal",
+                                                   f"{first_server}.conf")
+            if renewal_file.is_file():
+                current_server: Optional[str] = None
+                with renewal_file.open("r") as file:
+                    for line in file:
+                        if line.startswith("server"):
+                            current_server = line.strip().split("=", 1)[1].strip()
+                            break
+
+                staging_env: str = (
+                    f"{original_first_server}_USE_LETS_ENCRYPT_STAGING"
+                    if IS_MULTISITE
+                    else "USE_LETS_ENCRYPT_STAGING"
+                )
+                staging_mode: bool = getenv(staging_env, "no") == "yes"
+                expected_config: Dict[str, str] = (
+                    get_certificate_authority_config(
+                        ca_provider, staging_mode
+                    )
+                )
+
+                debug_log(LOGGER, f"CA server comparison for {first_server}:")
+                debug_log(LOGGER, f"  Current: {current_server}")
+                debug_log(LOGGER, f"  Expected: {expected_config['server']}")
+
+                if (current_server and
+                        current_server != expected_config["server"]):
+                    domains_to_ask[first_server] = 2
+                    LOGGER.warning(
+                        f"[{original_first_server}] CA provider for "
+                        f"{first_server} has changed, asking new "
+                        f"certificate..."
+                    )
+                    continue
+
+            staging_env = (
+                f"{original_first_server}_USE_LETS_ENCRYPT_STAGING"
+                if IS_MULTISITE
+                else "USE_LETS_ENCRYPT_STAGING"
+            )
+            use_staging: bool = getenv(staging_env, "no") == "yes"
+            is_test_cert: bool = (
+                "TEST_CERT" in cert_domains_match.group("expiry_date")
+            )
+
+            debug_log(LOGGER, f"Staging environment check for {first_server}:")
+            debug_log(LOGGER, f"  Use staging: {use_staging}")
+            debug_log(LOGGER, f"  Is test cert: {is_test_cert}")

-            if (is_test_cert and not use_letsencrypt_staging) or (not is_test_cert and use_letsencrypt_staging):
+            staging_mismatch: bool = (
+                (is_test_cert and not use_staging) or
+                (not is_test_cert and use_staging)
+            )
+            if staging_mismatch:
                 domains_to_ask[first_server] = 2
-                LOGGER.warning(f"[{original_first_server}] Certificate environment (staging/production) changed for {first_server}, asking new certificate...")
+                LOGGER.warning(
+                    f"[{original_first_server}] Certificate environment "
+                    f"(staging/production) changed for {first_server}, "
+                    "asking new certificate..."
+                )
                 continue

-            letsencrypt_provider = getenv(f"{original_first_server}_LETS_ENCRYPT_DNS_PROVIDER", "") if IS_MULTISITE else getenv("LETS_ENCRYPT_DNS_PROVIDER", "")
+            provider_env: str = (
+                f"{original_first_server}_LETS_ENCRYPT_DNS_PROVIDER"
+                if IS_MULTISITE
+                else "LETS_ENCRYPT_DNS_PROVIDER"
+            )
+            provider: str = getenv(provider_env, "")

-            renewal_file = DATA_PATH.joinpath("renewal", f"{first_server}.conf")
             if not renewal_file.is_file():
-                LOGGER.error(f"[{original_first_server}] Renewal file for {first_server} not found, asking new certificate...")
+                LOGGER.error(
+                    f"[{original_first_server}] Renewal file for "
+                    f"{first_server} not found, asking new certificate..."
+                )
                 domains_to_ask[first_server] = 1
                 continue

-            current_provider = None
+            current_provider: Optional[str] = None
             with renewal_file.open("r") as file:
                 for line in file:
                     if line.startswith("authenticator"):
@@ -503,126 +1841,294 @@ def certbot_new(
                         current_provider = value.strip().replace("dns-", "")
                         break

-            if letsencrypt_challenge == "dns":
-                if letsencrypt_provider and current_provider != letsencrypt_provider:
+            debug_log(LOGGER, f"Provider comparison for {first_server}:")
+            debug_log(LOGGER, f"  Current: {current_provider}")
+            debug_log(LOGGER, f"  Configured: {provider}")
+
+            if challenge_check == "dns":
+                if provider and current_provider != provider:
                     domains_to_ask[first_server] = 2
-                    LOGGER.warning(f"[{original_first_server}] Provider for {first_server} is not the same as in the certificate, asking new certificate...")
+                    LOGGER.warning(
+                        f"[{original_first_server}] Provider for "
+                        f"{first_server} is not the same as in the "
+                        "certificate, asking new certificate..."
+                    )
                     continue

                 # Check if DNS credentials have changed
-                if letsencrypt_provider and current_provider == letsencrypt_provider:
-                    credential_key = f"{original_first_server}_LETS_ENCRYPT_DNS_CREDENTIAL_ITEM" if IS_MULTISITE else "LETS_ENCRYPT_DNS_CREDENTIAL_ITEM"
-                    current_credential_items = {}
+                if provider and current_provider == provider:
+                    debug_log(LOGGER, f"Checking DNS credentials for "
+                              f"{first_server}")
+
+                    credential_key: str = (
+                        f"{original_first_server}_LETS_ENCRYPT_DNS_CREDENTIAL_ITEM"
+                        if IS_MULTISITE
+                        else "LETS_ENCRYPT_DNS_CREDENTIAL_ITEM"
+                    )
+                    current_credential_items: Dict[str, str] = {}

-                    # Collect current credential items
                     for env_key, env_value in environ.items():
                         if env_value and env_key.startswith(credential_key):
                             if " " not in env_value:
                                 current_credential_items["json_data"] = env_value
                                 continue
                             key, value = env_value.split(" ", 1)
-                            current_credential_items[key.lower()] = (
-                                value.removeprefix("= ").replace("\\n", "\n").replace("\\t", "\t").replace("\\r", "\r").strip()
+                            cleaned_value: str = (
+                                value.removeprefix("= ").replace("\\n", "\n")
+                                .replace("\\t", "\t").replace("\\r", "\r")
+                                .strip()
                             )
+                            current_credential_items[key.lower()] = cleaned_value

                     if "json_data" in current_credential_items:
                         value = current_credential_items.pop("json_data")
-                        if not current_credential_items and len(value) % 4 == 0 and match(r"^[A-Za-z0-9+/=]+$", value):
+                        is_base64_like: bool = (
+                            not current_credential_items and
+                            len(value) % 4 == 0 and
+                            match(r"^[A-Za-z0-9+/=]+$", value) is not None
+                        )
+                        if is_base64_like:
                             with suppress(BaseException):
-                                decoded = b64decode(value).decode("utf-8")
-                                json_data = loads(decoded)
+                                decoded: str = b64decode(value).decode("utf-8")
+                                json_data: Dict[str, Any] = loads(decoded)
                                 if isinstance(json_data, dict):
-                                    current_credential_items = {
-                                        k.lower(): str(v).removeprefix("= ").replace("\\n", "\n").replace("\\t", "\t").replace("\\r", "\r").strip()
-                                        for k, v in json_data.items()
-                                    }
+                                    new_items: Dict[str, str] = {}
+                                    for k, v in json_data.items():
+                                        cleaned_v: str = (
+                                            str(v).removeprefix("= ")
+                                            .replace("\\n", "\n")
+                                            .replace("\\t", "\t")
+                                            .replace("\\r", "\r")
+                                            .strip()
+                                        )
+                                        new_items[k.lower()] = cleaned_v
+                                    current_credential_items = new_items

                     if current_credential_items:
-                        # Process regular credentials for base64 decoding
                         for key, value in current_credential_items.items():
-                            if letsencrypt_provider != "rfc2136" and len(value) % 4 == 0 and match(r"^[A-Za-z0-9+/=]+$", value):
+                            is_base64_candidate: bool = (
+                                provider != "rfc2136" and
+                                len(value) % 4 == 0 and
+                                match(r"^[A-Za-z0-9+/=]+$", value) is not None
+                            )
+                            if is_base64_candidate:
                                 with suppress(BaseException):
                                     decoded = b64decode(value).decode("utf-8")
                                     if decoded != value:
+                                        cleaned_decoded: str = (
+                                            decoded.removeprefix("= ")
+                                            .replace("\\n", "\n")
+                                            .replace("\\t", "\t")
+                                            .replace("\\r", "\r")
+                                            .strip()
+                                        )
                                         current_credential_items[key] = (
-                                            decoded.removeprefix("= ").replace("\\n", "\n").replace("\\t", "\t").replace("\\r", "\r").strip()
+                                            cleaned_decoded
                                         )

-                        # Generate current credentials content
-                        if letsencrypt_provider in provider_classes:
+                        if provider in provider_classes:
                             with suppress(ValidationError, KeyError):
-                                current_provider_instance = provider_classes[letsencrypt_provider](**current_credential_items)
-                                current_credentials_content = current_provider_instance.get_formatted_credentials()
-
-                                # Check if stored credentials file exists and compare
-                                file_type = current_provider_instance.get_file_type()
-                                stored_credentials_path = CACHE_PATH.joinpath(first_server, f"credentials.{file_type}")
+                                provider_instance: Any = provider_classes[provider](
+                                    **current_credential_items
+                                )
+                                current_credentials_content: bytes = (
+                                    provider_instance.get_formatted_credentials()
+                                )
+
+                                file_type: str = (
+                                    provider_instance.get_file_type()
+                                )
+                                stored_credentials_path: Any = (
+                                    CACHE_PATH.joinpath(
+                                        first_server,
+                                        f"credentials.{file_type}"
+                                    )
+                                )
                                 if stored_credentials_path.is_file():
-                                    stored_credentials_content = stored_credentials_path.read_bytes()
-                                    if stored_credentials_content != current_credentials_content:
+                                    stored_credentials_content: bytes = (
+                                        stored_credentials_path.read_bytes()
+                                    )
+                                    content_differs: bool = (
+                                        stored_credentials_content !=
+                                        current_credentials_content
+                                    )
+                                    if content_differs:
                                         domains_to_ask[first_server] = 2
-                                        LOGGER.warning(f"[{original_first_server}] DNS credentials for {first_server} have changed, asking new certificate...")
+                                        LOGGER.warning(
+                                            f"[{original_first_server}] DNS "
+                                            f"credentials for {first_server} "
+                                            f"have changed, asking new "
+                                            f"certificate..."
+                                        )
                                         continue

-            elif current_provider != "manual" and letsencrypt_challenge == "http":
+            elif (current_provider != "manual" and
+                  challenge_check == "http"):
                 domains_to_ask[first_server] = 2
-                LOGGER.warning(f"[{original_first_server}] {first_server} is no longer using DNS challenge, asking new certificate...")
+                LOGGER.warning(
+                    f"[{original_first_server}] {first_server} is no longer "
+                    "using DNS challenge, asking new certificate..."
+                )
                 continue

             domains_to_ask[first_server] = 0
-            LOGGER.info(f"[{original_first_server}] Certificates already exist for domain(s) {domains}, expiry date: {cert_domains.group('expiry_date')}")
+            LOGGER.info(
+                f"[{original_first_server}] Certificates already exist for "
+                f"domain(s) {domains_set}, expiry date: "
+                f"{cert_domains_match.group('expiry_date')}"
+            )

-    psl_lines = None
-    psl_rules = None
+    psl_lines: Optional[List[str]] = None
+    psl_rules: Optional[Dict[str, Set[str]]] = None
+    certificates_generated: int = 0
+    certificates_failed: int = 0
+
+    # Process each server configuration
     for first_server, domains in domains_server_names.items():
-        if (getenv(f"{first_server}_AUTO_LETS_ENCRYPT", "no") if IS_MULTISITE else getenv("AUTO_LETS_ENCRYPT", "no")) != "yes":
-            LOGGER.info(f"Let's Encrypt is not activated for {first_server}, skipping...")
+        auto_le_check = (
+            getenv(f"{first_server}_AUTO_LETS_ENCRYPT", "no")
+            if IS_MULTISITE
+            else getenv("AUTO_LETS_ENCRYPT", "no")
+        )
+        if auto_le_check != "yes":
+            LOGGER.info(
+                f"SSL certificate generation is not activated for "
+                f"{first_server}, skipping..."
+            )
             continue

-        # * Getting all the necessary data
-        data = {
-            "email": (getenv(f"{first_server}_EMAIL_LETS_ENCRYPT", "") if IS_MULTISITE else getenv("EMAIL_LETS_ENCRYPT", "")) or f"contact@{first_server}",
-            "challenge": getenv(f"{first_server}_LETS_ENCRYPT_CHALLENGE", "http") if IS_MULTISITE else getenv("LETS_ENCRYPT_CHALLENGE", "http"),
-            "staging": (getenv(f"{first_server}_USE_LETS_ENCRYPT_STAGING", "no") if IS_MULTISITE else getenv("USE_LETS_ENCRYPT_STAGING", "no")) == "yes",
-            "use_wildcard": (getenv(f"{first_server}_USE_LETS_ENCRYPT_WILDCARD", "no") if IS_MULTISITE else getenv("USE_LETS_ENCRYPT_WILDCARD", "no")) == "yes",
-            "provider": getenv(f"{first_server}_LETS_ENCRYPT_DNS_PROVIDER", "") if IS_MULTISITE else getenv("LETS_ENCRYPT_DNS_PROVIDER", ""),
-            "propagation": (
-                getenv(f"{first_server}_LETS_ENCRYPT_DNS_PROPAGATION", "default") if IS_MULTISITE else getenv("LETS_ENCRYPT_DNS_PROPAGATION", "default")
-            ),
-            "profile": getenv(f"{first_server}_LETS_ENCRYPT_PROFILE", "classic") if IS_MULTISITE else getenv("LETS_ENCRYPT_PROFILE", "classic"),
-            "check_psl": (
-                getenv(f"{first_server}_LETS_ENCRYPT_DISABLE_PUBLIC_SUFFIXES", "yes") if IS_MULTISITE else getenv("LETS_ENCRYPT_DISABLE_PUBLIC_SUFFIXES", "yes")
-            )
-            == "no",
-            "max_retries": getenv(f"{first_server}_LETS_ENCRYPT_MAX_RETRIES", "0") if IS_MULTISITE else getenv("LETS_ENCRYPT_MAX_RETRIES", "0"),
+        # Getting all the necessary data
+        email_env: str = (
+            f"{first_server}_EMAIL_LETS_ENCRYPT" if IS_MULTISITE
+            else "EMAIL_LETS_ENCRYPT"
+        )
+        challenge_env = (
+            f"{first_server}_LETS_ENCRYPT_CHALLENGE"
+            if IS_MULTISITE
+            else "LETS_ENCRYPT_CHALLENGE"
+        )
+        staging_env = (
+            f"{first_server}_USE_LETS_ENCRYPT_STAGING"
+            if IS_MULTISITE
+            else "USE_LETS_ENCRYPT_STAGING"
+        )
+        wildcard_env = (
+            f"{first_server}_USE_LETS_ENCRYPT_WILDCARD"
+            if IS_MULTISITE
+            else "USE_LETS_ENCRYPT_WILDCARD"
+        )
+        provider_env = (
+            f"{first_server}_LETS_ENCRYPT_DNS_PROVIDER"
+            if IS_MULTISITE
+            else "LETS_ENCRYPT_DNS_PROVIDER"
+        )
+        propagation_env = (
+            f"{first_server}_LETS_ENCRYPT_DNS_PROPAGATION"
+            if IS_MULTISITE
+            else "LETS_ENCRYPT_DNS_PROPAGATION"
+        )
+        profile_env = (
+            f"{first_server}_LETS_ENCRYPT_PROFILE"
+            if IS_MULTISITE
+            else "LETS_ENCRYPT_PROFILE"
+        )
+        psl_env = (
+            f"{first_server}_LETS_ENCRYPT_DISABLE_PUBLIC_SUFFIXES"
+            if IS_MULTISITE
+            else "LETS_ENCRYPT_DISABLE_PUBLIC_SUFFIXES"
+        )
+        retries_env = (
+            f"{first_server}_LETS_ENCRYPT_MAX_RETRIES"
+            if IS_MULTISITE
+            else "LETS_ENCRYPT_MAX_RETRIES"
+        )
+        ca_env = (
+            f"{first_server}_ACME_SSL_CA_PROVIDER" if IS_MULTISITE
+            else "ACME_SSL_CA_PROVIDER"
+        )
+        api_key_env = (
+            f"{first_server}_ACME_ZEROSSL_API_KEY"
+            if IS_MULTISITE
+            else "ACME_ZEROSSL_API_KEY"
+        )
+
+        server_data: Dict[str, Any] = {
+            "email": (getenv(email_env, "") or f"contact@{first_server}"),
+            "challenge": getenv(challenge_env, "http"),
+            "staging": getenv(staging_env, "no") == "yes",
+            "use_wildcard": getenv(wildcard_env, "no") == "yes",
+            "provider": getenv(provider_env, ""),
+            "propagation": getenv(propagation_env, "default"),
+            "profile": getenv(profile_env, "classic"),
+            "check_psl": getenv(psl_env, "yes") == "no",
+            "max_retries": getenv(retries_env, "0"),
+            "ca_provider": getenv(ca_env, "letsencrypt"),
+            "api_key": getenv(api_key_env, ""),
             "credential_items": {},
         }
+
+        debug_log(LOGGER, f"Service {first_server} configuration: {server_data}")
+
+        LOGGER.info(f"Service {first_server} configuration:")
+        LOGGER.info(f"  CA Provider: {server_data['ca_provider']}")
+        api_key_status: str = 'Yes' if server_data['api_key'] else 'No'
+        LOGGER.info(f"  API Key provided: {api_key_status}")
+        LOGGER.info(f"  Challenge type: {server_data['challenge']}")
+        LOGGER.info(f"  Staging: {server_data['staging']}")
+        LOGGER.info(f"  Wildcard: {server_data['use_wildcard']}")

         # Override profile if custom profile is set
-        custom_profile = (getenv(f"{first_server}_LETS_ENCRYPT_CUSTOM_PROFILE", "") if IS_MULTISITE else getenv("LETS_ENCRYPT_CUSTOM_PROFILE", "")).strip()
-        if custom_profile:
-            data["profile"] = custom_profile
-
-        if data["challenge"] == "http" and data["use_wildcard"]:
-            LOGGER.warning(f"Wildcard is not supported with HTTP challenge, disabling wildcard for service {first_server}...")
-            data["use_wildcard"] = False
-
-        if (not data["use_wildcard"] and not domains_to_ask.get(first_server)) or (
-            data["use_wildcard"] and not domains_to_ask.get(WILDCARD_GENERATOR.extract_wildcards_from_domains((first_server,))[0].lstrip("*."))
-        ):
+        custom_profile_env: str = (
+            f"{first_server}_LETS_ENCRYPT_CUSTOM_PROFILE"
+            if IS_MULTISITE
+            else "LETS_ENCRYPT_CUSTOM_PROFILE"
+        )
+        custom_profile_str: str = getenv(custom_profile_env, "").strip()
+        if custom_profile_str:
+            server_data["profile"] = custom_profile_str
+            debug_log(LOGGER, f"Using custom profile: {custom_profile_str}")
+
+        if server_data["challenge"] == "http" and server_data["use_wildcard"]:
+            LOGGER.warning(
+                f"Wildcard is not supported with HTTP challenge, "
+                f"disabling wildcard for service {first_server}..."
+            )
+            server_data["use_wildcard"] = False
+
+        should_skip_cert_check: bool = (
+            (not server_data["use_wildcard"] and
+             not domains_to_ask.get(first_server)) or
+            (server_data["use_wildcard"] and not domains_to_ask.get(
+                WILDCARD_GENERATOR.extract_wildcards_from_domains(
+                    (first_server,)
+                )[0].lstrip("*.")
+            ))
+        )
+        if should_skip_cert_check:
+            debug_log(LOGGER, f"No certificate needed for {first_server}")
             continue

-        if not data["max_retries"].isdigit():
-            LOGGER.warning(f"Invalid max retries value for service {first_server} : {data['max_retries']}, using default value of 0...")
-            data["max_retries"] = 0
+        if not server_data["max_retries"].isdigit():
+            LOGGER.warning(
+                f"Invalid max retries value for service {first_server}: "
+                f"{server_data['max_retries']}, using default value of 0..."
+            )
+            server_data["max_retries"] = 0
         else:
-            data["max_retries"] = int(data["max_retries"])
-
-        # * Getting the DNS provider data if necessary
-        if data["challenge"] == "dns":
-            credential_key = f"{first_server}_LETS_ENCRYPT_DNS_CREDENTIAL_ITEM" if IS_MULTISITE else "LETS_ENCRYPT_DNS_CREDENTIAL_ITEM"
-            credential_items = {}
+            server_data["max_retries"] = int(server_data["max_retries"])
+
+        # Getting the DNS provider data if necessary
+        if server_data["challenge"] == "dns":
+            debug_log(LOGGER,
+                      f"Processing DNS credentials for {first_server}")
+
+            credential_key = (
+                f"{first_server}_LETS_ENCRYPT_DNS_CREDENTIAL_ITEM"
+                if IS_MULTISITE
+                else "LETS_ENCRYPT_DNS_CREDENTIAL_ITEM"
+            )
+            credential_items: Dict[str, str] = {}

             # Collect all credential items
             for env_key, env_value in environ.items():
@@ -631,217 +2137,432 @@ def certbot_new(
                     credential_items["json_data"] = env_value
                     continue
                 key, value = env_value.split(" ", 1)
-                credential_items[key.lower()] = value.removeprefix("= ").replace("\\n", "\n").replace("\\t", "\t").replace("\\r", "\r").strip()
+                cleaned_value = (
+                    value.removeprefix("= ")
+                    .replace("\\n", "\n")
+                    .replace("\\t", "\t")
+                    .replace("\\r", "\r").strip()
+                )
+                credential_items[key.lower()] = cleaned_value

             if "json_data" in credential_items:
                 value = credential_items.pop("json_data")
-                # Handle the case of a single credential that might be base64-encoded JSON
-                if not credential_items and len(value) % 4 == 0 and match(r"^[A-Za-z0-9+/=]+$", value):
+                # Handle the case of a single credential that might be
+                # base64-encoded JSON
+                is_potential_json: bool = (
+                    not credential_items and
+                    len(value) % 4 == 0 and
+                    match(r"^[A-Za-z0-9+/=]+$", value) is not None
+                )
+                if is_potential_json:
                     try:
                         decoded = b64decode(value).decode("utf-8")
                         json_data = loads(decoded)
                         if isinstance(json_data, dict):
-                            data["credential_items"] = {
-                                k.lower(): str(v).removeprefix("= ").replace("\\n", "\n").replace("\\t", "\t").replace("\\r", "\r").strip()
-                                for k, v in json_data.items()
-                            }
+                            new_items = {}
+                            for k, v in json_data.items():
+                                cleaned_v = (
+                                    str(v).removeprefix("= ")
+                                    .replace("\\n", "\n")
+                                    .replace("\\t", "\t")
+                                    .replace("\\r", "\r").strip()
+                                )
+                                new_items[k.lower()] = cleaned_v
+                            server_data["credential_items"] = new_items
                     except BaseException as e:
                         LOGGER.debug(format_exc())
-                        LOGGER.error(f"Error while decoding JSON data for service {first_server} : {value} : \n{e}")
+                        LOGGER.error(
+                            f"Error while decoding JSON data for service "
+                            f"{first_server}: {value} : \n{e}"
+                        )

-            if not data["credential_items"]:
+            if not server_data["credential_items"]:
                 # Process regular credentials
-                data["credential_items"] = {}
+                server_data["credential_items"] = {}
                 for key, value in credential_items.items():
                     # Check for base64 encoding
-                    if data["provider"] != "rfc2136" and len(value) % 4 == 0 and match(r"^[A-Za-z0-9+/=]+$", value):
+                    is_base64_candidate = (
+                        server_data["provider"] != "rfc2136" and
+                        len(value) % 4 == 0 and
+                        match(r"^[A-Za-z0-9+/=]+$", value) is not None
+                    )
+                    if is_base64_candidate:
                         try:
                             decoded = b64decode(value).decode("utf-8")
                             if decoded != value:
-                                value = decoded.removeprefix("= ").replace("\\n", "\n").replace("\\t", "\t").replace("\\r", "\r").strip()
+                                value = (
+                                    decoded.removeprefix("= ")
+                                    .replace("\\n", "\n")
+                                    .replace("\\t", "\t")
+                                    .replace("\\r", "\r").strip()
+                                )
                         except BaseException as e:
                             LOGGER.debug(format_exc())
-                            LOGGER.debug(f"Error while decoding credential item {key} for service {first_server} : {value} : \n{e}")
-                    data["credential_items"][key] = value
-
-        LOGGER.debug(f"Data for service {first_server} : {dumps(data)}")
+                            LOGGER.debug(
+                                f"Error while decoding credential item {key} "
+                                f"for service {first_server}: {value} : \n{e}"
+                            )
+                    server_data["credential_items"][key] = value

-        # * Checking if the DNS data is valid
-        if data["challenge"] == "dns":
-            if not data["provider"]:
+        safe_data: Dict[str, Any] = server_data.copy()
+        masked_items: Dict[str, str] = {
+            k: "***MASKED***"
+ for k in server_data["credential_items"].keys() + } + safe_data["credential_items"] = masked_items + if server_data["api_key"]: + safe_data["api_key"] = "***MASKED***" + debug_log(LOGGER, f"Safe data for service {first_server}: " + f"{dumps(safe_data)}") + + # Validate CA provider and API key requirements + api_key_status = 'Yes' if server_data['api_key'] else 'No' + LOGGER.info( + f"Service {first_server} - CA Provider: {server_data['ca_provider']}, " + f"API Key provided: {api_key_status}" + ) + + if server_data["ca_provider"].lower() == "zerossl": + if not server_data["api_key"]: LOGGER.warning( - f"No provider found for service {first_server} (available providers : {', '.join(provider_classes.keys())}), skipping certificate(s) generation..." # noqa: E501 + f"ZeroSSL API key not provided for service " + f"{first_server}, falling back to Let's Encrypt..." + ) + server_data["ca_provider"] = "letsencrypt" + else: + LOGGER.info(f"✓ ZeroSSL configuration valid for service " + f"{first_server}") + + # Checking if the DNS data is valid + if server_data["challenge"] == "dns": + if not server_data["provider"]: + available_providers: str = ', '.join( + str(p) for p in provider_classes.keys() + ) + LOGGER.warning( + f"No provider found for service {first_server} " + f"(available providers: {available_providers}), " + "skipping certificate(s) generation..." ) continue - elif data["provider"] not in provider_classes: + elif server_data["provider"] not in provider_classes: + available_providers = ', '.join( + str(p) for p in provider_classes.keys() + ) LOGGER.warning( - f"Provider {data['provider']} not found for service {first_server} (available providers : {', '.join(provider_classes.keys())}), skipping certificate(s) generation..." # noqa: E501 + f"Provider {server_data['provider']} not found for service " + f"{first_server} (available providers: " + f"{available_providers}), skipping certificate(s) " + f"generation..." 
) continue - elif not data["credential_items"]: + elif not server_data["credential_items"]: LOGGER.warning( - f"No valid credentials items found for service {first_server} (you should have at least one), skipping certificate(s) generation..." + f"No valid credentials items found for service " + f"{first_server} (you should have at least one), " + f"skipping certificate(s) generation..." ) continue - # * Validating the credentials try: - provider = provider_classes[data["provider"]](**data["credential_items"]) + dns_provider_instance: Any = provider_classes[server_data["provider"]]( + **server_data["credential_items"] + ) except ValidationError as ve: LOGGER.debug(format_exc()) - LOGGER.error(f"Error while validating credentials for service {first_server} :\n{ve}") + LOGGER.error( + f"Error while validating credentials for service " + f"{first_server}:\n{ve}" + ) continue - content = provider.get_formatted_credentials() + content: bytes = dns_provider_instance.get_formatted_credentials() else: content = b"http_challenge" - is_blacklisted = False + is_blacklisted: bool = False - # * Adding the domains to Wildcard Generator if necessary - file_type = provider.get_file_type() if data["challenge"] == "dns" else "txt" - file_path = (first_server, f"credentials.{file_type}") - if data["use_wildcard"]: + # Adding the domains to Wildcard Generator if necessary + file_type_str: str = ( + dns_provider_instance.get_file_type() if server_data["challenge"] == "dns" + else "txt" + ) + file_path: Tuple[str, ...] 
= (first_server, + f"credentials.{file_type_str}") + + if server_data["use_wildcard"]: # Use the improved method for generating consistent group names - group = WILDCARD_GENERATOR.create_group_name( + hash_value: Any = bytes_hash(content, algorithm="sha1") + group: str = WILDCARD_GENERATOR.create_group_name( domain=first_server, - provider=data["provider"] if data["challenge"] == "dns" else "http", - challenge_type=data["challenge"], - staging=data["staging"], - content_hash=bytes_hash(content, algorithm="sha1"), - profile=data["profile"], + provider=(server_data["provider"] if server_data["challenge"] == "dns" + else "http"), + challenge_type=server_data["challenge"], + staging=server_data["staging"], + content_hash=hash_value, + profile=server_data["profile"], ) + wildcard_info: str = ( + "the propagation time will be the provider's default and " + if server_data["challenge"] == "dns" else "" + ) LOGGER.info( f"Service {first_server} is using wildcard, " - + ("the propagation time will be the provider's default and " if data["challenge"] == "dns" else "") - + "the email will be the same as the first domain that created the group..." + f"{wildcard_info}the email will be the same as the first " + f"domain that created the group..." 
) - if data["check_psl"]: + if server_data["check_psl"]: if psl_lines is None: + debug_log(LOGGER, "Loading PSL for wildcard domain " + "validation") psl_lines = load_public_suffix_list(JOB) if psl_rules is None: + debug_log(LOGGER, "Parsing PSL rules") psl_rules = parse_psl(psl_lines) - wildcards = WILDCARD_GENERATOR.extract_wildcards_from_domains(domains.split(" ")) + wildcards_list: List[str] = ( + WILDCARD_GENERATOR.extract_wildcards_from_domains( + str(domains).split(" ") + ) + ) - LOGGER.debug(f"Wildcard domains for {first_server} : {wildcards}") + wildcard_str: str = ', '.join( + str(w) for w in wildcards_list + ) + LOGGER.info(f"Wildcard domains for {first_server}: " + f"{wildcard_str}") - for d in wildcards: + for d in wildcards_list: if is_domain_blacklisted(d, psl_rules): - LOGGER.error(f"Wildcard domain {d} is blacklisted by Public Suffix List, refusing certificate request for {first_server}.") + LOGGER.error( + f"Wildcard domain {d} is blacklisted by Public " + f"Suffix List, refusing certificate request for " + f"{first_server}." 
+ ) is_blacklisted = True break if not is_blacklisted: - WILDCARD_GENERATOR.extend(group, domains.split(" "), data["email"], data["staging"]) - file_path = (f"{group}.{file_type}",) - LOGGER.debug(f"[{first_server}] Wildcard group {group}") - elif data["check_psl"]: + WILDCARD_GENERATOR.extend( + group, str(domains).split(" "), server_data["email"], + server_data["staging"] + ) + file_path = (f"{group}.{file_type_str}",) + LOGGER.info(f"[{first_server}] Wildcard group {group}") + elif server_data["check_psl"]: if psl_lines is None: + debug_log(LOGGER, + "Loading PSL for regular domain validation") psl_lines = load_public_suffix_list(JOB) if psl_rules is None: + debug_log(LOGGER, "Parsing PSL rules") psl_rules = parse_psl(psl_lines) - for d in domains.split(): + for d in str(domains).split(): if is_domain_blacklisted(d, psl_rules): - LOGGER.error(f"Domain {d} is blacklisted by Public Suffix List, refusing certificate request for {first_server}.") + LOGGER.error( + f"Domain {d} is blacklisted by Public Suffix List, " + f"refusing certificate request for {first_server}." 
+ ) is_blacklisted = True break if is_blacklisted: + debug_log(LOGGER, f"Skipping {first_server} due to PSL blacklist") continue - # * Generating the credentials file - credentials_path = CACHE_PATH.joinpath(*file_path) + # Generating the credentials file + credentials_path: Any = CACHE_PATH.joinpath(*file_path) - if data["challenge"] == "dns": + if server_data["challenge"] == "dns": + debug_log(LOGGER, + f"Managing credentials file for {first_server}: " + f"{credentials_path}") + if not credentials_path.is_file(): + service_id: str = (first_server if not server_data["use_wildcard"] + else "") + cached: Any + err: Any cached, err = JOB.cache_file( - credentials_path.name, content, job_name="certbot-renew", service_id=first_server if not data["use_wildcard"] else "" + credentials_path.name, content, job_name="certbot-renew", + service_id=service_id ) if not cached: - LOGGER.error(f"Error while saving service {first_server}'s credentials file in cache : {err}") + LOGGER.error( + f"Error while saving service {first_server}'s " + f"credentials file in cache: {err}" + ) continue - LOGGER.info(f"Successfully saved service {first_server}'s credentials file in cache") - elif data["use_wildcard"]: - LOGGER.info(f"Service {first_server}'s wildcard credentials file has already been generated") + LOGGER.info( + f"Successfully saved service {first_server}'s " + "credentials file in cache" + ) + elif server_data["use_wildcard"]: + LOGGER.info( + f"Service {first_server}'s wildcard credentials file " + "has already been generated" + ) else: - old_content = credentials_path.read_bytes() + old_content: bytes = credentials_path.read_bytes() if old_content != content: - LOGGER.warning(f"Service {first_server}'s credentials file is outdated, updating it...") - cached, err = JOB.cache_file(credentials_path.name, content, job_name="certbot-renew", service_id=first_server) - if not cached: - LOGGER.error(f"Error while updating service {first_server}'s credentials file in cache : {err}") 
+ LOGGER.warning( + f"Service {first_server}'s credentials file is " + "outdated, updating it..." + ) + cached_updated: Any + err_updated: Any + cached_updated, err_updated = JOB.cache_file( + credentials_path.name, content, + job_name="certbot-renew", + service_id=first_server + ) + if not cached_updated: + LOGGER.error( + f"Error while updating service {first_server}'s " + f"credentials file in cache: {err_updated}" + ) continue - LOGGER.info(f"Successfully updated service {first_server}'s credentials file in cache") + LOGGER.info( + f"Successfully updated service {first_server}'s " + "credentials file in cache" + ) else: - LOGGER.info(f"Service {first_server}'s credentials file is up to date") + LOGGER.info( + f"Service {first_server}'s credentials file is " + f"up to date" + ) credential_paths.add(credentials_path) - credentials_path.chmod(0o600) # ? Setting the permissions to 600 (this is important to avoid warnings from certbot) + # Setting the permissions to 600 (this is important to avoid + # warnings from certbot) + credentials_path.chmod(0o600) - if data["use_wildcard"]: + if server_data["use_wildcard"]: + debug_log(LOGGER, f"Wildcard processing complete for " + f"{first_server}") continue - domains = domains.replace(" ", ",") + domains_str: str = str(domains).replace(" ", ",") + ca_name: str = get_certificate_authority_config( + server_data["ca_provider"] + )["name"] + staging_info: str = ' using staging' if server_data['staging'] else '' LOGGER.info( - f"Asking certificates for domain(s) : {domains} (email = {data['email']}){' using staging' if data['staging'] else ''} with {data['challenge']} challenge, using {data['profile']!r} profile..." 
-        )
-
-        if (
-            certbot_new_with_retry(
-                data["challenge"],
-                domains,
-                data["email"],
-                data["provider"],
-                credentials_path,
-                data["propagation"],
-                data["profile"],
-                data["staging"],
-                domains_to_ask[first_server] == 2,
-                cmd_env=env,
-                max_retries=data["max_retries"],
-            )
-            != 0
-        ):
+            f"Asking {ca_name} certificates for domain(s): {domains_str} "
+            f"(email = {server_data['email']}){staging_info} "
+            f"with {server_data['challenge']} challenge, using "
+            f"{server_data['profile']!r} profile..."
+        )
+
+        debug_log(LOGGER, f"Requesting certificate for {domains_str}")
+
+        cert_result: int = certbot_new_with_retry(
+            cast(Literal["dns", "http"], server_data["challenge"]),
+            domains_str,
+            server_data["email"],
+            server_data["provider"],
+            credentials_path,
+            server_data["propagation"],
+            server_data["profile"],
+            server_data["staging"],
+            domains_to_ask[first_server] == 2,
+            cmd_env=env,
+            max_retries=server_data["max_retries"],
+            ca_provider=server_data["ca_provider"],
+            api_key=server_data["api_key"],
+            server_name=first_server,
+        )
+
+        if cert_result != 0:
             status = 2
-            LOGGER.error(f"Certificate generation failed for domain(s) {domains} ...")
+            certificates_failed += 1
+            LOGGER.error(f"Certificate generation failed for domain(s) "
+                         f"{domains_str}...")
         else:
             status = 1 if status == 0 else status
-            LOGGER.info(f"Certificate generation succeeded for domain(s) : {domains}")
-
-    generated_domains.update(domains.split(","))
-
-    # * Generating the wildcards if necessary
-    wildcards = WILDCARD_GENERATOR.get_wildcards()
-    if wildcards:
-        for group, data in wildcards.items():
-            if not data:
+            certificates_generated += 1
+            LOGGER.info(f"Certificate generation succeeded for domain(s): "
+                        f"{domains_str}")
+
+        generated_domains.update(domains_str.split(","))
+
+    # Generating the wildcards if necessary
+    wildcard_groups: Dict[str, Any] = WILDCARD_GENERATOR.get_wildcards()
+    if wildcard_groups:
+        debug_log(LOGGER, f"Processing {len(wildcard_groups)} wildcard groups")
+
+ for group, group_data in wildcard_groups.items(): + if not group_data: continue - # * Generating the certificate from the generated credentials - group_parts = group.split("_") - provider = group_parts[0] - profile = group_parts[2] - base_domain = group_parts[3] + + # Generating the certificate from the generated credentials + group_parts: List[str] = group.split("_") + provider_name: str = group_parts[0] + profile: str = group_parts[2] + base_domain: str = group_parts[3] + + debug_log(LOGGER, f"Processing wildcard group: {group}") + debug_log(LOGGER, f" Provider: {provider_name}") + debug_log(LOGGER, f" Profile: {profile}") + debug_log(LOGGER, f" Base domain: {base_domain}") + + email: str = group_data.pop("email") + wildcard_file_type: str = ( + str(provider_classes[provider_name].get_file_type()) + if provider_name in provider_classes else 'txt' + ) + credentials_file: Any = CACHE_PATH.joinpath( + f"{group}.{wildcard_file_type}" + ) - email = data.pop("email") - credentials_file = CACHE_PATH.joinpath(f"{group}.{provider_classes[provider].get_file_type() if provider in provider_classes else 'txt'}") + # Get CA provider for this group + original_server: Optional[str] = None + for server in domains_server_names.keys(): + if base_domain in server or server in base_domain: + original_server = server + break + + ca_provider = "letsencrypt" # default + api_key: Optional[str] = None + if original_server: + ca_env = ( + f"{original_server}_ACME_SSL_CA_PROVIDER" + if IS_MULTISITE + else "ACME_SSL_CA_PROVIDER" + ) + ca_provider = getenv(ca_env, "letsencrypt") + + api_key_env = ( + f"{original_server}_ACME_ZEROSSL_API_KEY" + if IS_MULTISITE + else "ACME_ZEROSSL_API_KEY" + ) + api_key = getenv(api_key_env, "") or None # Process different environment types (staging/prod) - for key, domains in data.items(): + for key, domains in group_data.items(): if not domains: continue - staging = key == "staging" + staging: bool = key == "staging" + ca_name = 
get_certificate_authority_config( + ca_provider + )["name"] + staging_info = ' using staging ' if staging else '' + challenge_type: str = ( + 'dns' if provider_name in provider_classes else 'http' + ) LOGGER.info( - f"Asking wildcard certificates for domain(s): {domains} (email = {email})" - f"{' using staging ' if staging else ''} with {'dns' if provider in provider_classes else 'http'} challenge, " + f"Asking {ca_name} wildcard certificates for domain(s): " + f"{domains} (email = {email}){staging_info} " + f"with {challenge_type} challenge, " f"using {profile!r} profile..." ) - domains_split = domains.split(",") + domains_split: List[str] = domains.split(",") # Add wildcard certificate names to active set for domain in domains_split: @@ -849,45 +2570,82 @@ def certbot_new( base_domain = WILDCARD_GENERATOR.get_base_domain(domain) active_cert_names.add(base_domain) - if ( - certbot_new_with_retry( - "dns", - domains, - email, - provider, - credentials_file, - "default", - profile, - staging, - domains_to_ask.get(base_domain, 0) == 2, - cmd_env=env, - ) - != 0 - ): + debug_log(LOGGER, f"Requesting wildcard certificate for " + f"{domains}") + + wildcard_result: int = certbot_new_with_retry( + "dns", + domains, + email, + provider_name, + credentials_file, + "default", + profile, + staging, + domains_to_ask.get(base_domain, 0) == 2, + cmd_env=env, + ca_provider=ca_provider, + api_key=api_key, + server_name=original_server, + ) + + if wildcard_result != 0: status = 2 - LOGGER.error(f"Certificate generation failed for domain(s) {domains} ...") + certificates_failed += 1 + LOGGER.error(f"Certificate generation failed for " + f"domain(s) {domains}...") else: status = 1 if status == 0 else status - LOGGER.info(f"Certificate generation succeeded for domain(s): {domains}") + certificates_generated += 1 + LOGGER.info(f"Certificate generation succeeded for " + f"domain(s): {domains}") generated_domains.update(domains_split) else: - LOGGER.info("No wildcard domains found, 
skipping wildcard certificate(s) generation...") + LOGGER.info( + "No wildcard domains found, skipping wildcard certificate(s) " + "generation..." + ) + + debug_log(LOGGER, "Certificate generation summary:") + debug_log(LOGGER, f" Generated: {certificates_generated}") + debug_log(LOGGER, f" Failed: {certificates_failed}") + debug_log(LOGGER, f" Total domains: {len(generated_domains)}") if CACHE_PATH.is_dir(): - # * Clearing all missing credentials files - for ext in ("*.ini", "*.env", "*.json"): + # Clearing all missing credentials files + debug_log(LOGGER, "Cleaning up old credentials files") + + cleaned_files: int = 0 + ext_patterns: Tuple[str, ...] = ("*.ini", "*.env", "*.json") + for ext in ext_patterns: for file in list(CACHE_PATH.rglob(ext)): if "etc" in file.parts or not file.is_file(): continue - # ? If the file is not in the wildcard groups, remove it + # If the file is not in the wildcard groups, remove it if file not in credential_paths: - LOGGER.debug(f"Removing old credentials file {file}") - JOB.del_cache(file.name, job_name="certbot-renew", service_id=file.parent.name if file.parent.name != "letsencrypt" else "") + LOGGER.info(f"Removing old credentials file {file}") + service_id = ( + file.parent.name + if file.parent.name != "letsencrypt" else "" + ) + JOB.del_cache( + file.name, job_name="certbot-renew", + service_id=service_id + ) + cleaned_files += 1 + + debug_log(LOGGER, + f"Cleaned up {cleaned_files} old credentials files") - # * Clearing all no longer needed certificates + # Clearing all no longer needed certificates if getenv("LETS_ENCRYPT_CLEAR_OLD_CERTS", "no") == "yes": - LOGGER.info("Clear old certificates is activated, removing old / no longer used certificates...") + LOGGER.info( + "Clear old certificates is activated, removing old / no longer " + "used certificates..." 
+ ) + + debug_log(LOGGER, "Starting certificate cleanup process") # Get list of all certificates proc = run( @@ -895,7 +2653,7 @@ def certbot_new( CERTBOT_BIN, "certificates", "--config-dir", - DATA_PATH.as_posix(), + str(DATA_PATH), "--work-dir", WORK_DIR, "--logs-dir", @@ -911,15 +2669,26 @@ def certbot_new( if proc.returncode == 0: certificate_blocks = proc.stdout.split("Certificate Name: ")[1:] + certificates_removed: int = 0 + + debug_log(LOGGER, + f"Found {len(certificate_blocks)} certificates " + f"to evaluate") + debug_log(LOGGER, f"Active certificates: " + f"{sorted(str(name) for name in active_cert_names)}") + for block in certificate_blocks: - cert_name = block.split("\n", 1)[0].strip() + cert_name: str = block.split("\n", 1)[0].strip() # Skip certificates that are in our active list if cert_name in active_cert_names: - LOGGER.debug(f"Keeping active certificate: {cert_name}") + LOGGER.info(f"Keeping active certificate: {cert_name}") continue - LOGGER.warning(f"Removing old certificate {cert_name} (not in active certificates list)") + LOGGER.warning( + f"Removing old certificate {cert_name} " + "(not in active certificates list)" + ) # Use certbot's delete command delete_proc = run( @@ -927,14 +2696,14 @@ def certbot_new( CERTBOT_BIN, "delete", "--config-dir", - DATA_PATH.as_posix(), + str(DATA_PATH), "--work-dir", WORK_DIR, "--logs-dir", LOGS_DIR, "--cert-name", cert_name, - "-n", # non-interactive + "-n", ], stdin=DEVNULL, stdout=PIPE, @@ -945,11 +2714,15 @@ def certbot_new( ) if delete_proc.returncode == 0: - LOGGER.info(f"Successfully deleted certificate {cert_name}") - # Remove any remaining files for this certificate - cert_dir = DATA_PATH.joinpath("live", cert_name) - archive_dir = DATA_PATH.joinpath("archive", cert_name) - renewal_file = DATA_PATH.joinpath("renewal", f"{cert_name}.conf") + LOGGER.info(f"Successfully deleted certificate " + f"{cert_name}") + certificates_removed += 1 + cert_dir: Any = DATA_PATH.joinpath("live", cert_name) + 
archive_dir: Any = DATA_PATH.joinpath("archive", + cert_name) + cert_renewal_file: Any = DATA_PATH.joinpath("renewal", + f"{cert_name}.conf") + path: Any for path in (cert_dir, archive_dir): if path.exists(): try: @@ -957,34 +2730,62 @@ def certbot_new( try: file.unlink() except Exception as e: - LOGGER.error(f"Failed to remove file {file}: {e}") + LOGGER.error( + f"Failed to remove file " + f"{file}: {e}" + ) path.rmdir() LOGGER.info(f"Removed directory {path}") except Exception as e: - LOGGER.error(f"Failed to remove directory {path}: {e}") - if renewal_file.exists(): + LOGGER.error(f"Failed to remove directory " + f"{path}: {e}") + if cert_renewal_file.exists(): try: - renewal_file.unlink() - LOGGER.info(f"Removed renewal file {renewal_file}") + cert_renewal_file.unlink() + LOGGER.info(f"Removed renewal file " + f"{cert_renewal_file}") except Exception as e: - LOGGER.error(f"Failed to remove renewal file {renewal_file}: {e}") + LOGGER.error( + f"Failed to remove renewal file " + f"{cert_renewal_file}: {e}" + ) else: - LOGGER.error(f"Failed to delete certificate {cert_name}: {delete_proc.stdout}") + LOGGER.error( + f"Failed to delete certificate {cert_name}: " + f"{delete_proc.stdout}" + ) + + debug_log(LOGGER, f"Certificate cleanup completed - removed " + f"{certificates_removed} certificates") else: LOGGER.error(f"Error listing certificates: {proc.stdout}") - # * Save data to db cache + # Save data to db cache if DATA_PATH.is_dir() and list(DATA_PATH.iterdir()): - cached, err = JOB.cache_dir(DATA_PATH, job_name="certbot-renew") - if not cached: - LOGGER.error(f"Error while saving data to db cache : {err}") + debug_log(LOGGER, "Saving certificate data to database cache") + + cached_final: Any + err_final: Any + cached_final, err_final = JOB.cache_dir(DATA_PATH, job_name="certbot-renew") + if not cached_final: + LOGGER.error(f"Error while saving data to db cache: {err_final}") else: LOGGER.info("Successfully saved data to db cache") + debug_log(LOGGER, "Database 
cache update completed") + else: + debug_log(LOGGER, "No certificate data to cache") + except SystemExit as e: - status = e.code + exit_code: int = cast(int, e.code) + status = exit_code + debug_log(LOGGER, f"Script exiting via SystemExit with code: {exit_code}") except BaseException as e: status = 1 LOGGER.debug(format_exc()) - LOGGER.error(f"Exception while running certbot-new.py :\n{e}") + LOGGER.error(f"Exception while running certbot-new.py:\n{e}") + debug_log(LOGGER, "Script failed with unexpected exception") + +debug_log(LOGGER, f"Certificate generation process completed with status: " + f"{status}") -sys_exit(status) +sys_exit(status) \ No newline at end of file diff --git a/src/common/core/letsencrypt/jobs/certbot-renew.py b/src/common/core/letsencrypt/jobs/certbot-renew.py index 79cc7f7378..d808a2640d 100644 --- a/src/common/core/letsencrypt/jobs/certbot-renew.py +++ b/src/common/core/letsencrypt/jobs/certbot-renew.py @@ -7,7 +7,8 @@ from sys import exit as sys_exit, path as sys_path from traceback import format_exc -for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) for paths in (("deps", "python"), ("utils",), ("db",))]: +for deps_path in [join(sep, "usr", "share", "bunkerweb", *paths) + for paths in (("deps", "python"), ("utils",), ("db",))]: if deps_path not in sys_path: sys_path.append(deps_path) @@ -16,7 +17,8 @@ LOGGER = setup_logger("LETS-ENCRYPT.renew") LIB_PATH = Path(sep, "var", "lib", "bunkerweb", "letsencrypt") -CERTBOT_BIN = join(sep, "usr", "share", "bunkerweb", "deps", "python", "bin", "certbot") +CERTBOT_BIN = join(sep, "usr", "share", "bunkerweb", "deps", "python", + "bin", "certbot") DEPS_PATH = join(sep, "usr", "share", "bunkerweb", "deps", "python") LOGGER_CERTBOT = setup_logger("LETS-ENCRYPT.renew.certbot") @@ -27,73 +29,250 @@ WORK_DIR = join(sep, "var", "lib", "bunkerweb", "letsencrypt") LOGS_DIR = join(sep, "var", "log", "bunkerweb", "letsencrypt") + +def debug_log(logger, message): + # Log debug messages only when 
LOG_LEVEL environment variable is set to + # "debug" + if getenv("LOG_LEVEL") == "debug": + logger.debug(f"[DEBUG] {message}") + + try: - # Check if we're using let's encrypt + # Determine if Let's Encrypt is enabled in the current configuration + # This checks both single-site and multi-site deployment modes + debug_log(LOGGER, "Starting Let's Encrypt certificate renewal process") + debug_log(LOGGER, "Checking if Let's Encrypt is enabled in configuration") + debug_log(LOGGER, "Will check both single-site and multi-site modes") + use_letsencrypt = False + multisite_mode = getenv("MULTISITE", "no") == "yes" + + debug_log(LOGGER, f"Multisite mode detected: {multisite_mode}") + debug_log(LOGGER, "Determining which Let's Encrypt check method to use") - if getenv("MULTISITE", "no") == "no": + # Single-site mode: Check global AUTO_LETS_ENCRYPT setting + if not multisite_mode: use_letsencrypt = getenv("AUTO_LETS_ENCRYPT", "no") == "yes" + + debug_log(LOGGER, "Checking single-site mode configuration") + debug_log(LOGGER, f"Global AUTO_LETS_ENCRYPT setting: {use_letsencrypt}") + debug_log(LOGGER, "Single setting controls all domains in this mode") + + # Multi-site mode: Check per-server AUTO_LETS_ENCRYPT settings else: - for first_server in getenv("SERVER_NAME", "www.example.com").split(" "): - if first_server and getenv(f"{first_server}_AUTO_LETS_ENCRYPT", "no") == "yes": - use_letsencrypt = True - break + server_names = getenv("SERVER_NAME", "www.example.com").split(" ") + + debug_log(LOGGER, "Checking multi-site mode configuration") + debug_log(LOGGER, f"Found {len(server_names)} configured servers") + debug_log(LOGGER, f"Server list: {server_names}") + debug_log(LOGGER, "Checking each server for Let's Encrypt enablement") + + # Check if any server has Let's Encrypt enabled + for i, first_server in enumerate(server_names): + if first_server: + server_le_enabled = getenv(f"{first_server}_AUTO_LETS_ENCRYPT", + "no") == "yes" + + debug_log(LOGGER, + f"Server {i+1} 
({first_server}): " + f"AUTO_LETS_ENCRYPT = {server_le_enabled}") + + if server_le_enabled: + use_letsencrypt = True + debug_log(LOGGER, f"Found Let's Encrypt enabled on {first_server}") + debug_log(LOGGER, "At least one server needs renewal - proceeding") + break + # Exit early if Let's Encrypt is not configured if not use_letsencrypt: + debug_log(LOGGER, "Let's Encrypt not enabled on any servers") + debug_log(LOGGER, "No certificates to renew - exiting early") + debug_log(LOGGER, "Renewal process skipped entirely") LOGGER.info("Let's Encrypt is not activated, skipping renew...") sys_exit(0) + debug_log(LOGGER, "Let's Encrypt is enabled - proceeding with renewal") + debug_log(LOGGER, "Will attempt to renew all existing certificates") + + # Initialize job handler for caching operations + debug_log(LOGGER, "Initializing job handler for database operations") + debug_log(LOGGER, "Job handler manages certificate data caching") + JOB = Job(LOGGER, __file__) + # Set up environment variables for certbot execution + # These control paths, timeouts, and configuration testing behavior + debug_log(LOGGER, "Setting up environment for certbot execution") + debug_log(LOGGER, "Configuring paths and operational parameters") + env = { "PATH": getenv("PATH", ""), "PYTHONPATH": getenv("PYTHONPATH", ""), "RELOAD_MIN_TIMEOUT": getenv("RELOAD_MIN_TIMEOUT", "5"), - "DISABLE_CONFIGURATION_TESTING": getenv("DISABLE_CONFIGURATION_TESTING", "no").lower(), + "DISABLE_CONFIGURATION_TESTING": getenv( + "DISABLE_CONFIGURATION_TESTING", "no" + ).lower(), } - env["PYTHONPATH"] = env["PYTHONPATH"] + (f":{DEPS_PATH}" if DEPS_PATH not in env["PYTHONPATH"] else "") - if getenv("DATABASE_URI"): - env["DATABASE_URI"] = getenv("DATABASE_URI") + + # Ensure our Python dependencies are in the path + env["PYTHONPATH"] = env["PYTHONPATH"] + ( + f":{DEPS_PATH}" if DEPS_PATH not in env["PYTHONPATH"] else "" + ) + + # Pass database URI if configured (for cluster deployments) + database_uri = 
getenv("DATABASE_URI")
+    if database_uri is not None:
+        env["DATABASE_URI"] = database_uri
+
+    debug_log(LOGGER, "Environment configuration for certbot:")
+    path_display = (env['PATH'][:100] + "..." if len(env['PATH']) > 100
+                    else env['PATH'])
+    pythonpath_display = (env['PYTHONPATH'][:100] + "..."
+                          if len(env['PYTHONPATH']) > 100
+                          else env['PYTHONPATH'])
+    debug_log(LOGGER, f"  PATH: {path_display}")
+    debug_log(LOGGER, f"  PYTHONPATH: {pythonpath_display}")
+    debug_log(LOGGER, f"  RELOAD_MIN_TIMEOUT: {env['RELOAD_MIN_TIMEOUT']}")
+    debug_log(LOGGER, f"  DISABLE_CONFIGURATION_TESTING: {env['DISABLE_CONFIGURATION_TESTING']}")
+    debug_log(LOGGER, f"  DATABASE_URI configured: {'Yes' if database_uri else 'No'}")
+
+    # Construct certbot renew command with appropriate options
+    # --no-random-sleep-on-renew: Prevents random delays in scheduled runs
+    # Paths are configured to use BunkerWeb's certificate storage locations
+    command = [
+        CERTBOT_BIN,
+        "renew",
+        "--no-random-sleep-on-renew",  # Disable random sleep for scheduled runs
+        "--config-dir",
+        DATA_PATH.as_posix(),  # Where certificates are stored
+        "--work-dir",
+        WORK_DIR,  # Temporary working directory
+        "--logs-dir",
+        LOGS_DIR,  # Log output directory
+    ]
+
+    # Add verbose flag if debug logging is enabled
+    if getenv("CUSTOM_LOG_LEVEL", getenv("LOG_LEVEL", "INFO")).upper() == "DEBUG":
+        command.append("-v")
+        debug_log(LOGGER, "Debug mode enabled - adding verbose flag to certbot")
+        debug_log(LOGGER, "Certbot will provide detailed output")
+
+    debug_log(LOGGER, "Certbot command configuration:")
+    debug_log(LOGGER, f"  Command: {' '.join(command)}")
+    debug_log(LOGGER, f"  Working directory: {WORK_DIR}")
+    debug_log(LOGGER, f"  Config directory: {DATA_PATH.as_posix()}")
+    debug_log(LOGGER, f"  Logs directory: {LOGS_DIR}")
+    debug_log(LOGGER, "Command will check all existing certificates for renewal")
+
     LOGGER.info("Starting certificate renewal process")
+
+    # Execute certbot renew command
+    # Process output is captured and logged through our logger
+    debug_log(LOGGER, "Executing certbot renew command")
+    debug_log(LOGGER, "Will capture and relay all certbot output")
+    debug_log(LOGGER, "Process runs with isolated environment")
+
     process = Popen(
-        [
-            CERTBOT_BIN,
-            "renew",
-            "--no-random-sleep-on-renew",
-            "--config-dir",
-            DATA_PATH.as_posix(),
-            "--work-dir",
-            WORK_DIR,
-            "--logs-dir",
-            LOGS_DIR,
-        ]
-        + (["-v"] if getenv("CUSTOM_LOG_LEVEL", getenv("LOG_LEVEL", "INFO")).upper() == "DEBUG" else []),
-        stdin=DEVNULL,
-        stderr=PIPE,
-        universal_newlines=True,
-        env=env,
+        command,
+        stdin=DEVNULL,  # No input needed
+        stderr=PIPE,  # Capture error output
+        universal_newlines=True,  # Text mode
+        env=env,  # Controlled environment
     )
+
+    # Stream certbot output to our logger in real-time
+    # This ensures all certbot messages are captured in BunkerWeb logs
+    line_count = 0
+    debug_log(LOGGER, "Starting real-time output capture from certbot")
+
     while process.poll() is None:
         if process.stderr:
             for line in process.stderr:
+                line_count += 1
                 LOGGER_CERTBOT.info(line.strip())
+
+                if (getenv("LOG_LEVEL") == "debug"
+                        and line_count % 10 == 0):
+                    debug_log(LOGGER, f"Processed {line_count} lines of certbot output")
+
+    # Wait for process completion and check return code
+    final_return_code = process.returncode
+
+    debug_log(LOGGER, "Certbot process completed")
+    debug_log(LOGGER, f"Final return code: {final_return_code}")
+    debug_log(LOGGER, f"Total output lines processed: {line_count}")
+    debug_log(LOGGER, "Analyzing return code to determine success/failure")

-    if process.returncode != 0:
+    # Handle renewal results
+    if final_return_code != 0:
         status = 2
         LOGGER.error("Certificates renewal failed")
+        debug_log(LOGGER, "Certbot returned non-zero exit code")
+        debug_log(LOGGER, "Certificate renewal process failed")
+        debug_log(LOGGER, "Will not cache certificate data due to failure")
+    else:
+        LOGGER.info("Certificate renewal completed successfully")
+        debug_log(LOGGER, "Certbot completed successfully")
+        debug_log(LOGGER, "All eligible certificates have been renewed")
+        debug_log(LOGGER, "Proceeding to cache updated certificate data")

-    # Save Let's Encrypt data to db cache
+    # Save Let's Encrypt certificate data to database cache
+    # This ensures certificate data is available for distribution to cluster nodes
+    debug_log(LOGGER, "Checking certificate data directory for caching")
+    debug_log(LOGGER, f"Certificate data path: {DATA_PATH}")
+    debug_log(LOGGER, f"Directory exists: {DATA_PATH.is_dir()}")
+    if DATA_PATH.is_dir():
+        dir_contents = list(DATA_PATH.iterdir())
+        debug_log(LOGGER, f"Directory contains {len(dir_contents)} items")
+        debug_log(LOGGER, "Directory listing:")
+        for item in dir_contents[:5]:  # Show first 5 items
+            debug_log(LOGGER, f"  {item.name}")
+        if len(dir_contents) > 5:
+            debug_log(LOGGER, f"  ... and {len(dir_contents) - 5} more items")
+
+    # Only cache if directory exists and contains files
     if DATA_PATH.is_dir() and list(DATA_PATH.iterdir()):
+        debug_log(LOGGER, "Certificate data found - proceeding with caching")
+        debug_log(LOGGER, "This will store certificates in database for cluster distribution")
+
         cached, err = JOB.cache_dir(DATA_PATH)
         if not cached:
-            LOGGER.error(f"Error while saving Let's Encrypt data to db cache : {err}")
+            LOGGER.error(
+                f"Error while saving Let's Encrypt data to db cache: {err}"
+            )
+            debug_log(LOGGER, f"Cache operation failed with error: {err}")
+            debug_log(LOGGER, "Certificates renewed but not cached for distribution")
         else:
             LOGGER.info("Successfully saved Let's Encrypt data to db cache")
+            debug_log(LOGGER, "Certificate data successfully cached to database")
+            debug_log(LOGGER, "Cached certificates available for cluster distribution")
+    else:
+        debug_log(LOGGER, "No certificate data directory found or directory empty")
+        debug_log(LOGGER, "This may be normal if no certificates needed renewal")
+        LOGGER.warning("No certificate data found to cache")
+
 except SystemExit as e:
     status = e.code
+    debug_log(LOGGER, f"Script exiting via SystemExit with code: {e.code}")
+    debug_log(LOGGER, "This is typically a normal exit condition")
 except BaseException as e:
     status = 2
     LOGGER.debug(format_exc())
-    LOGGER.error(f"Exception while running certbot-renew.py :\n{e}")
+    LOGGER.error(f"Exception while running certbot-renew.py:\n{e}")
+
+    debug_log(LOGGER, "Unexpected exception occurred during renewal")
+    debug_log(LOGGER, "Full exception traceback logged above")
+    debug_log(LOGGER, "Setting exit status to 2 due to unexpected exception")
+    debug_log(LOGGER, "Renewal process aborted due to error")
+
+debug_log(LOGGER, f"Certificate renewal process completed with final status: {status}")
+if status == 0:
+    debug_log(LOGGER, "Renewal process completed successfully")
+    debug_log(LOGGER, "All certificates are up to date")
+elif status == 2:
+    debug_log(LOGGER, "Renewal process failed")
+    debug_log(LOGGER, "Manual intervention may be required")
+else:
+    debug_log(LOGGER, f"Renewal completed with status {status}")

-sys_exit(status)
+sys_exit(status)
\ No newline at end of file
diff --git a/src/common/core/letsencrypt/jobs/letsencrypt.py b/src/common/core/letsencrypt/jobs/letsencrypt.py
index 18ed75da69..3e66e10070 100644
--- a/src/common/core/letsencrypt/jobs/letsencrypt.py
+++ b/src/common/core/letsencrypt/jobs/letsencrypt.py
@@ -1,4 +1,5 @@
 # -*- coding: utf-8 -*-
+from os import getenv
 from pathlib import Path
 from sys import path as sys_path
 from typing import Dict, List, Literal, Optional
@@ -15,37 +16,77 @@ sys_path.append(python_path_str)

-def alias_model_validator(field_map: dict):
-    """Factory function for creating a `model_validator` for alias mapping."""
+def debug_log(message):
+    # Log debug messages only when LOG_LEVEL environment variable is set to
+    # "debug"
+    if getenv("LOG_LEVEL") == "debug":
+        print(f"[DEBUG] {message}")
+
+def alias_model_validator(field_map: dict):
+    # Factory function for creating a model_validator for alias mapping.
+    # This allows DNS providers to accept credentials under multiple field
+    # names for better compatibility with different configuration formats.
     def validator(cls, values):
+        debug_log(f"Processing aliases for {cls.__name__}")
+        debug_log(f"Input values: {list(values.keys())}")
+        debug_log(f"Field mapping has {len(field_map)} canonical fields")
+
         for field, aliases in field_map.items():
+            debug_log(f"Checking field '{field}' with {len(aliases)} aliases")
+
             for alias in aliases:
                 if alias in values:
+                    debug_log(f"Found alias '{alias}' for field '{field}'")
+                    debug_log(f"Mapping alias '{alias}' to canonical field '{field}'")
                     values[field] = values[alias]
                     break
+
+        debug_log(f"Final mapped values: {list(values.keys())}")
+        debug_log(f"Alias processing completed for {cls.__name__}")
+
         return values

     return model_validator(mode="before")(validator)

 class Provider(BaseModel):
-    """Base class for DNS providers."""
+    # Base class for DNS providers.
+    # Provides common functionality for credential formatting and file type
+    # handling. All DNS provider classes inherit from this base class.
     model_config = ConfigDict(extra="ignore")

     def get_formatted_credentials(self) -> bytes:
-        """Return the formatted credentials to be written to a file."""
-        return "\n".join(f"{key} = {value}" for key, value in self.model_dump(exclude={"file_type"}).items()).encode("utf-8")
+        # Return the formatted credentials to be written to a file.
+        # Default implementation creates INI-style key=value format.
+        excluded_fields = {"file_type"}
+        fields = self.model_dump(exclude=excluded_fields)
+        debug_log(f"{self.__class__.__name__} formatting {len(fields)} fields")
+        debug_log(f"Excluded fields: {excluded_fields}")
+        debug_log("Using default INI-style key=value format")
+
+        content = "\n".join(
+            f"{key} = {value}"
+            for key, value in self.model_dump(exclude={"file_type"}).items()
+        ).encode("utf-8")
+
+        debug_log(f"Generated {len(content)} bytes of credential content")
+        debug_log("Content will be written as UTF-8 encoded text")
+
+        return content

     @staticmethod
     def get_file_type() -> Literal["ini"]:
-        """Return the file type that the credentials should be written to."""
+        # Return the file type that the credentials should be written to.
+        # Default implementation returns 'ini' for most providers.
         return "ini"

 class CloudflareProvider(Provider):
-    """Cloudflare DNS provider."""
+    # Supports both API token (recommended) and legacy email/API key
+    # authentication. Requires either api_token OR both email and api_key
+    # for authentication.
     dns_cloudflare_api_token: str = ""
     dns_cloudflare_email: str = ""
@@ -53,26 +94,70 @@ class CloudflareProvider(Provider):

     _validate_aliases = alias_model_validator(
         {
-            "dns_cloudflare_api_token": ("dns_cloudflare_api_token", "cloudflare_api_token", "api_token"),
-            "dns_cloudflare_email": ("dns_cloudflare_email", "cloudflare_email", "email"),
-            "dns_cloudflare_api_key": ("dns_cloudflare_api_key", "cloudflare_api_key", "api_key"),
+            "dns_cloudflare_api_token": (
+                "dns_cloudflare_api_token", "cloudflare_api_token", "api_token"
+            ),
+            "dns_cloudflare_email": (
+                "dns_cloudflare_email", "cloudflare_email", "email"
+            ),
+            "dns_cloudflare_api_key": (
+                "dns_cloudflare_api_key", "cloudflare_api_key", "api_key"
+            ),
         }
     )

     def get_formatted_credentials(self) -> bytes:
-        """Return the formatted credentials, excluding defaults."""
-        return "\n".join(f"{key} = {value}" for key, value in self.model_dump(exclude={"file_type"}, exclude_defaults=True).items()).encode("utf-8")
+        # Return the formatted credentials, excluding defaults.
+        # Only includes non-empty credential fields to avoid cluttering
+        # output.
+        all_fields = self.model_dump(exclude={"file_type"})
+        non_default_fields = self.model_dump(
+            exclude={"file_type"}, exclude_defaults=True
+        )
+        debug_log(f"Cloudflare provider has {len(all_fields)} total fields")
+        debug_log(f"{len(non_default_fields)} non-default fields will be included")
+        debug_log("Excluding empty/default values to minimize credential file")
+
+        content = "\n".join(
+            f"{key} = {value}"
+            for key, value in self.model_dump(
+                exclude={"file_type"}, exclude_defaults=True
+            ).items()
+        ).encode("utf-8")
+
+        debug_log(f"Generated {len(content)} bytes of Cloudflare credentials")
+
+        return content

     @model_validator(mode="after")
     def validate_cloudflare_credentials(self):
-        """Validate Cloudflare credentials."""
-        if not self.dns_cloudflare_api_token and not (self.dns_cloudflare_email and self.dns_cloudflare_api_key):
-            raise ValueError("Either 'dns_cloudflare_api_token' or both 'dns_cloudflare_email' and 'dns_cloudflare_api_key' must be provided.")
+        # Validate Cloudflare credentials.
+        # Ensures either API token or email+API key combination is provided.
+        has_token = bool(self.dns_cloudflare_api_token)
+        has_legacy = bool(self.dns_cloudflare_email and self.dns_cloudflare_api_key)
+
+        debug_log("Cloudflare credential validation:")
+        debug_log(f"API token provided: {has_token}")
+        debug_log(f"Legacy email+key provided: {has_legacy}")
+        debug_log("At least one authentication method must be complete")
+
+        if not has_token and not has_legacy:
+            debug_log("Neither authentication method is complete")
+            debug_log("Validation will fail")
+            raise ValueError(
+                "Either 'dns_cloudflare_api_token' or both "
+                "'dns_cloudflare_email' and 'dns_cloudflare_api_key' must be provided."
+            )
+
+        debug_log("Cloudflare credentials validation passed")
+        auth_method = "API token" if has_token else "email+API key"
+        debug_log(f"Using {auth_method} authentication method")
+
         return self

 class DesecProvider(Provider):
-    """deSEC DNS provider."""
+    # Requires only an API token for authentication.
     dns_desec_token: str
@@ -84,19 +169,21 @@

 class DigitalOceanProvider(Provider):
-    """DigitalOcean DNS provider."""
+    # Requires a personal access token with read/write scope.
     dns_digitalocean_token: str

     _validate_aliases = alias_model_validator(
         {
-            "dns_digitalocean_token": ("dns_digitalocean_token", "digitalocean_token", "token"),
+            "dns_digitalocean_token": (
+                "dns_digitalocean_token", "digitalocean_token", "token"
+            ),
         }
     )

 class DnsimpleProvider(Provider):
-    """DNSimple DNS provider."""
+    # Requires an API token for authentication.
     dns_dnsimple_token: str
@@ -108,35 +195,46 @@

 class DnsMadeEasyProvider(Provider):
-    """DNS Made Easy DNS provider."""
+    # Requires API key and secret key.
+    # Both keys are required for authentication.
     dns_dnsmadeeasy_api_key: str
     dns_dnsmadeeasy_secret_key: str

     _validate_aliases = alias_model_validator(
         {
-            "dns_dnsmadeeasy_api_key": ("dns_dnsmadeeasy_api_key", "dnsmadeeasy_api_key", "api_key"),
-            "dns_dnsmadeeasy_secret_key": ("dns_dnsmadeeasy_secret_key", "dnsmadeeasy_secret_key", "secret_key"),
+            "dns_dnsmadeeasy_api_key": (
+                "dns_dnsmadeeasy_api_key", "dnsmadeeasy_api_key", "api_key"
+            ),
+            "dns_dnsmadeeasy_secret_key": (
+                "dns_dnsmadeeasy_secret_key", "dnsmadeeasy_secret_key",
+                "secret_key"
+            ),
         }
     )

 class GehirnProvider(Provider):
-    """Gehirn DNS provider."""
+    # Requires both API token and API secret for authentication.
     dns_gehirn_api_token: str
     dns_gehirn_api_secret: str

     _validate_aliases = alias_model_validator(
         {
-            "dns_gehirn_api_token": ("dns_gehirn_api_token", "gehirn_api_token", "api_token"),
-            "dns_gehirn_api_secret": ("dns_gehirn_api_secret", "gehirn_api_secret", "api_secret"),
+            "dns_gehirn_api_token": (
+                "dns_gehirn_api_token", "gehirn_api_token", "api_token"
+            ),
+            "dns_gehirn_api_secret": (
+                "dns_gehirn_api_secret", "gehirn_api_secret", "api_secret"
+            ),
         }
     )

 class GoogleProvider(Provider):
-    """Google Cloud DNS provider."""
+    # Uses Google Cloud service account credentials in JSON format.
+    # Requires a service account with DNS admin permissions.
     type: str = "service_account"
     project_id: str
@@ -146,48 +244,83 @@
     client_id: str
     auth_uri: str = "https://accounts.google.com/o/oauth2/auth"
     token_uri: str = "https://accounts.google.com/o/oauth2/token"
-    auth_provider_x509_cert_url: str = "https://www.googleapis.com/oauth2/v1/certs"
+    auth_provider_x509_cert_url: str = ("https://www.googleapis.com/"
+                                        "oauth2/v1/certs")
     client_x509_cert_url: str

     _validate_aliases = alias_model_validator(
         {
             "type": ("type", "google_type", "dns_google_type"),
-            "project_id": ("project_id", "google_project_id", "dns_google_project_id"),
-            "private_key_id": ("private_key_id", "google_private_key_id", "dns_google_private_key_id"),
-            "private_key": ("private_key", "google_private_key", "dns_google_private_key"),
-            "client_email": ("client_email", "google_client_email", "dns_google_client_email"),
-            "client_id": ("client_id", "google_client_id", "dns_google_client_id"),
+            "project_id": ("project_id", "google_project_id",
+                           "dns_google_project_id"),
+            "private_key_id": (
+                "private_key_id", "google_private_key_id",
+                "dns_google_private_key_id"
+            ),
+            "private_key": (
+                "private_key", "google_private_key", "dns_google_private_key"
+            ),
+            "client_email": (
+                "client_email", "google_client_email", "dns_google_client_email"
+            ),
+            "client_id": ("client_id", "google_client_id",
+                          "dns_google_client_id"),
             "auth_uri": ("auth_uri", "google_auth_uri", "dns_google_auth_uri"),
-            "token_uri": ("token_uri", "google_token_uri", "dns_google_token_uri"),
-            "auth_provider_x509_cert_url": ("auth_provider_x509_cert_url", "google_auth_provider_x509_cert_url", "dns_google_auth_provider_x509_cert_url"),
-            "client_x509_cert_url": ("client_x509_cert_url", "google_client_x509_cert_url", "dns_google_client_x509_cert_url"),
+            "token_uri": ("token_uri", "google_token_uri",
+                          "dns_google_token_uri"),
+            "auth_provider_x509_cert_url": (
+                "auth_provider_x509_cert_url",
+                "google_auth_provider_x509_cert_url",
+                "dns_google_auth_provider_x509_cert_url"
+            ),
+            "client_x509_cert_url": (
+                "client_x509_cert_url",
+                "google_client_x509_cert_url",
+                "dns_google_client_x509_cert_url"
+            ),
         }
     )

     def get_formatted_credentials(self) -> bytes:
-        """Return the formatted credentials in JSON format."""
-        return self.model_dump_json(indent=2, exclude={"file_type"}).encode("utf-8")
+        # Return the formatted credentials in JSON format.
+        # Google Cloud requires credentials in JSON service account format.
+        debug_log("Google provider formatting credentials as JSON")
+        debug_log("Using service account JSON format required by Google Cloud")
+
+        json_content = self.model_dump_json(
+            indent=2, exclude={"file_type"}
+        ).encode("utf-8")
+
+        debug_log(f"Generated {len(json_content)} bytes of JSON credentials")
+        debug_log("JSON format includes proper indentation for readability")
+
+        return json_content

     @staticmethod
     def get_file_type() -> Literal["json"]:
-        """Return the file type that the credentials should be written to."""
+        # Return the file type that the credentials should be written to.
+        # Google provider requires JSON format for service account
+        # credentials.
         return "json"

 class InfomaniakProvider(Provider):
-    """Infomaniak DNS provider."""
+    # Requires an API token for authentication.
     dns_infomaniak_token: str

     _validate_aliases = alias_model_validator(
         {
-            "dns_infomaniak_token": ("dns_infomaniak_token", "infomaniak_token", "token"),
+            "dns_infomaniak_token": (
+                "dns_infomaniak_token", "infomaniak_token", "token"
+            ),
         }
     )

 class IonosProvider(Provider):
-    """Ionos DNS provider."""
+    # Requires prefix and secret for authentication, with configurable
+    # endpoint.
     dns_ionos_prefix: str
     dns_ionos_secret: str
@@ -197,13 +330,14 @@
         {
             "dns_ionos_prefix": ("dns_ionos_prefix", "ionos_prefix", "prefix"),
             "dns_ionos_secret": ("dns_ionos_secret", "ionos_secret", "secret"),
-            "dns_ionos_endpoint": ("dns_ionos_endpoint", "ionos_endpoint", "endpoint"),
+            "dns_ionos_endpoint": ("dns_ionos_endpoint", "ionos_endpoint",
+                                   "endpoint"),
         }
     )

 class LinodeProvider(Provider):
-    """Linode DNS provider."""
+    # Requires an API key for authentication.
     dns_linode_key: str
@@ -215,7 +349,8 @@

 class LuaDnsProvider(Provider):
-    """LuaDns DNS provider."""
+    # Requires email and token authentication.
+    # Both email and token are required for API access.
     dns_luadns_email: str
     dns_luadns_token: str
@@ -229,19 +364,20 @@

 class NSOneProvider(Provider):
-    """NS1 DNS provider."""
+    # Requires an API key for authentication.
     dns_nsone_api_key: str

     _validate_aliases = alias_model_validator(
         {
-            "dns_nsone_api_key": ("dns_nsone_api_key", "nsone_api_key", "api_key"),
+            "dns_nsone_api_key": ("dns_nsone_api_key", "nsone_api_key",
+                                  "api_key"),
         }
     )

 class OvhProvider(Provider):
-    """OVH DNS provider."""
+    # Requires application key, secret, and consumer key for authentication.
     dns_ovh_endpoint: str = "ovh-eu"
     dns_ovh_application_key: str
@@ -251,15 +387,24 @@
     _validate_aliases = alias_model_validator(
         {
             "dns_ovh_endpoint": ("dns_ovh_endpoint", "ovh_endpoint", "endpoint"),
-            "dns_ovh_application_key": ("dns_ovh_application_key", "ovh_application_key", "application_key"),
-            "dns_ovh_application_secret": ("dns_ovh_application_secret", "ovh_application_secret", "application_secret"),
-            "dns_ovh_consumer_key": ("dns_ovh_consumer_key", "ovh_consumer_key", "consumer_key"),
+            "dns_ovh_application_key": (
+                "dns_ovh_application_key", "ovh_application_key",
+                "application_key"
+            ),
+            "dns_ovh_application_secret": (
+                "dns_ovh_application_secret", "ovh_application_secret",
+                "application_secret"
+            ),
+            "dns_ovh_consumer_key": (
+                "dns_ovh_consumer_key", "ovh_consumer_key", "consumer_key"
+            ),
         }
     )

 class Rfc2136Provider(Provider):
-    """RFC 2136 DNS provider."""
+    # Standard protocol for dynamic DNS updates using TSIG authentication.
+    # Supports HMAC-based authentication with configurable algorithms.
     dns_rfc2136_server: str
     dns_rfc2136_port: Optional[str] = None
@@ -270,228 +415,351 @@
     _validate_aliases = alias_model_validator(
         {
-            "dns_rfc2136_server": ("dns_rfc2136_server", "rfc2136_server", "server"),
+            "dns_rfc2136_server": ("dns_rfc2136_server", "rfc2136_server",
+                                   "server"),
             "dns_rfc2136_port": ("dns_rfc2136_port", "rfc2136_port", "port"),
             "dns_rfc2136_name": ("dns_rfc2136_name", "rfc2136_name", "name"),
-            "dns_rfc2136_secret": ("dns_rfc2136_secret", "rfc2136_secret", "secret"),
-            "dns_rfc2136_algorithm": ("dns_rfc2136_algorithm", "rfc2136_algorithm", "algorithm"),
-            "dns_rfc2136_sign_query": ("dns_rfc2136_sign_query", "rfc2136_sign_query", "sign_query"),
+            "dns_rfc2136_secret": ("dns_rfc2136_secret", "rfc2136_secret",
+                                   "secret"),
+            "dns_rfc2136_algorithm": (
+                "dns_rfc2136_algorithm", "rfc2136_algorithm", "algorithm"
+            ),
+            "dns_rfc2136_sign_query": (
+                "dns_rfc2136_sign_query", "rfc2136_sign_query", "sign_query"
+            ),
         }
     )

     def get_formatted_credentials(self) -> bytes:
-        """Return the formatted credentials, excluding defaults."""
-        return "\n".join(f"{key} = {value}" for key, value in self.model_dump(exclude={"file_type"}, exclude_defaults=True).items()).encode("utf-8")
+        # Return the formatted credentials, excluding defaults.
+        # RFC2136 provider excludes default values to minimize configuration.
+        all_fields = self.model_dump(exclude={"file_type"})
+        non_default_fields = self.model_dump(
+            exclude={"file_type"}, exclude_defaults=True
+        )
+        debug_log(f"RFC2136 provider has {len(all_fields)} total fields")
+        debug_log(f"{len(non_default_fields)} non-default fields included")
+        debug_log("Excluding defaults to minimize RFC2136 configuration")
+
+        content = "\n".join(
+            f"{key} = {value}"
+            for key, value in self.model_dump(
+                exclude={"file_type"}, exclude_defaults=True
+            ).items()
+        ).encode("utf-8")
+
+        debug_log(f"Generated {len(content)} bytes of RFC2136 credentials")
+
+        return content

 class Route53Provider(Provider):
-    """AWS Route 53 DNS provider."""
+    # Uses IAM credentials.
+    # Requires AWS access key ID and secret access key.
     aws_access_key_id: str
     aws_secret_access_key: str

     _validate_aliases = alias_model_validator(
         {
-            "aws_access_key_id": ("aws_access_key_id", "dns_aws_access_key_id", "access_key_id"),
-            "aws_secret_access_key": ("aws_secret_access_key", "dns_aws_secret_access_key", "secret_access_key"),
+            "aws_access_key_id": (
+                "aws_access_key_id", "dns_aws_access_key_id", "access_key_id"
+            ),
+            "aws_secret_access_key": (
+                "aws_secret_access_key", "dns_aws_secret_access_key",
+                "secret_access_key"
+            ),
         }
     )

     def get_formatted_credentials(self) -> bytes:
-        """Return the formatted credentials in environment variable format."""
-        return "\n".join(f"{key.upper()}={value!r}" for key, value in self.model_dump(exclude={"file_type"}).items()).encode("utf-8")
+        # Return the formatted credentials in environment variable format.
+        # Route53 uses environment variables for AWS credentials.
+        fields = self.model_dump(exclude={"file_type"})
+        debug_log(f"Route53 provider formatting {len(fields)} fields as env vars")
+        debug_log("Using environment variable format for AWS credentials")
+
+        content = "\n".join(
+            f"{key.upper()}={value!r}"
+            for key, value in self.model_dump(exclude={"file_type"}).items()
+        ).encode("utf-8")
+
+        debug_log(f"Generated {len(content)} bytes of environment variables")
+        debug_log("Variables will be uppercase as per AWS convention")
+
+        return content

     @staticmethod
     def get_file_type() -> Literal["env"]:
-        """Return the file type that the credentials should be written to."""
+        # Return the file type that the credentials should be written to.
+        # Route53 provider uses environment variable format.
         return "env"

 class SakuraCloudProvider(Provider):
-    """Sakura Cloud DNS provider."""
+    # Requires API token and secret for authentication.
     dns_sakuracloud_api_token: str
     dns_sakuracloud_api_secret: str

     _validate_aliases = alias_model_validator(
         {
-            "dns_sakuracloud_api_token": ("dns_sakuracloud_api_token", "sakuracloud_api_token", "api_token"),
-            "dns_sakuracloud_api_secret": ("dns_sakuracloud_api_secret", "sakuracloud_api_secret", "api_secret"),
+            "dns_sakuracloud_api_token": (
+                "dns_sakuracloud_api_token", "sakuracloud_api_token",
+                "api_token"
+            ),
+            "dns_sakuracloud_api_secret": (
+                "dns_sakuracloud_api_secret", "sakuracloud_api_secret",
+                "api_secret"
+            ),
         }
     )

 class ScalewayProvider(Provider):
-    """Scaleway DNS provider."""
+    # Requires an application token for authentication.
     dns_scaleway_application_token: str

     _validate_aliases = alias_model_validator(
         {
-            "dns_scaleway_application_token": ("dns_scaleway_application_token", "scaleway_application_token", "application_token"),
+            "dns_scaleway_application_token": (
+                "dns_scaleway_application_token", "scaleway_application_token",
+                "application_token"
+            ),
         }
     )

 class NjallaProvider(Provider):
-    """Njalla DNS provider."""
+    # Requires an API token for authentication.
     dns_njalla_token: str

     _validate_aliases = alias_model_validator(
         {
-            "dns_njalla_token": ("dns_njalla_token", "njalla_token", "token", "api_token", "auth_token"),
+            "dns_njalla_token": (
+                "dns_njalla_token", "njalla_token", "token", "api_token",
+                "auth_token"
+            ),
         }
     )

 class WildcardGenerator:
-    """Manages the generation of wildcard domains across domain groups."""
+    # Manages the generation of wildcard domains across domain groups.
+    # Handles grouping of domains and automatic wildcard pattern generation
+    # for efficient certificate management across multiple subdomains.
     def __init__(self):
-        self.__domain_groups = {}  # Stores raw domains grouped by identifier
-        self.__wildcards = {}  # Stores generated wildcard patterns
-
-    def extend(self, group: str, domains: List[str], email: str, staging: bool = False):
-        """
-        Add domains to a group and regenerate wildcards.
-
-        Args:
-            group: Group identifier for these domains
-            domains: List of domains to add
-            email: Contact email for this domain group
-            staging: Whether these domains are for staging environment
-        """
+        debug_log("Initializing WildcardGenerator")
+        debug_log("Setting up empty domain groups and wildcard storage")
+
+        # Stores raw domains grouped by identifier
+        self.__domain_groups = {}
+        # Stores generated wildcard patterns
+        self.__wildcards = {}
+
+        debug_log("WildcardGenerator initialized with empty groups")
+        debug_log("Ready to accept domain groups for wildcard generation")
+
+    def extend(self, group: str, domains: List[str], email: str,
+               staging: bool = False):
+        # Add domains to a group and regenerate wildcards.
+        # Organizes domains by group and environment for wildcard generation.
+        debug_log(f"Extending group '{group}' with {len(domains)} domains")
+        debug_log(f"Environment: {'staging' if staging else 'production'}")
+        debug_log(f"Contact email: {email}")
+        debug_log(f"Domain list: {domains}")
+
         # Initialize group if it doesn't exist
         if group not in self.__domain_groups:
-            self.__domain_groups[group] = {"staging": set(), "prod": set(), "email": email}
+            self.__domain_groups[group] = {
+                "staging": set(),
+                "prod": set(),
+                "email": email
+            }
+            debug_log(f"Created new domain group '{group}'")
+            debug_log("Group initialized with empty staging and prod sets")

         # Add domains to appropriate environment
         env_type = "staging" if staging else "prod"
+        domains_added = 0
         for domain in domains:
             if domain := domain.strip():
                 self.__domain_groups[group][env_type].add(domain)
+                domains_added += 1
+
+        debug_log(f"Added {domains_added} valid domains to {env_type} environment")
+        total_staging = len(self.__domain_groups[group]["staging"])
+        total_prod = len(self.__domain_groups[group]["prod"])
+        debug_log(f"Group '{group}' totals: {total_staging} staging, {total_prod} prod domains")

         # Regenerate wildcards after adding new domains
         self.__generate_wildcards(staging)

     def __generate_wildcards(self, staging: bool = False):
-        """
-        Generate wildcard patterns for the specified environment.
-
-        Args:
-            staging: Whether to generate wildcards for staging environment
-        """
-        self.__wildcards.clear()
+        # Generate wildcard patterns for the specified environment.
+        # Creates optimized wildcard certificates that cover multiple
+        # subdomains.
         env_type = "staging" if staging else "prod"
+        debug_log(f"Generating wildcards for {env_type} environment")
+        debug_log(f"Processing {len(self.__domain_groups)} domain groups")
+        debug_log("Will convert subdomains to wildcard patterns")
+
+        self.__wildcards.clear()
+        wildcards_generated = 0

         # Process each domain group
         for group, types in self.__domain_groups.items():
             if group not in self.__wildcards:
-                self.__wildcards[group] = {"staging": set(), "prod": set(), "email": types["email"]}
+                self.__wildcards[group] = {
+                    "staging": set(),
+                    "prod": set(),
+                    "email": types["email"]
+                }

             # Process each domain in the group
             for domain in types[env_type]:
                 # Convert domain to wildcards and add to appropriate group
                 self.__add_domain_wildcards(domain, group, env_type)
+                wildcards_generated += 1
+
+        debug_log(f"Generated wildcard patterns for {wildcards_generated} domains")
+        debug_log("Wildcard generation completed")

     def __add_domain_wildcards(self, domain: str, group: str, env_type: str):
-        """
-        Convert a domain to wildcard patterns and add to the wildcards collection.
-
-        Args:
-            domain: Domain to process
-            group: Group identifier
-            env_type: Environment type (staging or prod)
-        """
+        # Convert a domain to wildcard patterns and add to the wildcards
+        # collection. Determines optimal wildcard patterns based on domain
+        # structure.
+        debug_log(f"Processing domain '{domain}' for wildcard patterns")
+
         parts = domain.split(".")
+
+        debug_log(f"Domain has {len(parts)} parts: {parts}")
+        debug_log("Analyzing domain structure for wildcard generation")

         # Handle subdomains (domains with more than 2 parts)
         if len(parts) > 2:
             # Create wildcard for the base domain (e.g., *.example.com)
             base_domain = ".".join(parts[1:])
-            self.__wildcards[group][env_type].add(f"*.{base_domain}")
+            wildcard_domain = f"*.{base_domain}"
+
+            self.__wildcards[group][env_type].add(wildcard_domain)
             self.__wildcards[group][env_type].add(base_domain)
+
+            debug_log(f"Subdomain detected - created wildcard '{wildcard_domain}'")
+            debug_log(f"Also added base domain '{base_domain}'")
+            debug_log("Wildcard will cover all subdomains of base")
         else:
             # Just add the raw domain for top-level domains
             self.__wildcards[group][env_type].add(domain)
-
-    def get_wildcards(self) -> Dict[str, Dict[Literal["staging", "prod", "email"], str]]:
-        """
-        Get formatted wildcard domains for each group.
-
-        Returns:
-            Dictionary of group data with formatted wildcard domains
-        """
+
+            debug_log(f"Top-level domain - added '{domain}' directly")
+            debug_log("No wildcard needed for top-level domain")
+
+    def get_wildcards(self) -> Dict[str, Dict[Literal["staging", "prod",
+                                                      "email"], str]]:
+        # Get formatted wildcard domains for each group.
+        # Returns organized wildcard data ready for certificate generation.
+        debug_log(f"Formatting wildcards for {len(self.__wildcards)} groups")
+        debug_log("Converting wildcard sets to comma-separated strings")
+
         result = {}
+        total_domains = 0
+
         for group, data in self.__wildcards.items():
             result[group] = {"email": data["email"]}
+
             for env_type in ("staging", "prod"):
                 if domains := data[env_type]:
                     # Sort domains with wildcards first
-                    result[group][env_type] = ",".join(sorted(domains, key=lambda x: x[0] != "*"))
+                    sorted_domains = sorted(domains, key=lambda x: x[0] != "*")
+                    result[group][env_type] = ",".join(sorted_domains)
+                    total_domains += len(domains)
+
+                    debug_log(f"Group '{group}' {env_type}: {len(domains)} domains")
+                    debug_log(f"Sorted with wildcards first: {sorted_domains[:3]}...")
+
+        debug_log(f"Formatted {total_domains} total wildcard domains")
+        debug_log("Ready for certificate generation")
+
         return result

     @staticmethod
     def extract_wildcards_from_domains(domains: List[str]) -> List[str]:
-        """
-        Generate wildcard patterns from a list of domains.
-
-        Args:
-            domains: List of domains to process
-
-        Returns:
-            List of extracted wildcard domains
-        """
+        # Generate wildcard patterns from a list of domains.
+        # Static method for generating wildcards without managing groups.
+        debug_log(f"Extracting wildcards from {len(domains)} domains")
+        debug_log(f"Input domains: {domains}")
+        debug_log("Static method - no group management")
+
        wildcards = set()
+
         for domain in domains:
             parts = domain.split(".")
+
+            debug_log(f"Processing '{domain}' with {len(parts)} parts")
+
             # Generate wildcards for subdomains
             if len(parts) > 2:
                 base_domain = ".".join(parts[1:])
                 wildcards.add(f"*.{base_domain}")
                 wildcards.add(base_domain)
+                debug_log(f"Added wildcard *.{base_domain} and base {base_domain}")
             else:
                 # Just add the domain for top-level domains
                 wildcards.add(domain)
+                debug_log(f"Added top-level domain {domain} directly")

         # Sort with wildcards first
-        return sorted(wildcards, key=lambda x: x[0] != "*")
+        result = sorted(wildcards, key=lambda x: x[0] != "*")
+
+        debug_log(f"Generated {len(result)} wildcard patterns")
+        debug_log(f"Final result: {result}")
+
+        return result

     @staticmethod
     def get_base_domain(domain: str) -> str:
-        """
-        Extract the base domain from a domain name.
-
-        Args:
-            domain: Input domain name
-
-        Returns:
-            Base domain (without wildcard prefix if present)
-        """
-        return domain.lstrip("*.")
+        # Extract the base domain from a domain name.
+        # Removes wildcard prefix if present to get the actual domain.
+        base = domain.lstrip("*.")
+
+        if domain != base:
+            debug_log(f"Extracted base domain '{base}' from wildcard '{domain}'")
+        else:
+            debug_log(f"Domain '{domain}' is already a base domain")
+
+        return base

     @staticmethod
-    def create_group_name(domain: str, provider: str, challenge_type: str, staging: bool, content_hash: str, profile: str = "classic") -> str:
-        """
-        Generate a consistent group name for wildcards.
-
-        Args:
-            domain: The domain name
-            provider: DNS provider name or 'http' for HTTP challenge
-            challenge_type: Challenge type (dns or http)
-            staging: Whether this is for staging environment
-            content_hash: Hash of credential content
-            profile: Certificate profile (classic, tlsserver or shortlived)
-
-        Returns:
-            A formatted group name string
-        """
+    def create_group_name(domain: str, provider: str, challenge_type: str,
+                          staging: bool, content_hash: str,
+                          profile: str = "classic") -> str:
+        # Generate a consistent group name for wildcards.
+        # Creates a unique identifier for grouping related wildcard
+        # certificates.
+        debug_log(f"Creating group name for domain '{domain}'")
+        debug_log(f"Provider: {provider}, Challenge: {challenge_type}")
+        debug_log(f"Environment: {'staging' if staging else 'production'}")
+        debug_log(f"Profile: {profile}")
+        debug_log(f"Content hash: {content_hash[:10]}... (truncated)")
+
         # Extract base domain and format it for the group name
-        base_domain = WildcardGenerator.get_base_domain(domain).replace(".", "-")
+        base_domain = WildcardGenerator.get_base_domain(domain).replace(".",
+                                                                        "-")
         env = "staging" if staging else "prod"

         # Use provider name for DNS challenge, otherwise use 'http'
         challenge_identifier = provider if challenge_type == "dns" else "http"

-        return f"{challenge_identifier}_{env}_{profile}_{base_domain}_{content_hash}"
+        group_name = (f"{challenge_identifier}_{env}_{profile}_{base_domain}_"
+                      f"{content_hash}")
+
+        debug_log(f"Base domain formatted: {base_domain}")
+        debug_log(f"Challenge identifier: {challenge_identifier}")
+        debug_log(f"Generated group name: '{group_name}'")
+        debug_log("Group name ensures consistent certificate grouping")
+
+        return group_name
\ No newline at end of file
diff --git a/src/common/core/letsencrypt/letsencrypt.lua b/src/common/core/letsencrypt/letsencrypt.lua
index 09646547da..693223e536 100644
--- a/src/common/core/letsencrypt/letsencrypt.lua
+++ b/src/common/core/letsencrypt/letsencrypt.lua
@@ -28,259 +28,411 @@ local decode = cjson.decode
 local execute = os.execute
 local remove = os.remove

+-- Log debug messages only when LOG_LEVEL environment variable is set to
+-- "debug"
+local function debug_log(logger, message)
+    if os.getenv("LOG_LEVEL") == "debug" then
+        logger:log(NOTICE, "[DEBUG] " .. message)
+    end
+end
+
+-- Initialize the letsencrypt plugin with the given context
+-- @param ctx The context object containing plugin configuration
 function letsencrypt:initialize(ctx)
-	-- Call parent initialize
-	plugin.initialize(self, "letsencrypt", ctx)
+    -- Call parent initialize
+    plugin.initialize(self, "letsencrypt", ctx)
 end

+-- Set the https_configured flag based on AUTO_LETS_ENCRYPT variable
+-- Configures HTTPS settings for the plugin
 function letsencrypt:set()
-	local https_configured = self.variables["AUTO_LETS_ENCRYPT"]
-	if https_configured == "yes" then
-		self.ctx.bw.https_configured = "yes"
-	end
-	return self:ret(true, "set https_configured to " .. https_configured)
+    local https_configured = self.variables["AUTO_LETS_ENCRYPT"]
+    if https_configured == "yes" then
+        self.ctx.bw.https_configured = "yes"
+    end
+    debug_log(self.logger, "Set https_configured to " .. https_configured)
+    return self:ret(true, "set https_configured to " .. https_configured)
 end

+-- Initialize SSL certificates and load them into the datastore
+-- Handles both multisite and single-site configurations, processes wildcard
+-- certificates, and loads certificate data from filesystem
 function letsencrypt:init()
-	local ret_ok, ret_err = true, "success"
-	local wildcard_servers = {}
+    local ret_ok, ret_err = true, "success"
+    local wildcard_servers = {}
+
+    debug_log(self.logger, "Starting letsencrypt init phase")

-	if has_variable("AUTO_LETS_ENCRYPT", "yes") then
-		local multisite, err = get_variable("MULTISITE", false)
-		if not multisite then
-			return self:ret(false, "can't get MULTISITE variable : " .. err)
-		end
-		if multisite == "yes" then
-			local vars
-			vars, err = get_multiple_variables({
-				"AUTO_LETS_ENCRYPT",
-				"LETS_ENCRYPT_CHALLENGE",
-				"LETS_ENCRYPT_DNS_PROVIDER",
-				"USE_LETS_ENCRYPT_WILDCARD",
-				"SERVER_NAME",
-			})
-			if not vars then
-				return self:ret(false, "can't get required variables : " .. err)
-			end
-			local credential_items
-			credential_items, err = get_multiple_variables({ "LETS_ENCRYPT_DNS_CREDENTIAL_ITEM" })
-			if not credential_items then
-				return self:ret(false, "can't get credential items : " .. err)
-			end
-			for server_name, multisite_vars in pairs(vars) do
-				if
-					multisite_vars["AUTO_LETS_ENCRYPT"] == "yes"
-					and server_name ~= "global"
-					and (
-						multisite_vars["LETS_ENCRYPT_CHALLENGE"] == "http"
-						or (
-							multisite_vars["LETS_ENCRYPT_CHALLENGE"] == "dns"
-							and multisite_vars["LETS_ENCRYPT_DNS_PROVIDER"] ~= ""
-							and credential_items[server_name]
-						)
-					)
-				then
-					local data
-					if
-						multisite_vars["LETS_ENCRYPT_CHALLENGE"] == "dns"
-						and multisite_vars["USE_LETS_ENCRYPT_WILDCARD"] == "yes"
-					then
-						for part in server_name:gmatch("%S+") do
-							wildcard_servers[part] = true
-						end
-						local parts = {}
-						for part in server_name:gmatch("[^.]+") do
-							table.insert(parts, part)
-						end
-						server_name = table.concat(parts, ".", 2)
-						data = self.datastore:get("plugin_letsencrypt_" .. server_name, true)
-					else
-						for part in server_name:gmatch("%S+") do
-							wildcard_servers[part] = false
-						end
-					end
-					if not data then
-						-- Load certificate
-						local check
-						check, data = read_files({
-							"/var/cache/bunkerweb/letsencrypt/etc/live/" .. server_name .. "/fullchain.pem",
-							"/var/cache/bunkerweb/letsencrypt/etc/live/" .. server_name .. "/privkey.pem",
-						})
-						if not check then
-							self.logger:log(ERR, "error while reading files : " ..
data) - ret_ok = false - ret_err = "error reading files" - else - if - multisite_vars["LETS_ENCRYPT_CHALLENGE"] == "dns" - and multisite_vars["USE_LETS_ENCRYPT_WILDCARD"] == "yes" - then - check, err = self:load_data(data, server_name) - else - check, err = self:load_data(data, multisite_vars["SERVER_NAME"]) - end - if not check then - self.logger:log(ERR, "error while loading data : " .. err) - ret_ok = false - ret_err = "error loading data" - end - end - end - end - end - else - local server_name - server_name, err = get_variable("SERVER_NAME", false) - if not server_name then - return self:ret(false, "can't get SERVER_NAME variable : " .. err) - end - local use_wildcard - use_wildcard, err = get_variable("USE_LETS_ENCRYPT_WILDCARD", false) - if not use_wildcard then - return self:ret(false, "can't get USE_LETS_ENCRYPT_WILDCARD variable : " .. err) - end - local challenge - challenge, err = get_variable("LETS_ENCRYPT_CHALLENGE", false) - if not challenge then - return self:ret(false, "can't get LETS_ENCRYPT_CHALLENGE variable : " .. err) - end - server_name = server_name:match("%S+") - if challenge == "dns" and use_wildcard == "yes" then - for part in server_name:gmatch("%S+") do - wildcard_servers[part] = true - end - local parts = {} - for part in server_name:gmatch("[^.]+") do - table.insert(parts, part) - end - server_name = table.concat(parts, ".", 2) - else - for part in server_name:gmatch("%S+") do - wildcard_servers[part] = false - end - end - local check, data = read_files({ - "/var/cache/bunkerweb/letsencrypt/etc/live/" .. server_name .. "/fullchain.pem", - "/var/cache/bunkerweb/letsencrypt/etc/live/" .. server_name .. "/privkey.pem", - }) - if not check then - self.logger:log(ERR, "error while reading files : " .. data) - ret_ok = false - ret_err = "error reading files" - else - check, err = self:load_data(data, server_name) - if not check then - self.logger:log(ERR, "error while loading data : " .. 
err) - ret_ok = false - ret_err = "error loading data" - end - end - end - else - ret_err = "let's encrypt is not used" - end + if has_variable("AUTO_LETS_ENCRYPT", "yes") then + debug_log(self.logger, "AUTO_LETS_ENCRYPT is enabled") + + local multisite, err = get_variable("MULTISITE", false) + if not multisite then + return self:ret(false, "can't get MULTISITE variable : " .. err) + end + + debug_log(self.logger, "MULTISITE mode is " .. multisite) + + if multisite == "yes" then + debug_log(self.logger, "Processing multisite configuration") + + local vars + vars, err = get_multiple_variables({ + "AUTO_LETS_ENCRYPT", + "LETS_ENCRYPT_CHALLENGE", + "LETS_ENCRYPT_DNS_PROVIDER", + "USE_LETS_ENCRYPT_WILDCARD", + "SERVER_NAME", + }) + if not vars then + return self:ret(false, + "can't get required variables : " .. err) + end + + local credential_items + credential_items, err = get_multiple_variables({ + "LETS_ENCRYPT_DNS_CREDENTIAL_ITEM" + }) + if not credential_items then + return self:ret(false, + "can't get credential items : " .. err) + end + + for server_name, multisite_vars in pairs(vars) do + debug_log(self.logger, + "Processing server: " .. server_name) + + if multisite_vars["AUTO_LETS_ENCRYPT"] == "yes" + and server_name ~= "global" + and ( + multisite_vars["LETS_ENCRYPT_CHALLENGE"] == "http" + or ( + multisite_vars["LETS_ENCRYPT_CHALLENGE"] == "dns" + and multisite_vars["LETS_ENCRYPT_DNS_PROVIDER"] + ~= "" + and credential_items[server_name] + ) + ) + then + debug_log(self.logger, + "Server " .. server_name .. " qualifies for SSL") + + local data + if multisite_vars["LETS_ENCRYPT_CHALLENGE"] == "dns" + and multisite_vars["USE_LETS_ENCRYPT_WILDCARD"] + == "yes" + then + debug_log(self.logger, + "Using wildcard configuration for " .. 
+ server_name) + + for part in server_name:gmatch("%S+") do + wildcard_servers[part] = true + end + local parts = {} + for part in server_name:gmatch("[^.]+") do + table.insert(parts, part) + end + server_name = table.concat(parts, ".", 2) + data = self.datastore:get("plugin_letsencrypt_" .. + server_name, true) + else + for part in server_name:gmatch("%S+") do + wildcard_servers[part] = false + end + end + + if not data then + debug_log(self.logger, + "Loading certificate files for " .. server_name) + + -- Load certificate + local check + check, data = read_files({ + "/var/cache/bunkerweb/letsencrypt/etc/live/" .. + server_name .. "/fullchain.pem", + "/var/cache/bunkerweb/letsencrypt/etc/live/" .. + server_name .. "/privkey.pem", + }) + if not check then + self.logger:log(ERR, + "error while reading files : " .. data) + ret_ok = false + ret_err = "error reading files" + else + if multisite_vars["LETS_ENCRYPT_CHALLENGE"] + == "dns" + and multisite_vars["USE_LETS_ENCRYPT_WILDCARD"] + == "yes" + then + check, err = self:load_data(data, server_name) + else + check, err = self:load_data(data, + multisite_vars["SERVER_NAME"]) + end + if not check then + self.logger:log(ERR, + "error while loading data : " .. err) + ret_ok = false + ret_err = "error loading data" + end + end + end + end + end + else + debug_log(self.logger, "Processing single-site configuration") + + local server_name + server_name, err = get_variable("SERVER_NAME", false) + if not server_name then + return self:ret(false, + "can't get SERVER_NAME variable : " .. err) + end + + local use_wildcard + use_wildcard, err = get_variable("USE_LETS_ENCRYPT_WILDCARD", + false) + if not use_wildcard then + return self:ret(false, + "can't get USE_LETS_ENCRYPT_WILDCARD variable : " .. err) + end + + local challenge + challenge, err = get_variable("LETS_ENCRYPT_CHALLENGE", false) + if not challenge then + return self:ret(false, + "can't get LETS_ENCRYPT_CHALLENGE variable : " .. 
err) + end + + server_name = server_name:match("%S+") + debug_log(self.logger, + "Processing server_name: " .. server_name) + + if challenge == "dns" and use_wildcard == "yes" then + debug_log(self.logger, + "Using wildcard DNS challenge for " .. server_name) + + for part in server_name:gmatch("%S+") do + wildcard_servers[part] = true + end + local parts = {} + for part in server_name:gmatch("[^.]+") do + table.insert(parts, part) + end + server_name = table.concat(parts, ".", 2) + else + for part in server_name:gmatch("%S+") do + wildcard_servers[part] = false + end + end + + debug_log(self.logger, + "Loading certificates for " .. server_name) + + local check, data = read_files({ + "/var/cache/bunkerweb/letsencrypt/etc/live/" .. + server_name .. "/fullchain.pem", + "/var/cache/bunkerweb/letsencrypt/etc/live/" .. + server_name .. "/privkey.pem", + }) + if not check then + self.logger:log(ERR, "error while reading files : " .. data) + ret_ok = false + ret_err = "error reading files" + else + check, err = self:load_data(data, server_name) + if not check then + self.logger:log(ERR, "error while loading data : " .. err) + ret_ok = false + ret_err = "error loading data" + end + end + end + else + debug_log(self.logger, "Let's Encrypt is not enabled") + ret_err = "let's encrypt is not used" + end - local ok, err = self.datastore:set("plugin_letsencrypt_wildcard_servers", wildcard_servers, nil, true) - if not ok then - return self:ret(false, "error while setting wildcard servers into datastore : " .. err) - end + debug_log(self.logger, "Storing wildcard servers configuration") + + local ok, err = self.datastore:set("plugin_letsencrypt_wildcard_servers", + wildcard_servers, nil, true) + if not ok then + return self:ret(false, + "error while setting wildcard servers into datastore : " .. err) + end - return self:ret(ret_ok, ret_err) + debug_log(self.logger, "Init phase completed with status: " .. 
+ tostring(ret_ok)) + + return self:ret(ret_ok, ret_err) end +-- Handle SSL certificate selection based on SNI +-- Determines which certificate to use based on server name indication and +-- wildcard configuration function letsencrypt:ssl_certificate() - local server_name, err = ssl_server_name() - if not server_name then - return self:ret(false, "can't get server_name : " .. err) - end - local wildcard_servers, err = self.datastore:get("plugin_letsencrypt_wildcard_servers", true) - if not wildcard_servers then - return self:ret(false, "can't get wildcard servers : " .. err) - end - if wildcard_servers[server_name] then - local parts = {} - for part in server_name:gmatch("[^.]+") do - table.insert(parts, part) - end - server_name = table.concat(parts, ".", 2) - end - local data - data, err = self.datastore:get("plugin_letsencrypt_" .. server_name, true) - if not data and err ~= "not found" then - return self:ret(false, "error while getting plugin_letsencrypt_" .. server_name .. " from datastore : " .. err) - elseif data then - return self:ret(true, "certificate/key data found", data) - end - return self:ret(true, "let's encrypt is not used") + debug_log(self.logger, "SSL certificate phase started") + + local server_name, err = ssl_server_name() + if not server_name then + return self:ret(false, "can't get server_name : " .. err) + end + + debug_log(self.logger, "Processing SSL for server: " .. server_name) + + local wildcard_servers, err = self.datastore:get( + "plugin_letsencrypt_wildcard_servers", true) + if not wildcard_servers then + return self:ret(false, "can't get wildcard servers : " .. err) + end + + if wildcard_servers[server_name] then + debug_log(self.logger, + "Using wildcard certificate for " .. server_name) + + local parts = {} + for part in server_name:gmatch("[^.]+") do + table.insert(parts, part) + end + server_name = table.concat(parts, ".", 2) + end + + local data + data, err = self.datastore:get("plugin_letsencrypt_" .. 
server_name, true) + if not data and err ~= "not found" then + return self:ret(false, "error while getting plugin_letsencrypt_" .. + server_name .. " from datastore : " .. err) + elseif data then + debug_log(self.logger, + "Certificate data found for " .. server_name) + return self:ret(true, "certificate/key data found", data) + end + + debug_log(self.logger, "No certificate data found for " .. server_name) + return self:ret(true, "let's encrypt is not used") end +-- Load certificate and private key data into the datastore +-- Parses PEM certificate and private key files and caches them in the +-- datastore for quick retrieval +-- @param data Table containing certificate and key file contents +-- @param server_name The server name to associate with the certificate function letsencrypt:load_data(data, server_name) - -- Load certificate - local cert_chain, err = parse_pem_cert(data[1]) - if not cert_chain then - return false, "error while parsing pem cert : " .. err - end - -- Load key - local priv_key - priv_key, err = parse_pem_priv_key(data[2]) - if not priv_key then - return false, "error while parsing pem priv key : " .. err - end - -- Cache data - for key in server_name:gmatch("%S+") do - local cache_key = "plugin_letsencrypt_" .. key - local ok - ok, err = self.datastore:set(cache_key, { cert_chain, priv_key }, nil, true) - if not ok then - return false, "error while setting data into datastore : " .. err - end - end - return true + debug_log(self.logger, "Loading certificate data for " .. server_name) + + -- Load certificate + local cert_chain, err = parse_pem_cert(data[1]) + if not cert_chain then + return false, "error while parsing pem cert : " .. err + end + + -- Load key + local priv_key + priv_key, err = parse_pem_priv_key(data[2]) + if not priv_key then + return false, "error while parsing pem priv key : " .. 
err + end + + debug_log(self.logger, "Certificate and key parsed successfully") + + -- Cache data + for key in server_name:gmatch("%S+") do + debug_log(self.logger, "Caching certificate data for " .. key) + + local cache_key = "plugin_letsencrypt_" .. key + local ok + ok, err = self.datastore:set(cache_key, { cert_chain, priv_key }, + nil, true) + if not ok then + return false, "error while setting data into datastore : " .. err + end + end + + debug_log(self.logger, "Certificate data cached successfully") + return true end +-- Handle ACME challenge requests during certificate generation +-- Allows Let's Encrypt to access challenge files for domain validation function letsencrypt:access() - if - self.variables["LETS_ENCRYPT_PASSTHROUGH"] == "no" - and sub(self.ctx.bw.uri, 1, string.len("/.well-known/acme-challenge/")) == "/.well-known/acme-challenge/" - then - self.logger:log(NOTICE, "got a visit from Let's Encrypt, let's whitelist it") - return self:ret(true, "visit from LE", OK) - end - return self:ret(true, "success") + debug_log(self.logger, "Access phase started") + + if self.variables["LETS_ENCRYPT_PASSTHROUGH"] == "no" + and sub(self.ctx.bw.uri, 1, + string.len("/.well-known/acme-challenge/")) == + "/.well-known/acme-challenge/" + then + debug_log(self.logger, "ACME challenge request detected") + + self.logger:log(NOTICE, + "got a visit from Let's Encrypt, let's whitelist it") + return self:ret(true, "visit from LE", OK) + end + + return self:ret(true, "success") end --- luacheck: ignore 212 +-- Handle API requests for certificate challenge management +-- Provides endpoints for creating and removing ACME challenge validation +-- tokens during the certificate issuance process function letsencrypt:api() - if - not match(self.ctx.bw.uri, "^/lets%-encrypt/challenge$") - or (self.ctx.bw.request_method ~= "POST" and self.ctx.bw.request_method ~= "DELETE") - then - return self:ret(false, "success") - end - local acme_folder = 
"/var/tmp/bunkerweb/lets-encrypt/.well-known/acme-challenge/" - local ngx_req = ngx.req - ngx_req.read_body() - local ret, data = pcall(decode, ngx_req.get_body_data()) - if not ret then - return self:ret(true, "json body decoding failed", HTTP_BAD_REQUEST) - end - execute("mkdir -p " .. acme_folder) - if self.ctx.bw.request_method == "POST" then - local file, err = open(acme_folder .. data.token, "w+") - if not file then - return self:ret(true, "can't write validation token : " .. err, HTTP_INTERNAL_SERVER_ERROR) - end - file:write(data.validation) - file:close() - return self:ret(true, "validation token written", HTTP_OK) - elseif self.ctx.bw.request_method == "DELETE" then - local ok, err = remove(acme_folder .. data.token) - if not ok then - return self:ret(true, "can't remove validation token : " .. err, HTTP_INTERNAL_SERVER_ERROR) - end - return self:ret(true, "validation token removed", HTTP_OK) - end - return self:ret(true, "unknown request", HTTP_NOT_FOUND) + debug_log(self.logger, "API endpoint called") + + if not match(self.ctx.bw.uri, "^/lets%-encrypt/challenge$") + or (self.ctx.bw.request_method ~= "POST" + and self.ctx.bw.request_method ~= "DELETE") + then + debug_log(self.logger, "API request does not match expected pattern") + return self:ret(false, "success") + end + + local acme_folder = "/var/tmp/bunkerweb/lets-encrypt/" .. + ".well-known/acme-challenge/" + local ngx_req = ngx.req + ngx_req.read_body() + local ret, data = pcall(decode, ngx_req.get_body_data()) + if not ret then + debug_log(self.logger, "Failed to decode JSON body") + return self:ret(true, "json body decoding failed", HTTP_BAD_REQUEST) + end + + debug_log(self.logger, "Creating ACME challenge directory: " .. + acme_folder) + execute("mkdir -p " .. acme_folder) + + if self.ctx.bw.request_method == "POST" then + debug_log(self.logger, "Processing POST request for token: " .. + data.token) + + local file, err = open(acme_folder .. 
data.token, "w+") + if not file then + return self:ret(true, "can't write validation token : " .. err, + HTTP_INTERNAL_SERVER_ERROR) + end + file:write(data.validation) + file:close() + + debug_log(self.logger, "Validation token written successfully") + return self:ret(true, "validation token written", HTTP_OK) + + elseif self.ctx.bw.request_method == "DELETE" then + debug_log(self.logger, "Processing DELETE request for token: " .. + data.token) + + local ok, err = remove(acme_folder .. data.token) + if not ok then + return self:ret(true, "can't remove validation token : " .. err, + HTTP_INTERNAL_SERVER_ERROR) + end + + debug_log(self.logger, "Validation token removed successfully") + return self:ret(true, "validation token removed", HTTP_OK) + end + + debug_log(self.logger, "Unknown request method") + return self:ret(true, "unknown request", HTTP_NOT_FOUND) end -return letsencrypt +return letsencrypt \ No newline at end of file diff --git a/src/common/core/letsencrypt/ui/actions.py b/src/common/core/letsencrypt/ui/actions.py index b1d0ba9bbe..b1edff73af 100644 --- a/src/common/core/letsencrypt/ui/actions.py +++ b/src/common/core/letsencrypt/ui/actions.py @@ -1,7 +1,6 @@ -# -*- coding: utf-8 -*- - from io import BytesIO from logging import getLogger +from os import getenv from os.path import sep from pathlib import Path from shutil import rmtree @@ -11,23 +10,132 @@ from uuid import uuid4 from cryptography import x509 +from cryptography.x509 import oid from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes +def debug_log(logger, message): + # Log debug messages only when LOG_LEVEL environment variable is set to + # "debug" + if getenv("LOG_LEVEL") == "debug": + logger.debug(f"[DEBUG] {message}") + + def extract_cache(folder_path, cache_files): + # Extract Let's Encrypt cache files to specified folder path. 
+ logger = getLogger("UI") + is_debug = getenv("LOG_LEVEL") == "debug" + + debug_log(logger, f"Starting cache extraction to {folder_path}") + debug_log(logger, f"Processing {len(cache_files)} cache files") + debug_log(logger, f"Target folder exists: {folder_path.exists()}") + folder_path.mkdir(parents=True, exist_ok=True) + + debug_log(logger, f"Created directory structure: {folder_path}") + debug_log(logger, + f"Directory permissions: {oct(folder_path.stat().st_mode)}") - for cache_file in cache_files: - if cache_file["file_name"].endswith(".tgz") and cache_file["file_name"].startswith("folder:"): - with tar_open(fileobj=BytesIO(cache_file["data"]), mode="r:gz") as tar: - try: - tar.extractall(folder_path, filter="fully_trusted") - except TypeError: - tar.extractall(folder_path) + extracted_files = 0 + total_bytes = 0 + + for i, cache_file in enumerate(cache_files): + file_name = cache_file.get("file_name", "unknown") + file_data = cache_file.get("data", b"") + + debug_log(logger, f"Examining cache file {i+1}/{len(cache_files)}: " + f"{file_name}") + debug_log(logger, f"File size: {len(file_data)} bytes") + + if (cache_file["file_name"].endswith(".tgz") and + cache_file["file_name"].startswith("folder:")): + + debug_log(logger, + f"Processing archive: {cache_file['file_name']}") + debug_log(logger, + f"Archive size: {len(cache_file['data'])} bytes") + + try: + with tar_open(fileobj=BytesIO(cache_file["data"]), + mode="r:gz") as tar: + + members = tar.getmembers() + debug_log(logger, + f"Archive contains {len(members)} members") + # Show first few members + for j, member in enumerate(members[:5]): + debug_log(logger, + f" Member {j+1}: {member.name} " + f"({member.size} bytes, " + f"{'dir' if member.isdir() else 'file'})") + if len(members) > 5: + debug_log(logger, + f" ... 
and {len(members) - 5} more members") + + try: + tar.extractall(folder_path, filter="fully_trusted") + debug_log(logger, + "Extraction completed with fully_trusted filter") + except TypeError: + # Fallback for older Python versions without filter + debug_log(logger, + "Using fallback extraction without filter") + tar.extractall(folder_path) + + extracted_files += 1 + total_bytes += len(cache_file['data']) + + debug_log(logger, + f"Successfully extracted {cache_file['file_name']}") + debug_log(logger, + f"Extracted {len(members)} items from archive") + + except Exception as e: + logger.error(f"Failed to extract {cache_file['file_name']}: " + f"{e}") + debug_log(logger, f"Extraction error details: {format_exc()}") + else: + debug_log(logger, f"Skipping non-archive file: {file_name}") + + debug_log(logger, "Cache extraction completed:") + debug_log(logger, f" - Files processed: {len(cache_files)}") + debug_log(logger, f" - Archives extracted: {extracted_files}") + debug_log(logger, f" - Total bytes processed: {total_bytes}") + + # List final directory contents + if folder_path.exists(): + all_items = list(folder_path.rglob("*")) + files = [item for item in all_items if item.is_file()] + dirs = [item for item in all_items if item.is_dir()] + + debug_log(logger, "Final directory structure:") + debug_log(logger, f" - Total items: {len(all_items)}") + debug_log(logger, f" - Files: {len(files)}") + debug_log(logger, f" - Directories: {len(dirs)}") + + # Show some example files + for i, file_item in enumerate(files[:5]): + rel_path = file_item.relative_to(folder_path) + debug_log(logger, + f" File {i+1}: {rel_path} " + f"({file_item.stat().st_size} bytes)") + if len(files) > 5: + debug_log(logger, f" ... and {len(files) - 5} more files") -def retrieve_certificates_info(folder_paths: Tuple[Path, Path]) -> dict: +def retrieve_certificates_info(folder_paths: Tuple[Path, ...]) -> dict: + # Retrieve comprehensive certificate information from folder paths. 
+ # + # Parses Let's Encrypt certificate files and renewal configurations + # to extract detailed certificate information including validity dates, + # issuer information, and configuration details. + logger = getLogger("UI") + is_debug = getenv("LOG_LEVEL") == "debug" + + debug_log(logger, + f"Retrieving certificate info from {len(folder_paths)} folder paths") + certificates = { "domain": [], "common_name": [], @@ -41,14 +149,30 @@ def retrieve_certificates_info(folder_paths: Tuple[Path, Path]) -> dict: "challenge": [], "authenticator": [], "key_type": [], + "ocsp_support": [], } - for folder_path in folder_paths: - for cert_file in folder_path.joinpath("live").glob("*/fullchain.pem"): + total_certs_processed = 0 + + for folder_idx, folder_path in enumerate(folder_paths): + debug_log(logger, + f"Processing folder {folder_idx + 1}/{len(folder_paths)}: " + f"{folder_path}") + + cert_files = list(folder_path.joinpath("live").glob("*/fullchain.pem")) + + debug_log(logger, f"Found {len(cert_files)} certificate files in " + f"{folder_path}") + + for cert_file in cert_files: domain = cert_file.parent.name certificates["domain"].append(domain) + total_certs_processed += 1 + + debug_log(logger, + f"Processing certificate {total_certs_processed}: {domain}") - # Default values + # Initialize default certificate information cert_info = { "common_name": "Unknown", "issuer": "Unknown", @@ -62,65 +186,227 @@ def retrieve_certificates_info(folder_paths: Tuple[Path, Path]) -> dict: "challenge": "Unknown", "authenticator": "Unknown", "key_type": "Unknown", + "ocsp_support": "Unknown", } - # * Parsing the certificate + # Parse the certificate file try: - cert = x509.load_pem_x509_certificate(cert_file.read_bytes(), default_backend()) + debug_log(logger, + f"Loading X.509 certificate from {cert_file}") + debug_log(logger, + f"Certificate file size: {cert_file.stat().st_size} bytes") + + cert_bytes = cert_file.read_bytes() + debug_log(logger, + f"Read {len(cert_bytes)} bytes from 
certificate file") + debug_log(logger, + f"Certificate data preview: {cert_bytes[:100]}...") + + cert = x509.load_pem_x509_certificate( + cert_bytes, default_backend() + ) + + debug_log(logger, + f"Successfully loaded certificate for {domain}") + debug_log(logger, f"Certificate version: {cert.version}") + debug_log(logger, f"Certificate serial: {cert.serial_number}") - # ? Getting the subject - subject = cert.subject.get_attributes_for_oid(x509.NameOID.COMMON_NAME) + # Extract subject (Common Name) + subject = cert.subject.get_attributes_for_oid( + x509.NameOID.COMMON_NAME + ) if subject: cert_info["common_name"] = subject[0].value + debug_log(logger, + f"Certificate CN: {cert_info['common_name']}") + else: + debug_log(logger, + "No Common Name found in certificate subject") + debug_log(logger, f"Full subject: {cert.subject}") - # ? Getting the issuer - issuer = cert.issuer.get_attributes_for_oid(x509.NameOID.COMMON_NAME) + # Extract issuer (Certificate Authority) + issuer = cert.issuer.get_attributes_for_oid( + x509.NameOID.COMMON_NAME + ) if issuer: cert_info["issuer"] = issuer[0].value + debug_log(logger, + f"Certificate issuer: {cert_info['issuer']}") + else: + debug_log(logger, + "No Common Name found in certificate issuer") + debug_log(logger, f"Full issuer: {cert.issuer}") - # ? 
Getting the validity period
-            cert_info["valid_from"] = cert.not_valid_before.strftime("%d-%m-%Y %H:%M:%S UTC")
-            cert_info["valid_to"] = cert.not_valid_after.strftime("%d-%m-%Y %H:%M:%S UTC")
+                # Extract validity period
+                cert_info["valid_from"] = (
+                    cert.not_valid_before.strftime("%d-%m-%Y %H:%M:%S UTC")
+                )
+                cert_info["valid_to"] = (
+                    cert.not_valid_after.strftime("%d-%m-%Y %H:%M:%S UTC")
+                )
+
+                debug_log(logger,
+                          f"Certificate validity: {cert_info['valid_from']} to "
+                          f"{cert_info['valid_to']}")
+                # Check if certificate is currently valid.
+                # cert.not_valid_before/after return naive UTC datetimes,
+                # so compare against a naive UTC "now"; mixing naive and
+                # timezone-aware datetimes raises a TypeError.
+                from datetime import datetime, timezone
+                now = datetime.now(timezone.utc).replace(tzinfo=None)
+                is_valid = (cert.not_valid_before <= now <=
+                            cert.not_valid_after)
+                debug_log(logger, f"Certificate currently valid: {is_valid}")

-            # ? Getting the serial number
+                # Extract serial number
                 cert_info["serial_number"] = str(cert.serial_number)

-            # ? Getting the fingerprint
-            cert_info["fingerprint"] = cert.fingerprint(hashes.SHA256()).hex()
+                # Calculate fingerprint
+                fingerprint_bytes = cert.fingerprint(hashes.SHA256())
+                cert_info["fingerprint"] = fingerprint_bytes.hex()
+
+                debug_log(logger,
+                          f"Certificate fingerprint: {cert_info['fingerprint']}")

-            # ? 
Getting the version + # Extract version cert_info["version"] = cert.version.name - except BaseException: - print(f"Error while parsing certificate {cert_file}: {format_exc()}", flush=True) + + # Check for OCSP support via Authority Information Access ext + try: + aia_ext = cert.extensions.get_extension_for_oid( + oid.ExtensionOID.AUTHORITY_INFORMATION_ACCESS + ) + ocsp_urls = [] + for access_description in aia_ext.value: + if (access_description.access_method == + oid.AuthorityInformationAccessOID.OCSP): + ocsp_urls.append( + str(access_description.access_location.value) + ) + + if ocsp_urls: + cert_info["ocsp_support"] = "Yes" + debug_log(logger, + f"OCSP URLs found for {domain}: {ocsp_urls}") + else: + cert_info["ocsp_support"] = "No" + debug_log(logger, + f"AIA extension found for {domain} but no OCSP URLs") + + except x509.ExtensionNotFound: + cert_info["ocsp_support"] = "No" + debug_log(logger, + f"No Authority Information Access extension found for " + f"{domain}") + except Exception as ocsp_error: + cert_info["ocsp_support"] = "Unknown" + debug_log(logger, + f"Error checking OCSP support for {domain}: " + f"{ocsp_error}") + + debug_log(logger, + f"Certificate processing completed for {domain}") + debug_log(logger, f" - Serial: {cert_info['serial_number']}") + debug_log(logger, f" - Version: {cert_info['version']}") + debug_log(logger, f" - Subject: {cert_info['common_name']}") + debug_log(logger, f" - Issuer: {cert_info['issuer']}") + debug_log(logger, + f" - OCSP Support: {cert_info['ocsp_support']}") + + except BaseException as e: + error_msg = (f"Error while parsing certificate {cert_file}: " + f"{e}") + logger.error(error_msg) + debug_log(logger, "Certificate parsing error details:") + debug_log(logger, f" - Error type: {type(e).__name__}") + debug_log(logger, f" - Error message: {str(e)}") + debug_log(logger, f" - Full traceback: {format_exc()}") + debug_log(logger, f" - Certificate file: {cert_file}") + debug_log(logger, f" - File exists: 
{cert_file.exists()}") + debug_log(logger, f" - File readable: {cert_file.is_file()}") - # * Parsing the renewal configuration + # Parse the renewal configuration file try: - renewal_file = folder_path.joinpath("renewal", f"{domain}.conf") + renewal_file = folder_path.joinpath("renewal", + f"{domain}.conf") + if renewal_file.exists(): + debug_log(logger, + f"Processing renewal configuration: {renewal_file}") + with renewal_file.open("r") as f: - for line in f: + for line_num, line in enumerate(f, 1): + line = line.strip() + if not line or line.startswith("#"): + continue + if line.startswith("preferred_profile = "): - cert_info["preferred_profile"] = line.split(" = ")[1].strip() + cert_info["preferred_profile"] = ( + line.split(" = ")[1].strip() + ) elif line.startswith("pref_challs = "): - cert_info["challenge"] = line.split(" = ")[1].strip().split(",")[0] + # Take first challenge from comma-separated list + challenges = line.split(" = ")[1].strip() + cert_info["challenge"] = challenges.split(",")[0] elif line.startswith("authenticator = "): - cert_info["authenticator"] = line.split(" = ")[1].strip() + cert_info["authenticator"] = ( + line.split(" = ")[1].strip() + ) elif line.startswith("server = "): - cert_info["issuer_server"] = line.split(" = ")[1].strip() + cert_info["issuer_server"] = ( + line.split(" = ")[1].strip() + ) elif line.startswith("key_type = "): - cert_info["key_type"] = line.split(" = ")[1].strip() - except BaseException: - print(f"Error while parsing renewal configuration {renewal_file}: {format_exc()}", flush=True) + cert_info["key_type"] = ( + line.split(" = ")[1].strip() + ) + + debug_log(logger, + f"Renewal config parsed - Profile: " + f"{cert_info['preferred_profile']}, " + f"Challenge: {cert_info['challenge']}, " + f"Key type: {cert_info['key_type']}") + else: + debug_log(logger, + f"No renewal configuration found for {domain}") + + except BaseException as e: + error_msg = (f"Error while parsing renewal configuration " + f"{renewal_file}: 
{e}") + logger.error(error_msg) + debug_log(logger, f"Renewal config parsing error: " + f"{format_exc()}") - # Append values to corresponding lists in certificates dictionary + # Append all certificate information to the results for key in cert_info: certificates[key].append(cert_info[key]) + debug_log(logger, + f"Certificate retrieval complete. Processed " + f"{total_certs_processed} certificates from {len(folder_paths)} " + f"folders") + + # Summary of OCSP support + ocsp_support_counts = {"Yes": 0, "No": 0, "Unknown": 0} + for ocsp_status in certificates.get('ocsp_support', []): + ocsp_support_counts[ocsp_status] = ( + ocsp_support_counts.get(ocsp_status, 0) + 1 + ) + debug_log(logger, f"OCSP support summary: {ocsp_support_counts}") + return certificates def pre_render(app, *args, **kwargs): + # Pre-render function to prepare Let's Encrypt certificate data for UI. + # + # Retrieves certificate information from database cache files and + # prepares the data structure for rendering in the web interface. + # Handles extraction of cache files, certificate parsing, and error + # handling for the certificate management interface. logger = getLogger("UI") + is_debug = getenv("LOG_LEVEL") == "debug" + + debug_log(logger, "Starting pre-render for Let's Encrypt certificates") + + # Initialize return structure with default values ret = { "list_certificates": { "data": { @@ -137,6 +423,7 @@ def pre_render(app, *args, **kwargs): "challenge": [], "authenticator": [], "key_type": [], + "ocsp_support": [], }, "order": { "column": 5, @@ -148,23 +435,107 @@ def pre_render(app, *args, **kwargs): } root_folder = Path(sep, "var", "tmp", "bunkerweb", "ui") + folder_path = None + try: - # ? 
Fetching Let's Encrypt cache files - regular_cache_files = kwargs["db"].get_jobs_cache_files(job_name="certbot-renew") + debug_log(logger, "Starting Let's Encrypt data retrieval process") + debug_log(logger, f"Database connection available: {'db' in kwargs}") + debug_log(logger, f"Root folder: {root_folder}") + + # Retrieve cache files from database + debug_log(logger, + "Fetching cache files from database for job: certbot-renew") + + regular_cache_files = kwargs["db"].get_jobs_cache_files( + job_name="certbot-renew" + ) + + debug_log(logger, f"Retrieved {len(regular_cache_files)} cache files") + for i, cache_file in enumerate(regular_cache_files): + file_name = cache_file.get("file_name", "unknown") + file_size = len(cache_file.get("data", b"")) + debug_log(logger, + f" Cache file {i+1}: {file_name} ({file_size} bytes)") - # ? Extracting cache files - folder_path = root_folder.joinpath("letsencrypt", str(uuid4())) + # Create unique temporary folder for extraction + folder_uuid = str(uuid4()) + folder_path = root_folder.joinpath("letsencrypt", folder_uuid) regular_le_folder = folder_path.joinpath("regular") + + debug_log(logger, f"Using temporary folder UUID: {folder_uuid}") + debug_log(logger, f"Temporary folder path: {folder_path}") + debug_log(logger, f"Regular LE folder: {regular_le_folder}") + + # Extract cache files to temporary location + debug_log(logger, "Starting cache file extraction") + extract_cache(regular_le_folder, regular_cache_files) + + debug_log(logger, + "Cache extraction completed, starting certificate parsing") - # ? 
We retrieve the certificates from the cache files by parsing the content of the .pem files - ret["list_certificates"]["data"] = retrieve_certificates_info((regular_le_folder,)) + # Parse certificates and retrieve information + cert_data = retrieve_certificates_info((regular_le_folder,)) + + cert_count = len(cert_data.get("domain", [])) + + debug_log(logger, "Certificate parsing completed") + debug_log(logger, f"Total certificates processed: {cert_count}") + debug_log(logger, f"Certificate data keys: {list(cert_data.keys())}") + + # Log sample certificate data (first certificate if available) + if cert_count > 0: + debug_log(logger, + "Sample certificate data (first certificate):") + for key in cert_data: + value = cert_data[key][0] if cert_data[key] else "None" + if key == "ocsp_support": + debug_log(logger, + f" {key}: {value} (OCSP support detected)") + else: + debug_log(logger, f" {key}: {value}") + + ret["list_certificates"]["data"] = cert_data + + logger.info(f"Pre-render completed successfully with {cert_count} " + f"certificates") + + debug_log(logger, f"Return data structure keys: {list(ret.keys())}") + debug_log(logger, + f"Certificate list structure: " + f"{list(ret['list_certificates'].keys())}") + except BaseException as e: - logger.debug(format_exc()) - logger.error(f"Failed to get Let's Encrypt certificates: {e}") - ret["error"] = str(e) + error_msg = f"Failed to get Let's Encrypt certificates: {e}" + logger.error(error_msg) + + debug_log(logger, "Pre-render error occurred:") + debug_log(logger, f" - Error type: {type(e).__name__}") + debug_log(logger, f" - Error message: {str(e)}") + debug_log(logger, f" - Error traceback: {format_exc()}") + debug_log(logger, + f" - kwargs keys: {list(kwargs.keys()) if kwargs else 'None'}") + if "db" in kwargs: + debug_log(logger, + f" - Database object type: {type(kwargs['db'])}") + + ret["error"] = {"message": str(e)} + finally: - if folder_path: - rmtree(root_folder, ignore_errors=True) + # Clean up temporary 
files + if folder_path and folder_path.exists(): + try: + debug_log(logger, + f"Cleaning up temporary folder: {root_folder}") + + rmtree(root_folder, ignore_errors=True) + + debug_log(logger, "Temporary folder cleanup completed") + + except Exception as cleanup_error: + logger.warning(f"Failed to clean up temporary folder " + f"{root_folder}: {cleanup_error}") + + debug_log(logger, "Pre-render function completed") - return ret + return ret \ No newline at end of file diff --git a/src/common/core/letsencrypt/ui/blueprints/letsencrypt.py b/src/common/core/letsencrypt/ui/blueprints/letsencrypt.py index 3ae7764062..f1f797b7b7 100644 --- a/src/common/core/letsencrypt/ui/blueprints/letsencrypt.py +++ b/src/common/core/letsencrypt/ui/blueprints/letsencrypt.py @@ -8,6 +8,7 @@ from traceback import format_exc from cryptography import x509 +from cryptography.x509 import oid from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes from flask import Blueprint, render_template, request, jsonify @@ -26,7 +27,9 @@ template_folder=f"{blueprint_path}/templates", ) -CERTBOT_BIN = join(sep, "usr", "share", "bunkerweb", "deps", "python", "bin", "certbot") +CERTBOT_BIN = join( + sep, "usr", "share", "bunkerweb", "deps", "python", "bin", "certbot" +) LE_CACHE_DIR = join(sep, "var", "cache", "bunkerweb", "letsencrypt", "etc") DATA_PATH = join(sep, "var", "tmp", "bunkerweb", "ui", "letsencrypt", "etc") WORK_DIR = join(sep, "var", "tmp", "bunkerweb", "ui", "letsencrypt", "lib") @@ -35,22 +38,99 @@ DEPS_PATH = join(sep, "usr", "share", "bunkerweb", "deps", "python") +def debug_log(logger, message): + # Log debug messages only when LOG_LEVEL environment variable is set to + # "debug" + if getenv("LOG_LEVEL") == "debug": + logger.debug(f"[DEBUG] {message}") + + def download_certificates(): - rmtree(DATA_PATH, ignore_errors=True) + # Download and extract Let's Encrypt certificates from database cache. 
+ # + # Retrieves certificate cache files from the database and extracts them + # to the local data path for processing. + is_debug = getenv("LOG_LEVEL") == "debug" + + debug_log(LOGGER, "Starting certificate download process") + debug_log(LOGGER, f"Target directory: {DATA_PATH}") + debug_log(LOGGER, f"Cache directory: {LE_CACHE_DIR}") + + # Clean up and create fresh directory + if Path(DATA_PATH).exists(): + debug_log(LOGGER, f"Removing existing directory: {DATA_PATH}") + rmtree(DATA_PATH, ignore_errors=True) + + debug_log(LOGGER, f"Creating directory structure: {DATA_PATH}") Path(DATA_PATH).mkdir(parents=True, exist_ok=True) + debug_log(LOGGER, "Fetching cache files from database") cache_files = DB.get_jobs_cache_files(job_name="certbot-renew") + + debug_log(LOGGER, f"Retrieved {len(cache_files)} cache files") + for i, cache_file in enumerate(cache_files): + debug_log(LOGGER, f"Cache file {i+1}: {cache_file['file_name']} " + f"({len(cache_file.get('data', b''))} bytes)") + extracted_count = 0 for cache_file in cache_files: - if cache_file["file_name"].endswith(".tgz") and cache_file["file_name"].startswith("folder:"): - with tar_open(fileobj=BytesIO(cache_file["data"]), mode="r:gz") as tar: - try: - tar.extractall(DATA_PATH, filter="fully_trusted") - except TypeError: - tar.extractall(DATA_PATH) + if (cache_file["file_name"].endswith(".tgz") and + cache_file["file_name"].startswith("folder:")): + + debug_log(LOGGER, + f"Extracting cache file: {cache_file['file_name']}") + debug_log(LOGGER, f"File size: {len(cache_file['data'])} bytes") + + try: + with tar_open(fileobj=BytesIO(cache_file["data"]), + mode="r:gz") as tar: + member_count = len(tar.getmembers()) + debug_log(LOGGER, + f"Archive contains {member_count} members") + + try: + tar.extractall(DATA_PATH, filter="fully_trusted") + debug_log(LOGGER, + "Extraction completed with fully_trusted filter") + except TypeError: + debug_log(LOGGER, + "Falling back to extraction without filter") + 
tar.extractall(DATA_PATH) + + extracted_count += 1 + debug_log(LOGGER, + f"Successfully extracted {cache_file['file_name']}") + + except Exception as e: + LOGGER.error(f"Failed to extract {cache_file['file_name']}: " + f"{e}") + debug_log(LOGGER, f"Extraction error details: {format_exc()}") + else: + debug_log(LOGGER, + f"Skipping non-matching file: {cache_file['file_name']}") + + debug_log(LOGGER, + f"Certificate download completed: {extracted_count} files extracted") + # List extracted directory contents + if Path(DATA_PATH).exists(): + contents = list(Path(DATA_PATH).rglob("*")) + debug_log(LOGGER, f"Extracted directory contains {len(contents)} items") + for item in contents[:10]: # Show first 10 items + debug_log(LOGGER, f" - {item}") + if len(contents) > 10: + debug_log(LOGGER, f" ... and {len(contents) - 10} more items") def retrieve_certificates(): + # Retrieve and parse Let's Encrypt certificate information. + # + # Downloads certificates from cache and parses both the certificate + # files and renewal configuration to extract comprehensive certificate + # information. 
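Reviewer's aside: the renewal-configuration parsing that `retrieve_certificates` performs below reduces to splitting `key = value` lines, skipping blanks and comments, and keeping only the first entry of the comma-separated `pref_challs` list. A standalone sketch against a hypothetical certbot renewal file (the sample contents are invented, not taken from a real deployment):

```python
# Hypothetical certbot renewal file; keys mirror the ones the blueprint
# looks for (authenticator, server, pref_challs, key_type).
SAMPLE_CONF = """\
# renew_before_expiry = 30 days
version = 2.11.0
authenticator = webroot
server = https://acme-v02.api.letsencrypt.org/directory
pref_challs = http-01,dns-01
key_type = ecdsa
"""


def parse_renewal_conf(text):
    info = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and comments, as the patch does
        if not line or line.startswith("#") or " = " not in line:
            continue
        key, value = line.split(" = ", 1)
        if key == "pref_challs":
            # Only the first challenge in the comma-separated list is kept
            info["challenge"] = value.strip().split(",")[0]
        else:
            info[key] = value.strip()
    return info


info = parse_renewal_conf(SAMPLE_CONF)
```

The `split(" = ", 1)` mirrors the patch's `line.split(" = ")[1]` while tolerating values that themselves contain ` = `.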
+ is_debug = getenv("LOG_LEVEL") == "debug" + + debug_log(LOGGER, "Starting certificate retrieval") + download_certificates() certificates = { @@ -67,11 +147,23 @@ def retrieve_certificates(): "challenge": [], "authenticator": [], "key_type": [], + "ocsp_support": [], } - for cert_file in Path(DATA_PATH).joinpath("live").glob("*/fullchain.pem"): + cert_files = list(Path(DATA_PATH).joinpath("live").glob("*/fullchain.pem")) + + debug_log(LOGGER, f"Processing {len(cert_files)} certificate files") + + for cert_file in cert_files: domain = cert_file.parent.name certificates["domain"].append(domain) + + debug_log(LOGGER, + f"Processing certificate {len(certificates['domain'])}: {domain}") + debug_log(LOGGER, f"Certificate file path: {cert_file}") + debug_log(LOGGER, + f"Certificate file size: {cert_file.stat().st_size} bytes") + cert_info = { "common_name": "Unknown", "issuer": "Unknown", @@ -85,52 +177,209 @@ def retrieve_certificates(): "challenge": "Unknown", "authenticator": "Unknown", "key_type": "Unknown", + "ocsp_support": "Unknown", } + try: - cert = x509.load_pem_x509_certificate(cert_file.read_bytes(), default_backend()) - subject = cert.subject.get_attributes_for_oid(x509.NameOID.COMMON_NAME) + debug_log(LOGGER, f"Loading X.509 certificate from {cert_file}") + + cert_data = cert_file.read_bytes() + debug_log(LOGGER, + f"Certificate data length: {len(cert_data)} bytes") + debug_log(LOGGER, + f"Certificate starts with: {cert_data[:50]}") + + cert = x509.load_pem_x509_certificate( + cert_data, default_backend() + ) + + debug_log(LOGGER, + f"Successfully loaded certificate for {domain}") + debug_log(LOGGER, f"Certificate subject: {cert.subject}") + debug_log(LOGGER, f"Certificate issuer: {cert.issuer}") + + subject = cert.subject.get_attributes_for_oid( + x509.NameOID.COMMON_NAME + ) if subject: cert_info["common_name"] = subject[0].value - issuer = cert.issuer.get_attributes_for_oid(x509.NameOID.COMMON_NAME) + debug_log(LOGGER, + f"Certificate CN extracted: 
{cert_info['common_name']}") + else: + debug_log(LOGGER, + "No Common Name found in certificate subject") + + issuer = cert.issuer.get_attributes_for_oid( + x509.NameOID.COMMON_NAME + ) if issuer: cert_info["issuer"] = issuer[0].value - cert_info["valid_from"] = cert.not_valid_before.astimezone().isoformat() - cert_info["valid_to"] = cert.not_valid_after.astimezone().isoformat() + debug_log(LOGGER, + f"Certificate issuer extracted: {cert_info['issuer']}") + else: + debug_log(LOGGER, + "No Common Name found in certificate issuer") + + cert_info["valid_from"] = ( + cert.not_valid_before.astimezone().isoformat() + ) + cert_info["valid_to"] = ( + cert.not_valid_after.astimezone().isoformat() + ) + + debug_log(LOGGER, + f"Certificate validity period: {cert_info['valid_from']} to " + f"{cert_info['valid_to']}") + cert_info["serial_number"] = str(cert.serial_number) cert_info["fingerprint"] = cert.fingerprint(hashes.SHA256()).hex() cert_info["version"] = cert.version.name + + # Check for OCSP support via Authority Information Access extension + try: + aia_ext = cert.extensions.get_extension_for_oid( + oid.ExtensionOID.AUTHORITY_INFORMATION_ACCESS + ) + ocsp_urls = [] + for access_description in aia_ext.value: + if (access_description.access_method == + oid.AuthorityInformationAccessOID.OCSP): + ocsp_urls.append( + str(access_description.access_location.value) + ) + + if ocsp_urls: + cert_info["ocsp_support"] = "Yes" + debug_log(LOGGER, f"OCSP URLs found: {ocsp_urls}") + else: + cert_info["ocsp_support"] = "No" + debug_log(LOGGER, + "AIA extension found but no OCSP URLs") + + except x509.ExtensionNotFound: + cert_info["ocsp_support"] = "No" + debug_log(LOGGER, + "No Authority Information Access extension found") + except Exception as ocsp_error: + cert_info["ocsp_support"] = "Unknown" + debug_log(LOGGER, f"Error checking OCSP support: {ocsp_error}") + + debug_log(LOGGER, "Certificate details extracted:") + debug_log(LOGGER, f" - Serial: {cert_info['serial_number']}") + 
debug_log(LOGGER, + f" - Fingerprint: {cert_info['fingerprint'][:16]}...") + debug_log(LOGGER, f" - Version: {cert_info['version']}") + debug_log(LOGGER, + f" - OCSP Support: {cert_info['ocsp_support']}") + except BaseException as e: LOGGER.debug(format_exc()) LOGGER.error(f"Error while parsing certificate {cert_file}: {e}") + debug_log(LOGGER, + f"Certificate parsing failed for {domain}: {str(e)}") + debug_log(LOGGER, f"Error type: {type(e).__name__}") try: - renewal_file = Path(DATA_PATH).joinpath("renewal", f"{domain}.conf") + renewal_file = Path(DATA_PATH).joinpath("renewal", + f"{domain}.conf") if renewal_file.exists(): + debug_log(LOGGER, f"Processing renewal file: {renewal_file}") + debug_log(LOGGER, + f"Renewal file size: {renewal_file.stat().st_size} bytes") + + config_lines_processed = 0 with renewal_file.open("r") as f: - for line in f: + for line_num, line in enumerate(f, 1): + line = line.strip() + if not line or line.startswith("#"): + continue + + config_lines_processed += 1 + if is_debug and line_num <= 10: # Debug first 10 lines + debug_log(LOGGER, + f"Renewal config line {line_num}: {line}") + if line.startswith("preferred_profile = "): - cert_info["preferred_profile"] = line.split(" = ")[1].strip() + cert_info["preferred_profile"] = ( + line.split(" = ")[1].strip() + ) + debug_log(LOGGER, + f"Found preferred_profile: " + f"{cert_info['preferred_profile']}") elif line.startswith("pref_challs = "): - cert_info["challenge"] = line.split(" = ")[1].strip().split(",")[0] + challenges = line.split(" = ")[1].strip() + cert_info["challenge"] = challenges.split(",")[0] + debug_log(LOGGER, + f"Found challenge: {cert_info['challenge']} " + f"(from {challenges})") elif line.startswith("authenticator = "): - cert_info["authenticator"] = line.split(" = ")[1].strip() + cert_info["authenticator"] = ( + line.split(" = ")[1].strip() + ) + debug_log(LOGGER, + f"Found authenticator: " + f"{cert_info['authenticator']}") elif line.startswith("server = "): - 
cert_info["issuer_server"] = line.split(" = ")[1].strip() + cert_info["issuer_server"] = ( + line.split(" = ")[1].strip() + ) + debug_log(LOGGER, + f"Found issuer_server: " + f"{cert_info['issuer_server']}") elif line.startswith("key_type = "): - cert_info["key_type"] = line.split(" = ")[1].strip() + cert_info["key_type"] = ( + line.split(" = ")[1].strip() + ) + debug_log(LOGGER, + f"Found key_type: {cert_info['key_type']}") + + debug_log(LOGGER, + f"Processed {config_lines_processed} configuration lines") + debug_log(LOGGER, + f"Final renewal configuration for {domain}:") + debug_log(LOGGER, + f" - Profile: {cert_info['preferred_profile']}") + debug_log(LOGGER, f" - Challenge: {cert_info['challenge']}") + debug_log(LOGGER, + f" - Authenticator: {cert_info['authenticator']}") + debug_log(LOGGER, + f" - Server: {cert_info['issuer_server']}") + debug_log(LOGGER, f" - Key type: {cert_info['key_type']}") + else: + debug_log(LOGGER, + f"No renewal file found for {domain} at {renewal_file}") + except BaseException as e: LOGGER.debug(format_exc()) - LOGGER.error(f"Error while parsing renewal configuration {renewal_file}: {e}") + LOGGER.error(f"Error while parsing renewal configuration " + f"{renewal_file}: {e}") + debug_log(LOGGER, + f"Renewal config parsing failed for {domain}: {str(e)}") + debug_log(LOGGER, f"Error type: {type(e).__name__}") for key in cert_info: certificates[key].append(cert_info[key]) + debug_log(LOGGER, f"Retrieved {len(certificates['domain'])} certificates") + # Summary of OCSP support + ocsp_support_counts = {"Yes": 0, "No": 0, "Unknown": 0} + for ocsp_status in certificates.get('ocsp_support', []): + ocsp_support_counts[ocsp_status] = ( + ocsp_support_counts.get(ocsp_status, 0) + 1 + ) + debug_log(LOGGER, f"OCSP support summary: {ocsp_support_counts}") + return certificates @letsencrypt.route("/letsencrypt", methods=["GET"]) @login_required def letsencrypt_page(): + # Render the Let's Encrypt certificates management page. 
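Reviewer's aside: the `/letsencrypt/fetch` endpoint below flips the column-oriented dict of lists built by `retrieve_certificates` into the row-oriented payload that DataTables' server-side protocol expects (`data`, `recordsTotal`, `recordsFiltered`, `draw`). A minimal sketch with made-up sample values (only three columns shown for brevity):

```python
# Invented sample data in the same dict-of-parallel-lists shape the
# blueprint's retrieve_certificates() returns.
certs = {
    "domain": ["example.com", "app.example.com"],
    "issuer": ["R11", "R11"],
    "key_type": ["ecdsa", "rsa"],
}


def to_datatables(certs, draw=1):
    # One row per index of the "domain" list; every column list is assumed
    # to have the same length, as the blueprint guarantees by appending to
    # all lists for each certificate.
    rows = [
        {key: certs[key][i] for key in certs}
        for i in range(len(certs.get("domain", [])))
    ]
    return {
        "data": rows,
        "recordsTotal": len(rows),
        "recordsFiltered": len(rows),
        "draw": draw,
    }


payload = to_datatables(certs)
```

Since the endpoint does no server-side filtering or paging, `recordsTotal` and `recordsFiltered` are always equal here.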
+ is_debug = getenv("LOG_LEVEL") == "debug" + + debug_log(LOGGER, "Rendering Let's Encrypt page") + return render_template("letsencrypt.html") @@ -138,55 +387,91 @@ def letsencrypt_page(): @login_required @cors_required def letsencrypt_fetch(): + # Fetch certificate data for DataTables AJAX requests. + # + # Retrieves and formats certificate information for display in the + # DataTables interface. + is_debug = getenv("LOG_LEVEL") == "debug" + + debug_log(LOGGER, "Fetching certificates for DataTables") + cert_list = [] try: certs = retrieve_certificates() - LOGGER.debug(f"Certificates: {certs}") + + debug_log(LOGGER, f"Retrieved certificates: " + f"{len(certs.get('domain', []))}") + for i, domain in enumerate(certs.get("domain", [])): - cert_list.append( - { - "domain": domain, - "common_name": certs.get("common_name", [""])[i], - "issuer": certs.get("issuer", [""])[i], - "issuer_server": certs.get("issuer_server", [""])[i], - "valid_from": certs.get("valid_from", [""])[i], - "valid_to": certs.get("valid_to", [""])[i], - "serial_number": certs.get("serial_number", [""])[i], - "fingerprint": certs.get("fingerprint", [""])[i], - "version": certs.get("version", [""])[i], - "preferred_profile": certs.get("preferred_profile", [""])[i], - "challenge": certs.get("challenge", [""])[i], - "authenticator": certs.get("authenticator", [""])[i], - "key_type": certs.get("key_type", [""])[i], - } - ) + cert_data = { + "domain": domain, + "common_name": certs.get("common_name", [""])[i], + "issuer": certs.get("issuer", [""])[i], + "issuer_server": certs.get("issuer_server", [""])[i], + "valid_from": certs.get("valid_from", [""])[i], + "valid_to": certs.get("valid_to", [""])[i], + "serial_number": certs.get("serial_number", [""])[i], + "fingerprint": certs.get("fingerprint", [""])[i], + "version": certs.get("version", [""])[i], + "preferred_profile": certs.get("preferred_profile", [""])[i], + "challenge": certs.get("challenge", [""])[i], + "authenticator": certs.get("authenticator", 
[""])[i], + "key_type": certs.get("key_type", [""])[i], + "ocsp_support": certs.get("ocsp_support", [""])[i], + } + cert_list.append(cert_data) + + debug_log(LOGGER, f"Added certificate to list: {domain}") + debug_log(LOGGER, + f" - OCSP Support: {cert_data['ocsp_support']}") + debug_log(LOGGER, f" - Challenge: {cert_data['challenge']}") + debug_log(LOGGER, f" - Key Type: {cert_data['key_type']}") + except BaseException as e: LOGGER.debug(format_exc()) LOGGER.error(f"Error while fetching certificates: {e}") - return jsonify( - { - "data": cert_list, - "recordsTotal": len(cert_list), - "recordsFiltered": len(cert_list), - "draw": int(request.form.get("draw", 1)), - } - ) + response_data = { + "data": cert_list, + "recordsTotal": len(cert_list), + "recordsFiltered": len(cert_list), + "draw": int(request.form.get("draw", 1)), + } + + debug_log(LOGGER, f"Returning {len(cert_list)} certificates to DataTables") + + return jsonify(response_data) @letsencrypt.route("/letsencrypt/delete", methods=["POST"]) @login_required @cors_required def letsencrypt_delete(): + # Delete a Let's Encrypt certificate. + # + # Removes the specified certificate using certbot and cleans up + # associated files and directories. Updates the database cache + # with the modified certificate data. 
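Reviewer's aside: after deleting a certificate, the handler below re-packs the on-disk tree into an in-memory gzipped tarball before handing it to `upsert_job_cache`. That round-trip, including the `filter="fully_trusted"` / `TypeError` fallback used throughout this patch, can be sketched with throwaway temp directories (paths here are illustrative, not BunkerWeb's cache layout):

```python
import tarfile
from io import BytesIO
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as src, TemporaryDirectory() as dst:
    # Stand-in for the extracted letsencrypt tree
    Path(src, "renewal").mkdir()
    Path(src, "renewal", "example.com.conf").write_text("key_type = ecdsa\n")

    # Pack the whole directory into an in-memory .tgz, as the cache
    # update does before calling upsert_job_cache
    content = BytesIO()
    with tarfile.open(mode="w:gz", fileobj=content, compresslevel=9) as tgz:
        tgz.add(src, arcname=".")

    # Unpack it again, preferring the extraction filter when available
    content.seek(0)
    with tarfile.open(fileobj=content, mode="r:gz") as tar:
        try:
            tar.extractall(dst, filter="fully_trusted")
        except TypeError:  # older Pythons lack the filter argument
            tar.extractall(dst)

    restored = Path(dst, "renewal", "example.com.conf").read_text()
```

The `TypeError` fallback matches the patch's pattern for Pythons that predate the `filter` keyword on `extractall`.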
+ is_debug = getenv("LOG_LEVEL") == "debug" + cert_name = request.json.get("cert_name") if not cert_name: + debug_log(LOGGER, "Certificate deletion request missing cert_name") return jsonify({"status": "ko", "message": "Missing cert_name"}), 400 + debug_log(LOGGER, f"Starting deletion of certificate: {cert_name}") + download_certificates() env = {"PATH": getenv("PATH", ""), "PYTHONPATH": getenv("PYTHONPATH", "")} - env["PYTHONPATH"] = env["PYTHONPATH"] + (f":{DEPS_PATH}" if DEPS_PATH not in env["PYTHONPATH"] else "") + env["PYTHONPATH"] = env["PYTHONPATH"] + ( + f":{DEPS_PATH}" if DEPS_PATH not in env["PYTHONPATH"] else "" + ) + + debug_log(LOGGER, f"Running certbot delete for {cert_name}") + debug_log(LOGGER, f"Environment: PATH={env['PATH'][:100]}...") + debug_log(LOGGER, f"PYTHONPATH: {env['PYTHONPATH'][:100]}...") delete_proc = run( [ @@ -210,11 +495,20 @@ def letsencrypt_delete(): check=False, ) + debug_log(LOGGER, f"Certbot delete return code: {delete_proc.returncode}") + if delete_proc.stdout: + debug_log(LOGGER, f"Certbot output: {delete_proc.stdout}") + if delete_proc.returncode == 0: LOGGER.info(f"Successfully deleted certificate {cert_name}") + + # Clean up certificate directories and files cert_dir = Path(DATA_PATH).joinpath("live", cert_name) archive_dir = Path(DATA_PATH).joinpath("archive", cert_name) - renewal_file = Path(DATA_PATH).joinpath("renewal", f"{cert_name}.conf") + renewal_file = Path(DATA_PATH).joinpath("renewal", + f"{cert_name}.conf") + + debug_log(LOGGER, f"Cleaning up directories for {cert_name}") for path in (cert_dir, archive_dir): if path.exists(): @@ -222,8 +516,10 @@ def letsencrypt_delete(): for file in path.glob("*"): try: file.unlink() + debug_log(LOGGER, f"Removed file: {file}") except Exception as e: - LOGGER.error(f"Failed to remove file {file}: {e}") + LOGGER.error(f"Failed to remove file {file}: " + f"{e}") path.rmdir() LOGGER.info(f"Removed directory {path}") except Exception as e: @@ -233,36 +529,62 @@ def 
letsencrypt_delete(): try: renewal_file.unlink() LOGGER.info(f"Removed renewal file {renewal_file}") + debug_log(LOGGER, f"Renewal file removed: {renewal_file}") except Exception as e: - LOGGER.error(f"Failed to remove renewal file {renewal_file}: {e}") + LOGGER.error(f"Failed to remove renewal file " + f"{renewal_file}: {e}") + # Update database cache with modified certificate data try: + debug_log(LOGGER, "Updating database cache with modified data") + dir_path = Path(LE_CACHE_DIR) file_name = f"folder:{dir_path.as_posix()}.tgz" content = BytesIO() - with tar_open(file_name, mode="w:gz", fileobj=content, compresslevel=9) as tgz: + + with tar_open(file_name, mode="w:gz", fileobj=content, + compresslevel=9) as tgz: tgz.add(DATA_PATH, arcname=".") + content.seek(0, 0) - err = DB.upsert_job_cache("", file_name, content.getvalue(), job_name="certbot-renew") + err = DB.upsert_job_cache("", file_name, content.getvalue(), + job_name="certbot-renew") if err: - return jsonify({"status": "ko", "message": f"Failed to cache letsencrypt dir: {err}"}) + return jsonify({"status": "ko", + "message": f"Failed to cache letsencrypt " + f"dir: {err}"}) else: err = DB.checked_changes(["plugins"], ["letsencrypt"], True) if err: - return jsonify({"status": "ko", "message": f"Failed to cache letsencrypt dir: {err}"}) + return jsonify({"status": "ko", + "message": f"Failed to cache letsencrypt " + f"dir: {err}"}) + + debug_log(LOGGER, "Database cache updated successfully") + except Exception as e: - return jsonify({"status": "ok", "message": f"Successfully deleted certificate {cert_name}, but failed to cache letsencrypt dir: {e}"}) - return jsonify({"status": "ok", "message": f"Successfully deleted certificate {cert_name}"}) + error_msg = (f"Successfully deleted certificate {cert_name}, " + f"but failed to cache letsencrypt dir: {e}") + LOGGER.error(error_msg) + return jsonify({"status": "ok", "message": error_msg}) + + return jsonify({"status": "ok", + "message": f"Successfully deleted 
certificate " + f"{cert_name}"}) + else: - LOGGER.error(f"Failed to delete certificate {cert_name}: {delete_proc.stdout}") - return jsonify({"status": "ko", "message": f"Failed to delete certificate {cert_name}: {delete_proc.stdout}"}) + error_msg = (f"Failed to delete certificate {cert_name}: " + f"{delete_proc.stdout}") + LOGGER.error(error_msg) + return jsonify({"status": "ko", "message": error_msg}) @letsencrypt.route("/letsencrypt/<filename>") @login_required def letsencrypt_static(filename): - """ - Generalized handler for static files in the letsencrypt blueprint. - """ - return letsencrypt.send_static_file(filename) + # Serve static files for the Let's Encrypt blueprint. + is_debug = getenv("LOG_LEVEL") == "debug" + + debug_log(LOGGER, f"Serving static file: {filename}") + + return letsencrypt.send_static_file(filename) \ No newline at end of file diff --git a/src/common/core/letsencrypt/ui/blueprints/static/js/main.js b/src/common/core/letsencrypt/ui/blueprints/static/js/main.js index 8a023a3752..46f4740520 100644 --- a/src/common/core/letsencrypt/ui/blueprints/static/js/main.js +++ b/src/common/core/letsencrypt/ui/blueprints/static/js/main.js @@ -1,585 +1,779 @@ +// Log debug messages only when LOG_LEVEL is set to "debug"; the process +// reference is guarded because it is undefined in browsers +function debugLog(message) { + const logLevel = typeof process !== "undefined" ? process.env.LOG_LEVEL : undefined; + if (logLevel === "debug") { + console.debug(`[DEBUG] ${message}`); + } +} + +// Main initialization function that waits for all dependencies to load +// before initializing the Let's Encrypt certificate management interface (async function waitForDependencies() { - // Wait for jQuery - while (typeof jQuery === "undefined") { - await new Promise((resolve) => setTimeout(resolve, 100)); - } - - // Wait for $ to be available (in case of jQuery.noConflict()) - while (typeof $ === "undefined") { - await new Promise((resolve) => setTimeout(resolve, 100)); - } - - // Wait for DataTable to be available - while (typeof $.fn.DataTable === "undefined") { - await new Promise((resolve) =>
setTimeout(resolve, 100)); - } - - $(document).ready(function () { - // Ensure i18next is loaded before using it - const t = - typeof i18next !== "undefined" - ? i18next.t - : (key, fallback) => fallback || key; // Fallback - - var actionLock = false; - const isReadOnly = $("#is-read-only").val()?.trim() === "True"; - const userReadOnly = $("#user-read-only").val()?.trim() === "True"; - - const headers = [ - { - title: "Domain", - tooltip: "Domain name for the certificate", - }, - { - title: "Common Name", - tooltip: "Common Name (CN) in the certificate", - }, - { - title: "Issuer", - tooltip: "Certificate issuing authority", - }, - { - title: "Valid From", - tooltip: "Date from which the certificate is valid", - }, - { - title: "Valid To", - tooltip: "Date until which the certificate is valid", - }, - { - title: "Preferred Profile", - tooltip: "Preferred profile for the certificate", - }, - { - title: "Challenge", - tooltip: "Challenge type used for domain validation", - }, - { - title: "Key Type", - tooltip: "Type of key used in the certificate", - }, - ]; - - // Set up the delete confirmation modal - const setupDeleteCertModal = (certs) => { - const $modalBody = $("#deleteCertContent"); - $modalBody.empty(); // Clear previous content - - if (certs.length === 1) { - $modalBody.html( - `

You are about to delete the certificate for: ${certs[0].domain}

` - ); - $("#confirmDeleteCertBtn").data("cert-name", certs[0].domain); - } else { - const certList = certs - .map((cert) => `
  • ${cert.domain}
  • `) - .join(""); - $modalBody.html( - `

    You are about to delete these certificates:

    -
      ${certList}
    ` - ); - $("#confirmDeleteCertBtn").data( - "cert-names", - certs.map((c) => c.domain) - ); - } - }; - - // Set up error modal - const showErrorModal = (title, message) => { - $("#errorModalLabel").text(title); - $("#errorModalContent").html(message); - const errorModal = new bootstrap.Modal(document.getElementById("errorModal")); - errorModal.show(); - }; - - // Handle delete button click - $("#confirmDeleteCertBtn").on("click", function () { - const certName = $(this).data("cert-name"); - const certNames = $(this).data("cert-names"); - - if (certName) { - // Delete single certificate - deleteCertificate(certName); - } else if (certNames && Array.isArray(certNames)) { - // Delete multiple certificates one by one - const deleteNext = (index) => { - if (index < certNames.length) { - deleteCertificate(certNames[index], () => { - deleteNext(index + 1); - }); - } else { - // All deleted, close modal and reload table - $("#deleteCertModal").modal("hide"); - $("#letsencrypt").DataTable().ajax.reload(); - } - }; - deleteNext(0); - } + // Wait for jQuery + while (typeof jQuery === "undefined") { + await new Promise((resolve) => setTimeout(resolve, 100)); + } - // Hide modal after starting delete process - $("#deleteCertModal").modal("hide"); - }); + // Wait for $ to be available (in case of jQuery.noConflict()) + while (typeof $ === "undefined") { + await new Promise((resolve) => setTimeout(resolve, 100)); + } - function deleteCertificate(certName, callback) { - $.ajax({ - url: `${window.location.pathname}/delete`, - type: "POST", - contentType: "application/json", - data: JSON.stringify({ cert_name: certName }), - headers: { - "X-CSRFToken": $("#csrf_token").val(), - }, - success: function (response) { - if (response.status === "ok") { - if (callback) { - callback(); + // Wait for DataTable to be available + while (typeof $.fn.DataTable === "undefined") { + await new Promise((resolve) => setTimeout(resolve, 100)); + } + + $(document).ready(function () { + const 
logLevel = typeof process !== "undefined" ? process.env.LOG_LEVEL : undefined; + const isDebug = logLevel === "debug"; + + debugLog("Initializing Let's Encrypt certificate management"); + debugLog(`Log level: ${logLevel}`); + debugLog(`jQuery version: ${$.fn.jquery}`); + debugLog(`DataTables version: ${$.fn.DataTable.version}`); + + // Ensure i18next is loaded before using it + const t = + typeof i18next !== "undefined" + ? i18next.t + : (key, fallback) => fallback || key; + + var actionLock = false; + const isReadOnly = $("#is-read-only").val()?.trim() === "True"; + const userReadOnly = $("#user-read-only").val()?.trim() === "True"; + + debugLog("Application state initialized:"); + debugLog(`- Read-only mode: ${isReadOnly}`); + debugLog(`- User read-only: ${userReadOnly}`); + debugLog(`- Action lock: ${actionLock}`); + debugLog(`- CSRF token available: ${!!$("#csrf_token").val()}`); + + const headers = [ + { + title: "Domain", + tooltip: "Domain name for the certificate", + }, + { + title: "Common Name", + tooltip: "Common Name (CN) in the certificate", + }, + { + title: "Issuer", + tooltip: "Certificate issuing authority", + }, + { + title: "Valid From", + tooltip: "Date from which the certificate is valid", + }, + { + title: "Valid To", + tooltip: "Date until which the certificate is valid", + }, + { + title: "Preferred Profile", + tooltip: "Preferred profile for the certificate", + }, + { + title: "Challenge", + tooltip: "Challenge type used for domain validation", + }, + { + title: "Key Type", + tooltip: "Type of key used in the certificate", + }, + { + title: "OCSP", + tooltip: "Online Certificate Status Protocol support", + }, + ]; + + // Configure the delete confirmation modal for certificate deletion + // operations, supporting both single and multiple certificate + // deletion workflows + const setupDeleteCertModal = (certs) => { + debugLog(`Setting up delete modal for certificates: ${ + certs.map(c => c.domain).join(", ")}`); + debugLog(`Modal setup - certificate count: ${certs.length}`); + + const
$modalBody = $("#deleteCertContent"); + $modalBody.empty(); + + if (certs.length === 1) { + debugLog(`Configuring modal for single certificate: ${ + certs[0].domain}`); + + $modalBody.html( + `

    You are about to delete the certificate for: ` + + `${certs[0].domain}

    ` + ); + $("#confirmDeleteCertBtn").data("cert-name", + certs[0].domain); } else { - $("#letsencrypt").DataTable().ajax.reload(); + debugLog(`Configuring modal for multiple certificates: ${ + certs.map(c => c.domain).join(", ")}`); + + const certList = certs + .map((cert) => `
  • ${cert.domain}
  • `) + .join(""); + $modalBody.html( + `

    You are about to delete these certificates:

    +
      ${certList}
    ` + ); + $("#confirmDeleteCertBtn").data( + "cert-names", + certs.map((c) => c.domain) + ); } - } else { - // Handle 200 OK but with error in response - showErrorModal( - "Certificate Deletion Error", - `

    Error deleting certificate ${certName}:

    ${response.message || "Unknown error"}

    ` + + debugLog("Modal configuration completed"); + }; + + // Display error modal with specified title and message for user + // feedback during certificate management operations + const showErrorModal = (title, message) => { + debugLog(`Showing error modal: ${title} - ${message}`); + + $("#errorModalLabel").text(title); + $("#errorModalContent").text(message); + const errorModal = new bootstrap.Modal( + document.getElementById("errorModal") ); - if (callback) callback(); - else $("#letsencrypt").DataTable().ajax.reload(); - } - }, - error: function (xhr, status, error) { - console.error("Error deleting certificate:", error, xhr); - - // Create a more detailed error message - let errorMessage = `

    Failed to delete certificate ${certName}:

    `; - - if (xhr.responseJSON && xhr.responseJSON.message) { - errorMessage += `

    ${xhr.responseJSON.message}

    `; - } else if (xhr.responseText) { - try { - const parsedError = JSON.parse(xhr.responseText); - errorMessage += `

    ${parsedError.message || error}

    `; - } catch (e) { - // If can't parse JSON, use the raw response text if not too large - if (xhr.responseText.length < 200) { - errorMessage += `

    ${xhr.responseText}

    `; - } else { - errorMessage += `

    ${error || "Unknown error"}

    `; - } + errorModal.show(); + }; + + // Handle delete button click events for certificate deletion + // confirmation modal + $("#confirmDeleteCertBtn").on("click", function () { + const certName = $(this).data("cert-name"); + const certNames = $(this).data("cert-names"); + + debugLog(`Delete button clicked: certName=${certName}, ` + + `certNames=${certNames}`); + + if (certName) { + deleteCertificate(certName); + } else if (certNames && Array.isArray(certNames)) { + // Delete multiple certificates sequentially + const deleteNext = (index) => { + if (index < certNames.length) { + deleteCertificate(certNames[index], () => { + deleteNext(index + 1); + }); + } else { + $("#deleteCertModal").modal("hide"); + $("#letsencrypt").DataTable().ajax.reload(); + } + }; + deleteNext(0); } - } else { - errorMessage += `

    ${error || "Unknown error"}

    `; - } - showErrorModal("Certificate Deletion Failed", errorMessage); + $("#deleteCertModal").modal("hide"); + }); - if (callback) callback(); - else $("#letsencrypt").DataTable().ajax.reload(); - }, - }); - } + // Delete a single certificate via AJAX request with optional callback + // for sequential deletion operations + function deleteCertificate(certName, callback) { + debugLog("Starting certificate deletion process:"); + debugLog(`- Certificate name: ${certName}`); + debugLog(`- Has callback: ${!!callback}`); + debugLog(`- Request URL: ${ + window.location.pathname}/delete`); + + const requestData = { cert_name: certName }; + const csrfToken = $("#csrf_token").val(); + + debugLog(`Request payload: ${JSON.stringify(requestData)}`); + debugLog(`CSRF token: ${csrfToken ? "present" : "missing"}`); + + $.ajax({ + url: `${window.location.pathname}/delete`, + type: "POST", + contentType: "application/json", + data: JSON.stringify(requestData), + headers: { + "X-CSRFToken": csrfToken, + }, + beforeSend: function(xhr) { + debugLog(`AJAX request starting for: ${certName}`); + debugLog(`Request headers: ${ + xhr.getAllResponseHeaders()}`); + }, + success: function (response) { + debugLog("Delete response received:"); + debugLog(`- Status: ${response.status}`); + debugLog(`- Message: ${response.message}`); + debugLog(`- Full response: ${ + JSON.stringify(response)}`); + + if (response.status === "ok") { + debugLog(`Certificate deletion successful: ${ + certName}`); + + if (callback) { + debugLog("Executing callback function"); + callback(); + } else { + debugLog("Reloading DataTable data"); + $("#letsencrypt").DataTable().ajax.reload(); + } + } else { + debugLog(`Certificate deletion failed: ${ + response.message}`); + + showErrorModal( + "Certificate Deletion Error", + `Error deleting certificate ${certName}: ${ + response.message || "Unknown error"}` + ); + if (callback) callback(); + else $("#letsencrypt").DataTable().ajax.reload(); + } + }, + error: function (xhr, 
status, error) { + debugLog("AJAX error details:"); + debugLog(`- XHR status: ${xhr.status}`); + debugLog(`- Status text: ${status}`); + debugLog(`- Error: ${error}`); + debugLog(`- Response text: ${xhr.responseText}`); + debugLog(`- Response JSON: ${ + JSON.stringify(xhr.responseJSON)}`); + + console.error("Error deleting certificate:", error, xhr); + + let errorMessage = `Failed to delete certificate ` + + `${certName}: `; + + if (xhr.responseJSON && xhr.responseJSON.message) { + errorMessage += xhr.responseJSON.message; + } else if (xhr.responseText) { + try { + const parsedError = JSON.parse(xhr.responseText); + errorMessage += + parsedError.message || error; + } catch (e) { + debugLog(`Failed to parse error response: ${e}`); + if (xhr.responseText.length < 200) { + errorMessage += xhr.responseText; + } else { + errorMessage += error || "Unknown error"; + } + } + } else { + errorMessage += error || "Unknown error"; + } + + showErrorModal("Certificate Deletion Failed", + errorMessage); + + if (callback) callback(); + else $("#letsencrypt").DataTable().ajax.reload(); + }, + complete: function(xhr, status) { + debugLog("AJAX request completed:"); + debugLog(`- Final status: ${status}`); + debugLog(`- Certificate: ${certName}`); + } + }); + } - // DataTable Layout and Buttons - const layout = { - top1: { - searchPanes: { - viewTotal: true, - cascadePanes: true, - collapse: false, - columns: [2, 5, 6, 7], // Issuer, Preferred Profile, Challenge and Key Type - }, - }, - topStart: {}, - topEnd: { - search: true, - buttons: [ - { - extend: "auto_refresh", - className: - "btn btn-sm btn-outline-primary d-flex align-items-center", - }, - { - extend: "toggle_filters", - className: "btn btn-sm btn-outline-primary toggle-filters", - }, - ], - }, - bottomStart: { - pageLength: { - menu: [10, 25, 50, 100, { label: "All", value: -1 }], - }, - info: true, - }, - }; - - layout.topStart.buttons = [ - { - extend: "colvis", - columns: "th:not(:nth-child(-n+3)):not(:last-child)", - 
text: `${t( - "button.columns", - "Columns" - )}`, - className: "btn btn-sm btn-outline-primary rounded-start", - columnText: function (dt, idx, title) { - return `${idx + 1}. ${title}`; - }, - }, - { - extend: "colvisRestore", - text: `${t( - "button.reset_columns", - "Reset columns" - )}`, - className: "btn btn-sm btn-outline-primary d-none d-md-inline", - }, - { - extend: "collection", - text: `${t( - "button.export", - "Export" - )}`, - className: "btn btn-sm btn-outline-primary", - buttons: [ - { - extend: "copy", - text: `${t( - "button.copy_visible", - "Copy visible" - )}`, - exportOptions: { - columns: ":visible:not(:nth-child(-n+2)):not(:last-child)", + // DataTable Layout and Button configuration + const layout = { + top1: { + searchPanes: { + viewTotal: true, + cascadePanes: true, + collapse: false, + // Issuer, Preferred Profile, Challenge, Key Type, and OCSP + columns: [2, 5, 6, 7, 8], + }, }, - }, - { - extend: "csv", - text: `CSV`, - bom: true, - filename: "bw_certificates", - exportOptions: { - modifier: { search: "none" }, - columns: ":not(:nth-child(-n+2)):not(:last-child)", + topStart: {}, + topEnd: { + search: true, + buttons: [ + { + extend: "auto_refresh", + className: ( + "btn btn-sm btn-outline-primary " + + "d-flex align-items-center" + ), + }, + { + extend: "toggle_filters", + className: "btn btn-sm btn-outline-primary " + + "toggle-filters", + }, + ], }, - }, - { - extend: "excel", - text: `Excel`, - filename: "bw_certificates", - exportOptions: { - modifier: { search: "none" }, - columns: ":not(:nth-child(-n+2)):not(:last-child)", + bottomStart: { + pageLength: { + menu: [10, 25, 50, 100, + { label: "All", value: -1 }], + }, + info: true, }, - }, - ], - }, - { - extend: "collection", - text: `${t( - "button.actions", - "Actions" - )}`, - className: "btn btn-sm btn-outline-primary action-button disabled", - buttons: [{ extend: "delete_cert", className: "text-danger" }], - }, - ]; - - let autoRefresh = false; - let autoRefreshInterval = 
null; - const sessionAutoRefresh = sessionStorage.getItem("letsencryptAutoRefresh"); - - function toggleAutoRefresh() { - autoRefresh = !autoRefresh; - sessionStorage.setItem("letsencryptAutoRefresh", autoRefresh); - if (autoRefresh) { - $(".bx-loader") - .addClass("bx-spin") - .closest(".btn") - .removeClass("btn-outline-primary") - .addClass("btn-primary"); - if (autoRefreshInterval) clearInterval(autoRefreshInterval); - autoRefreshInterval = setInterval(() => { - if (!autoRefresh) { - clearInterval(autoRefreshInterval); - autoRefreshInterval = null; - } else { - $("#letsencrypt").DataTable().ajax.reload(null, false); - } - }, 10000); // 10 seconds - } else { - $(".bx-loader") - .removeClass("bx-spin") - .closest(".btn") - .removeClass("btn-primary") - .addClass("btn-outline-primary"); - if (autoRefreshInterval) { - clearInterval(autoRefreshInterval); - autoRefreshInterval = null; - } - } - } + }; - if (sessionAutoRefresh === "true") { - toggleAutoRefresh(); - } + debugLog("DataTable layout configuration:"); + debugLog(`- Search panes columns: ${ + layout.top1.searchPanes.columns.join(", ")}`); + debugLog(`- Page length options: ${ + JSON.stringify(layout.bottomStart.pageLength.menu)}`); + debugLog(`- Layout structure: ${JSON.stringify(layout)}`); + debugLog(`- Headers count: ${headers.length}`); + + layout.topStart.buttons = [ + { + extend: "colvis", + columns: "th:not(:nth-child(-n+3)):not(:last-child)", + text: ( + `${t( + "button.columns", + "Columns" + )}` + ), + className: "btn btn-sm btn-outline-primary rounded-start", + columnText: function (dt, idx, title) { + return `${idx + 1}. 
${title}`; + }, + }, + { + extend: "colvisRestore", + text: ( + `${t( + "button.reset_columns", + "Reset columns" + )}` + ), + className: "btn btn-sm btn-outline-primary d-none d-md-inline", + }, + { + extend: "collection", + text: ( + `${t( + "button.export", + "Export" + )}` + ), + className: "btn btn-sm btn-outline-primary", + buttons: [ + { + extend: "copy", + text: ( + `${t( + "button.copy_visible", + "Copy visible" + )}` + ), + exportOptions: { + columns: ( + ":visible:not(:nth-child(-n+2)):" + + "not(:last-child)" + ), + }, + }, + { + extend: "csv", + text: ( + `CSV` + ), + bom: true, + filename: "bw_certificates", + exportOptions: { + modifier: { search: "none" }, + columns: ( + ":not(:nth-child(-n+2)):not(:last-child)" + ), + }, + }, + { + extend: "excel", + text: ( + `Excel` + ), + filename: "bw_certificates", + exportOptions: { + modifier: { search: "none" }, + columns: ( + ":not(:nth-child(-n+2)):not(:last-child)" + ), + }, + }, + ], + }, + { + extend: "collection", + text: ( + `${t( + "button.actions", + "Actions" + )}` + ), + className: ( + "btn btn-sm btn-outline-primary action-button disabled" + ), + buttons: [ + { extend: "delete_cert", className: "text-danger" } + ], + }, + ]; + + let autoRefresh = false; + let autoRefreshInterval = null; + const sessionAutoRefresh = + sessionStorage.getItem("letsencryptAutoRefresh"); + + // Toggle auto-refresh functionality for DataTable data with + // visual feedback and interval management + function toggleAutoRefresh() { + autoRefresh = !autoRefresh; + sessionStorage.setItem("letsencryptAutoRefresh", autoRefresh); + + debugLog(`Auto-refresh toggled: ${autoRefresh}`); + + if (autoRefresh) { + $(".bx-loader") + .addClass("bx-spin") + .closest(".btn") + .removeClass("btn-outline-primary") + .addClass("btn-primary"); + + if (autoRefreshInterval) clearInterval(autoRefreshInterval); + + autoRefreshInterval = setInterval(() => { + if (!autoRefresh) { + clearInterval(autoRefreshInterval); + autoRefreshInterval = null; 
+ } else { + $("#letsencrypt").DataTable().ajax.reload(null, + false); + } + }, 10000); + } else { + $(".bx-loader") + .removeClass("bx-spin") + .closest(".btn") + .removeClass("btn-primary") + .addClass("btn-outline-primary"); + + if (autoRefreshInterval) { + clearInterval(autoRefreshInterval); + autoRefreshInterval = null; + } + } + } - const getSelectedCertificates = () => { - const certs = []; - $("tr.selected").each(function () { - const $row = $(this); - const domain = $row.find("td:eq(2)").text().trim(); - certs.push({ - domain: domain, - }); - }); - return certs; - }; - - $.fn.dataTable.ext.buttons.auto_refresh = { - text: '  Auto refresh', - action: (e, dt, node, config) => { - toggleAutoRefresh(); - }, - }; - - $.fn.dataTable.ext.buttons.delete_cert = { - text: `Delete certificate`, - action: function (e, dt, node, config) { - if (isReadOnly) { - alert( - t( - "alert.readonly_mode", - "This action is not allowed in read-only mode." - ) - ); - return; + if (sessionAutoRefresh === "true") { + toggleAutoRefresh(); } - if (actionLock) return; - actionLock = true; - $(".dt-button-background").click(); - - const certs = getSelectedCertificates(); - if (certs.length === 0) { - actionLock = false; - return; + + // Extract currently selected certificates from DataTable rows + // and return their domain information for bulk operations + const getSelectedCertificates = () => { + const certs = []; + $("tr.selected").each(function () { + const $row = $(this); + const domain = $row.find("td:eq(2)").text().trim(); + certs.push({ domain: domain }); + }); + + debugLog(`Selected certificates: ${ + certs.map(c => c.domain).join(", ")}`); + + return certs; + }; + + // Custom DataTable button for auto-refresh functionality + $.fn.dataTable.ext.buttons.auto_refresh = { + text: ( + '' + + '  ' + + 'Auto refresh' + ), + action: (e, dt, node, config) => { + toggleAutoRefresh(); + }, + }; + + // Custom DataTable button for certificate deletion with + // read-only mode checks and 
selection validation + $.fn.dataTable.ext.buttons.delete_cert = { + text: ( + `` + + `Delete certificate` + ), + action: function (e, dt, node, config) { + if (isReadOnly) { + alert( + t( + "alert.readonly_mode", + "This action is not allowed in read-only mode." + ) + ); + return; + } + + if (actionLock) return; + actionLock = true; + $(".dt-button-background").click(); + + const certs = getSelectedCertificates(); + if (certs.length === 0) { + actionLock = false; + return; + } + + setupDeleteCertModal(certs); + + const deleteModal = new bootstrap.Modal( + document.getElementById("deleteCertModal") + ); + deleteModal.show(); + + actionLock = false; + }, + }; + + // Build column definitions for DataTable configuration with + // responsive controls, selection, and search pane settings + function buildColumnDefs() { + return [ + { + orderable: false, + className: "dtr-control", + targets: 0 + }, + { + orderable: false, + render: DataTable.render.select(), + targets: 1 + }, + { type: "string", targets: 2 }, + { orderable: true, targets: -1 }, + { + targets: [5, 6], + render: function (data, type, row) { + if (type === "display" || type === "filter") { + const date = new Date(data); + if (!isNaN(date.getTime())) { + return date.toLocaleString(); + } + } + return data; + }, + }, + { + searchPanes: { + show: true, + combiner: "or", + header: t("searchpane.issuer", "Issuer"), + }, + targets: 2, + }, + { + searchPanes: { + show: true, + header: t("searchpane.preferred_profile", + "Preferred Profile"), + combiner: "or", + }, + targets: 5, + }, + { + searchPanes: { + show: true, + header: t("searchpane.challenge", "Challenge"), + combiner: "or", + }, + targets: 6, + }, + { + searchPanes: { + show: true, + header: t("searchpane.key_type", "Key Type"), + combiner: "or", + }, + targets: 7, + }, + { + searchPanes: { + show: true, + header: t("searchpane.ocsp", "OCSP Support"), + combiner: "or", + }, + targets: 8, + }, + ]; } - setupDeleteCertModal(certs); - - // Show the modal - 
const deleteModal = new bootstrap.Modal( - document.getElementById("deleteCertModal") - ); - deleteModal.show(); - - actionLock = false; - }, - }; - - // Create columns configuration - function buildColumnDefs() { - return [ - { orderable: false, className: "dtr-control", targets: 0 }, - { orderable: false, render: DataTable.render.select(), targets: 1 }, - { type: "string", targets: 2 }, // domain - { orderable: true, targets: -1 }, - { - targets: [5, 6], - render: function (data, type, row) { - if (type === "display" || type === "filter") { - const date = new Date(data); - if (!isNaN(date.getTime())) { - return date.toLocaleString(); - } - } - return data; - }, - }, - { - searchPanes: { - show: true, - combiner: "or", - header: t("searchpane.issuer", "Issuer"), - }, - targets: 2, // Issuer column - }, - { - searchPanes: { - show: true, - header: t("searchpane.preferred_profile", "Preferred Profile"), - combiner: "or", - }, - targets: 5, // Preferred Profile column - }, - { - searchPanes: { - show: true, - header: t("searchpane.challenge", "Challenge"), - combiner: "or", - }, - targets: 6, // Challenge column - }, - { - searchPanes: { - show: true, - header: t("searchpane.key_type", "Key Type"), - combiner: "or", - }, - targets: 7, // Key Type column - }, - ]; - } - // Define the columns for the DataTable - function buildColumns() { - return [ - { - data: null, - defaultContent: "", - orderable: false, - className: "dtr-control", - }, - { data: null, defaultContent: "", orderable: false }, - { - data: "domain", - title: "Domain", - }, - { - data: "common_name", - title: "Common Name", - }, - { - data: "issuer", - title: "Issuer", - }, - { - data: "valid_from", - title: "Valid From", - }, - { - data: "valid_to", - title: "Valid To", - }, - { - data: "preferred_profile", - title: "Preferred Profile", - }, - { - data: "challenge", - title: "Challenge", - }, - { - data: "key_type", - title: "Key Type", - }, - { - data: "serial_number", - title: "Serial Number", - }, - 
{ - data: "fingerprint", - title: "Fingerprint", - }, - { - data: "version", - title: "Version", - }, - ]; - } + // Define the columns for the DataTable with data mappings + // and display configurations for certificate information + function buildColumns() { + return [ + { + data: null, + defaultContent: "", + orderable: false, + className: "dtr-control", + }, + { data: null, defaultContent: "", orderable: false }, + { data: "domain", title: "Domain" }, + { data: "common_name", title: "Common Name" }, + { data: "issuer", title: "Issuer" }, + { data: "valid_from", title: "Valid From" }, + { data: "valid_to", title: "Valid To" }, + { data: "preferred_profile", title: "Preferred Profile" }, + { data: "challenge", title: "Challenge" }, + { data: "key_type", title: "Key Type" }, + { data: "ocsp_support", title: "OCSP" }, + { data: "serial_number", title: "Serial Number" }, + { data: "fingerprint", title: "Fingerprint" }, + { data: "version", title: "Version" }, + ]; + } - // Utility function to manage header tooltips - function updateHeaderTooltips(selector, headers) { - $(selector) - .find("th") - .each((index, element) => { - const $th = $(element); - const tooltip = headers[index] ? headers[index].tooltip : ""; - if (!tooltip) return; - - $th.attr({ - "data-bs-toggle": "tooltip", - "data-bs-placement": "bottom", - title: tooltip, - }); - }); + // Manage header tooltips for DataTable columns by applying + // Bootstrap tooltip attributes to table headers + function updateHeaderTooltips(selector, headers) { + $(selector) + .find("th") + .each((index, element) => { + const $th = $(element); + const tooltip = headers[index] ? 
+ headers[index].tooltip : ""; + if (!tooltip) return; + + $th.attr({ + "data-bs-toggle": "tooltip", + "data-bs-placement": "bottom", + title: tooltip, + }); + }); + + $('[data-bs-toggle="tooltip"]').tooltip("dispose").tooltip(); + } - $('[data-bs-toggle="tooltip"]').tooltip("dispose").tooltip(); - } + // Initialize the DataTable with complete configuration including + // server-side processing, AJAX data loading, and UI components + const letsencrypt_config = { + tableSelector: "#letsencrypt", + tableName: "letsencrypt", + columnVisibilityCondition: (column) => column > 2 && column < 14, + dataTableOptions: { + columnDefs: buildColumnDefs(), + order: [[2, "asc"]], + autoFill: false, + responsive: true, + select: { + style: "multi+shift", + selector: "td:nth-child(2)", + headerCheckbox: true, + }, + layout: layout, + processing: true, + serverSide: true, + ajax: { + url: `${window.location.pathname}/fetch`, + type: "POST", + data: function (d) { + debugLog(`DataTable AJAX request data: ${ + JSON.stringify(d)}`); + debugLog("Request parameters:"); + debugLog(`- Draw: ${d.draw}`); + debugLog(`- Start: ${d.start}`); + debugLog(`- Length: ${d.length}`); + debugLog(`- Search value: ${d.search?.value}`); + + d.csrf_token = $("#csrf_token").val(); + return d; + }, + error: function (jqXHR, textStatus, errorThrown) { + debugLog("DataTable AJAX error details:"); + debugLog(`- Status: ${jqXHR.status}`); + debugLog(`- Status text: ${textStatus}`); + debugLog(`- Error: ${errorThrown}`); + debugLog(`- Response text: ${jqXHR.responseText}`); + debugLog(`- Response headers: ${ + jqXHR.getAllResponseHeaders()}`); + + console.error("DataTables AJAX error:", + textStatus, errorThrown); + + $("#letsencrypt").addClass("d-none"); + $("#letsencrypt-waiting") + .removeClass("d-none") + .text("Error loading certificates. 
" + + "Please try refreshing the page.") + .addClass("text-danger"); + + $(".dataTables_processing").hide(); + }, + success: function(data, textStatus, jqXHR) { + debugLog("DataTable AJAX success:"); + debugLog(`- Records total: ${data.recordsTotal}`); + debugLog(`- Records filtered: ${ + data.recordsFiltered}`); + debugLog(`- Data length: ${data.data?.length}`); + debugLog(`- Draw number: ${data.draw}`); + } + }, + columns: buildColumns(), + initComplete: function (settings, json) { + debugLog(`DataTable initialized with settings: ${ + JSON.stringify(settings)}`); + + $("#letsencrypt_wrapper .btn-secondary") + .removeClass("btn-secondary"); + + $("#letsencrypt-waiting").addClass("d-none"); + $("#letsencrypt").removeClass("d-none"); + + if (isReadOnly) { + const titleKey = userReadOnly + ? "tooltip.readonly_user_action_disabled" + : "tooltip.readonly_db_action_disabled"; + const defaultTitle = userReadOnly + ? "Your account is readonly, action disabled." + : "The database is in readonly, action disabled."; + } + }, + headerCallback: function (thead) { + updateHeaderTooltips(thead, headers); + }, + }, + }; - // Initialize the DataTable with columns and configuration - const letsencrypt_config = { - tableSelector: "#letsencrypt", - tableName: "letsencrypt", - columnVisibilityCondition: (column) => column > 2 && column < 13, - dataTableOptions: { - columnDefs: buildColumnDefs(), - order: [[2, "asc"]], // Sort by domain name - autoFill: false, - responsive: true, - select: { - style: "multi+shift", - selector: "td:nth-child(2)", - headerCheckbox: true, - }, - layout: layout, - processing: true, - serverSide: true, - ajax: { - url: `${window.location.pathname}/fetch`, - type: "POST", - data: function (d) { - d.csrf_token = $("#csrf_token").val(); - return d; - }, - // Add error handling for ajax requests - error: function (jqXHR, textStatus, errorThrown) { - console.error("DataTables AJAX error:", textStatus, errorThrown); - $("#letsencrypt").addClass("d-none"); - 
$("#letsencrypt-waiting") - .removeClass("d-none") - .text( - "Error loading certificates. Please try refreshing the page." - ) - .addClass("text-danger"); - // Remove any loading indicators - $(".dataTables_processing").hide(); - }, - }, - columns: buildColumns(), - initComplete: function (settings, json) { - $("#letsencrypt_wrapper .btn-secondary").removeClass("btn-secondary"); - - // Hide loading message and show table - $("#letsencrypt-waiting").addClass("d-none"); - $("#letsencrypt").removeClass("d-none"); - - if (isReadOnly) { - const titleKey = userReadOnly - ? "tooltip.readonly_user_action_disabled" - : "tooltip.readonly_db_action_disabled"; - const defaultTitle = userReadOnly - ? "Your account is readonly, action disabled." - : "The database is in readonly, action disabled."; - } - }, - headerCallback: function (thead) { - updateHeaderTooltips(thead, headers); - }, - }, - }; - - const dt = initializeDataTable(letsencrypt_config); - dt.on("draw.dt", function () { - updateHeaderTooltips(dt.table().header(), headers); - $(".tooltip").remove(); - }); - dt.on("column-visibility.dt", function (e, settings, column, state) { - updateHeaderTooltips(dt.table().header(), headers); - $(".tooltip").remove(); - }); + const dt = initializeDataTable(letsencrypt_config); + + dt.on("draw.dt", function () { + updateHeaderTooltips(dt.table().header(), headers); + $(".tooltip").remove(); + }); + + dt.on("column-visibility.dt", function (e, settings, column, state) { + updateHeaderTooltips(dt.table().header(), headers); + $(".tooltip").remove(); + }); - // Add selection event handler for toggle action button - dt.on("select.dt deselect.dt", function () { - const count = dt.rows({ selected: true }).count(); - $(".action-button").toggleClass("disabled", count === 0); + // Toggle action button based on row selection state + dt.on("select.dt deselect.dt", function () { + const count = dt.rows({ selected: true }).count(); + $(".action-button").toggleClass("disabled", count === 0); + 
+ debugLog(`Selection changed, count: ${count}`); + }); }); - }); -})(); +})(); \ No newline at end of file diff --git a/src/common/core/letsencrypt/ui/hooks.py b/src/common/core/letsencrypt/ui/hooks.py index 0ca87d78b6..79e5ddee1d 100644 --- a/src/common/core/letsencrypt/ui/hooks.py +++ b/src/common/core/letsencrypt/ui/hooks.py @@ -1,30 +1,87 @@ +from logging import getLogger +from os import getenv from flask import request -# Default column visibility settings for letsencrypt tables +# Default column visibility settings for Let's Encrypt certificate tables +# Key represents column index, value indicates if column is visible by default COLUMNS_PREFERENCES_DEFAULTS = { - "3": True, - "4": True, - "5": True, - "6": True, - "7": True, - "8": True, - "9": False, - "10": False, - "11": True, + "3": True, # Common Name + "4": True, # Issuer + "5": True, # Valid From + "6": True, # Valid To + "7": True, # Preferred Profile + "8": True, # Challenge + "9": True, # Key Type + "10": True, # OCSP Support + "11": False, # Serial Number (hidden by default) + "12": False, # Fingerprint (hidden by default) + "13": True, # Version } +def debug_log(logger, message): + # Log debug messages only when LOG_LEVEL environment variable is set to + # "debug" + if getenv("LOG_LEVEL") == "debug": + logger.debug(f"[DEBUG] {message}") + + def context_processor(): - """ - Flask context processor to inject variables into templates. - - This adds: - - Column preference defaults for tables - - Extra pages visibility based on user permissions - """ - if request.path.startswith(("/check", "/setup", "/loading", "/login", "/totp", "/logout")): + # Flask context processor to inject variables into templates. + # + # Provides template context data for the Let's Encrypt certificate + # management interface. Injects column preferences and other UI + # configuration data that templates need for proper rendering. 
+    logger = getLogger("UI")
+
+    debug_log(logger, "Context processor called")
+    debug_log(logger, f"Request path: {request.path}")
+    debug_log(logger, f"Request method: {request.method}")
+    debug_log(logger, f"Request endpoint: {getattr(request, 'endpoint', 'unknown')}")
+
+    # Skip context processing for system/auth pages that don't need it
+    excluded_paths = ["/check", "/setup", "/loading", "/login", "/totp", "/logout"]
+
+    # Check if the current path should be excluded
+    path_excluded = request.path.startswith(tuple(excluded_paths))
+
+    if path_excluded:
+        debug_log(logger, f"Path {request.path} is excluded from context processing")
+        for excluded_path in excluded_paths:
+            if request.path.startswith(excluded_path):
+                debug_log(logger, f"  Matched exclusion pattern: {excluded_path}")
+                break
         return None
 
-    data = {"columns_preferences_defaults_letsencrypt": COLUMNS_PREFERENCES_DEFAULTS}
+    debug_log(logger, f"Processing context for path: {request.path}")
+    debug_log(logger, "Column preferences to inject:")
+    column_names = {
+        "3": "Common Name", "4": "Issuer", "5": "Valid From",
+        "6": "Valid To", "7": "Preferred Profile", "8": "Challenge",
+        "9": "Key Type", "10": "OCSP Support", "11": "Serial Number",
+        "12": "Fingerprint", "13": "Version",
+    }
+    for col_id, visible in COLUMNS_PREFERENCES_DEFAULTS.items():
+        col_name = column_names.get(col_id, f"Column {col_id}")
+        debug_log(logger, f"  {col_name} (#{col_id}): {'visible' if visible else 'hidden'}")
+
+    # Prepare context data for templates
+    data = {"columns_preferences_defaults_letsencrypt": COLUMNS_PREFERENCES_DEFAULTS}
+
+    debug_log(logger, f"Context processor returning {len(data)} variables")
+    debug_log(logger, f"Context data keys: {list(data.keys())}")
+    debug_log(logger, f"Let's Encrypt preferences: {len(COLUMNS_PREFERENCES_DEFAULTS)} columns configured")
 
-    return data
+    return data
\ No newline at end of file
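The `context_processor` in hooks.py boils down to a simple decision: skip system/auth paths, otherwise hand templates the column-visibility defaults, with debug output gated on `LOG_LEVEL=debug`. A minimal standalone sketch of that logic (the `context_for` helper and the trimmed defaults dict are illustrative, not part of the plugin):

```python
from os import getenv

# Trimmed copy of the plugin's defaults: column index -> visible by default
COLUMNS_PREFERENCES_DEFAULTS = {"3": True, "10": True, "11": False}

# System/auth pages that skip context injection
EXCLUDED_PATHS = ("/check", "/setup", "/loading", "/login", "/totp", "/logout")


def debug_log(message):
    # Mirrors hooks.py: only emit when LOG_LEVEL=debug
    if getenv("LOG_LEVEL") == "debug":
        print(f"[DEBUG] {message}")


def context_for(path):
    # Same shape as context_processor(): None for excluded paths,
    # otherwise the dict of template variables to inject
    if path.startswith(EXCLUDED_PATHS):
        debug_log(f"Path {path} is excluded from context processing")
        return None
    return {"columns_preferences_defaults_letsencrypt": COLUMNS_PREFERENCES_DEFAULTS}


print(context_for("/login"))        # excluded path -> None
print(context_for("/letsencrypt"))  # -> dict with the defaults
```

Because `str.startswith` accepts a tuple of prefixes, the whole exclusion list collapses to one call, which is also why the real hook wraps its list in `tuple(excluded_paths)`.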