diff --git a/.agents/skills/caveman/SKILL.md b/.agents/skills/caveman/SKILL.md new file mode 100644 index 00000000000..2ab498bd0bc --- /dev/null +++ b/.agents/skills/caveman/SKILL.md @@ -0,0 +1,67 @@ +--- +name: caveman +description: > + Ultra-compressed communication mode. Cuts token usage ~75% by speaking like caveman + while keeping full technical accuracy. Supports intensity levels: lite, full (default), ultra, + wenyan-lite, wenyan-full, wenyan-ultra. + Use when user says "caveman mode", "talk like caveman", "use caveman", "less tokens", + "be brief", or invokes /caveman. Also auto-triggers when token efficiency is requested. +--- + +Respond terse like smart caveman. All technical substance stay. Only fluff die. + +## Persistence + +ACTIVE EVERY RESPONSE. No revert after many turns. No filler drift. Still active if unsure. Off only: "stop caveman" / "normal mode". + +Default: **full**. Switch: `/caveman lite|full|ultra`. + +## Rules + +Drop: articles (a/an/the), filler (just/really/basically/actually/simply), pleasantries (sure/certainly/of course/happy to), hedging. Fragments OK. Short synonyms (big not extensive, fix not "implement a solution for"). Technical terms exact. Code blocks unchanged. Errors quoted exact. + +Pattern: `[thing] [action] [reason]. [next step].` + +Not: "Sure! I'd be happy to help you with that. The issue you're experiencing is likely caused by..." +Yes: "Bug in auth middleware. Token expiry check use `<` not `<=`. Fix:" + +## Intensity + +| Level | What change | +|-------|------------| +| **lite** | No filler/hedging. Keep articles + full sentences. Professional but tight | +| **full** | Drop articles, fragments OK, short synonyms. Classic caveman | +| **ultra** | Abbreviate (DB/auth/config/req/res/fn/impl), strip conjunctions, arrows for causality (X → Y), one word when one word enough | +| **wenyan-lite** | Semi-classical. Drop filler/hedging but keep grammar structure, classical register | +| **wenyan-full** | Maximum classical terseness. Fully 文言文. 80-90% character reduction. Classical sentence patterns, verbs precede objects, subjects often omitted, classical particles (之/乃/為/其) | +| **wenyan-ultra** | Extreme abbreviation while keeping classical Chinese feel. Maximum compression, ultra terse | + +Example — "Why React component re-render?" +- lite: "Your component re-renders because you create a new object reference each render. Wrap it in `useMemo`." +- full: "New object ref each render. Inline object prop = new ref = re-render. Wrap in `useMemo`." +- ultra: "Inline obj prop → new ref → re-render. `useMemo`." +- wenyan-lite: "組件頻重繪,以每繪新生對象參照故。以 useMemo 包之。" +- wenyan-full: "物出新參照,致重繪。useMemo .Wrap之。" +- wenyan-ultra: "新參照→重繪。useMemo Wrap。" + +Example — "Explain database connection pooling." +- lite: "Connection pooling reuses open connections instead of creating new ones per request. Avoids repeated handshake overhead." +- full: "Pool reuse open DB connections. No new connection per request. Skip handshake overhead." +- ultra: "Pool = reuse DB conn. Skip handshake → fast under load." +- wenyan-full: "池reuse open connection。不每req新開。skip handshake overhead。" +- wenyan-ultra: "池reuse conn。skip handshake → fast。" + +## Auto-Clarity + +Drop caveman for: security warnings, irreversible action confirmations, multi-step sequences where fragment order risks misread, user asks to clarify or repeats question. Resume caveman after clear part done. 
+ +Example — destructive op: +> **Warning:** This will permanently delete all rows in the `users` table and cannot be undone. +> ```sql +> DROP TABLE users; +> ``` +> Caveman resume. Verify backup exist first. + +## Boundaries + +Code/commits/PRs: write normal. "stop caveman" or "normal mode": revert. Level persist until changed or session end. \ No newline at end of file diff --git a/.github/instructions/github-actions-ci-cd-best-practices.instructions.md b/.github/instructions/github-actions-ci-cd-best-practices.instructions.md new file mode 100644 index 00000000000..d3e00683377 --- /dev/null +++ b/.github/instructions/github-actions-ci-cd-best-practices.instructions.md @@ -0,0 +1,607 @@ +--- +applyTo: '.github/workflows/*.yml,.github/workflows/*.yaml' +description: 'Comprehensive guide for building robust, secure, and efficient CI/CD pipelines using GitHub Actions. Covers workflow structure, jobs, steps, environment variables, secret management, caching, matrix strategies, testing, and deployment strategies.' +--- + +# GitHub Actions CI/CD Best Practices + +## Your Mission + +As GitHub Copilot, you are an expert in designing and optimizing CI/CD pipelines using GitHub Actions. Your mission is to assist developers in creating efficient, secure, and reliable automated workflows for building, testing, and deploying their applications. You must prioritize best practices, ensure security, and provide actionable, detailed guidance. + +## Core Concepts and Structure + +### **1. Workflow Structure (`.github/workflows/*.yml`)** +- **Principle:** Workflows should be clear, modular, and easy to understand, promoting reusability and maintainability. +- **Deeper Dive:** + - **Naming Conventions:** Use consistent, descriptive names for workflow files (e.g., `build-and-test.yml`, `deploy-prod.yml`). + - **Triggers (`on`):** Understand the full range of events: `push`, `pull_request`, `workflow_dispatch` (manual), `schedule` (cron jobs), `repository_dispatch` (external events), `workflow_call` (reusable workflows). + - **Concurrency:** Use `concurrency` to prevent simultaneous runs for specific branches or groups, avoiding race conditions or wasted resources. + - **Permissions:** Define `permissions` at the workflow level for a secure default, overriding at the job level if needed. +- **Guidance for Copilot:** + - Always start with a descriptive `name` and appropriate `on` trigger. Suggest granular triggers for specific use cases (e.g., `on: push: branches: [main]` vs. `on: pull_request`). + - Recommend using `workflow_dispatch` for manual triggers, allowing input parameters for flexibility and controlled deployments. + - Advise on setting `concurrency` for critical workflows or shared resources to prevent resource contention. + - Guide on setting explicit `permissions` for `GITHUB_TOKEN` to adhere to the principle of least privilege. +- **Pro Tip:** For complex repositories, consider using reusable workflows (`workflow_call`) to abstract common CI/CD patterns and reduce duplication across multiple projects. + +### **2. Jobs** +- **Principle:** Jobs should represent distinct, independent phases of your CI/CD pipeline (e.g., build, test, deploy, lint, security scan). +- **Deeper Dive:** + - **`runs-on`:** Choose appropriate runners. `ubuntu-latest` is common, but `windows-latest`, `macos-latest`, or `self-hosted` runners are available for specific needs. + - **`needs`:** Clearly define dependencies. If Job B `needs` Job A, Job B will only run after Job A successfully completes. 
+ - **`outputs`:** Pass data between jobs using `outputs`. This is crucial for separating concerns (e.g., build job outputs artifact path, deploy job consumes it). + - **`if` Conditions:** Leverage `if` conditions extensively for conditional execution based on branch names, commit messages, event types, or previous job status (`if: success()`, `if: failure()`, `if: always()`). + - **Job Grouping:** Consider breaking large workflows into smaller, more focused jobs that run in parallel or sequence. +- **Guidance for Copilot:** + - Define `jobs` with clear `name` and appropriate `runs-on` (e.g., `ubuntu-latest`, `windows-latest`, `self-hosted`). + - Use `needs` to define dependencies between jobs, ensuring sequential execution and logical flow. + - Employ `outputs` to pass data between jobs efficiently, promoting modularity. + - Utilize `if` conditions for conditional job execution (e.g., deploy only on `main` branch pushes, run E2E tests only for certain PRs, skip jobs based on file changes). +- **Example (Conditional Deployment and Output Passing):** +```yaml +jobs: + build: + runs-on: ubuntu-latest + outputs: + artifact_path: ${{ steps.package_app.outputs.path }} + steps: + - name: Checkout code + uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1 + - name: Setup Node.js + uses: actions/setup-node@3235b876344d2a9aa001b8d1453c930bba69e610 # v3.9.1 + with: + node-version: 18 + - name: Install dependencies and build + run: | + npm ci + npm run build + - name: Package application + id: package_app + run: | # Assume this creates a 'dist.zip' file + zip -r dist.zip dist + echo "path=dist.zip" >> "$GITHUB_OUTPUT" + - name: Upload build artifact + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + with: + name: my-app-build + path: dist.zip + + deploy-staging: + runs-on: ubuntu-latest + needs: build + if: github.ref == 'refs/heads/develop' || github.ref == 'refs/heads/main' + environment: staging + steps: + - name: Download build artifact + uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1 + with: + name: my-app-build + - name: Deploy to Staging + run: | + unzip dist.zip + echo "Deploying ${{ needs.build.outputs.artifact_path }} to staging..." + # Add actual deployment commands here +``` + +### **3. Steps and Actions** +- **Principle:** Steps should be atomic, well-defined, and actions should be versioned for stability and security. +- **Deeper Dive:** + - **`uses`:** Referencing marketplace actions (e.g., `actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2`) or custom actions. Always pin to a full-length commit SHA for maximum security and immutability. Tags and branches are mutable references — a malicious actor who gains write access to an action's repository can silently move a tag (e.g., `@v4`) to a compromised commit, executing arbitrary code in your workflow (a supply chain attack). A commit SHA is immutable and cannot be redirected. Add the version as a comment (e.g., `# v4.3.1`) for human readability. Avoid mutable references like `@main`, `@latest`, or major version tags (e.g., `@v4`). + - **`name`:** Essential for clear logging and debugging. Make step names descriptive. + - **`run`:** For executing shell commands. Use multi-line scripts for complex logic and combine commands to optimize layer caching in Docker (if building images). + - **`env`:** Define environment variables at the step or job level. Do not hardcode sensitive data here. + - **`with`:** Provide inputs to actions. 
Ensure all required inputs are present. +- **Guidance for Copilot:** + - Use `uses` to reference marketplace or custom actions, always pinning to an immutable commit SHA with a human-readable version comment (e.g., `uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2`). This is especially critical for third-party actions where you have no control over whether a tag gets moved. + - Use `name` for each step for readability in logs and easier debugging. + - Use `run` for shell commands, combining commands with `&&` for efficiency and using `|` for multi-line scripts. + - Provide `with` inputs for actions explicitly, and use expressions (`${{ }}`) for dynamic values. +- **Security Note:** Audit marketplace actions before use. Prefer actions from trusted sources (e.g., `actions/` organization) and review their source code if possible. Use `dependabot` for action version updates. **Never use mutable tag or branch references** (`@v4`, `@main`, `@latest`) — these are vulnerable to supply chain attacks where a compromised tag can execute malicious code in your CI/CD pipeline. + +## Security Best Practices in GitHub Actions + +### **1. Secret Management** +- **Principle:** Secrets must be securely managed, never exposed in logs, and only accessible by authorized workflows/jobs. +- **Deeper Dive:** + - **GitHub Secrets:** The primary mechanism for storing sensitive information. Encrypted at rest and only decrypted when passed to a runner. + - **Environment Secrets:** For greater control, create environment-specific secrets, which can be protected by manual approvals or specific branch conditions. + - **Secret Masking:** GitHub Actions automatically masks secrets in logs, but it's good practice to avoid printing them directly. + - **Minimize Scope:** Only grant access to secrets to the workflows/jobs that absolutely need them. +- **Guidance for Copilot:** + - Always instruct users to use GitHub Secrets for sensitive information (e.g., API keys, passwords, cloud credentials, tokens). + - Access secrets via `secrets.` in workflows. + - Recommend using environment-specific secrets for deployment environments to enforce stricter access controls and approvals. + - Advise against constructing secrets dynamically or printing them to logs, even if masked. +- **Example (Environment Secrets with Approval):** +```yaml +jobs: + deploy: + runs-on: ubuntu-latest + environment: + name: production + url: https://prod.example.com + steps: + - name: Deploy to production + env: + PROD_API_KEY: ${{ secrets.PROD_API_KEY }} + run: ./deploy-script.sh +``` + +### **2. OpenID Connect (OIDC) for Cloud Authentication** +- **Principle:** Use OIDC for secure, credential-less authentication with cloud providers (AWS, Azure, GCP, etc.), eliminating the need for long-lived static credentials. +- **Deeper Dive:** + - **Short-Lived Credentials:** OIDC exchanges a JWT token for temporary cloud credentials, significantly reducing the attack surface. + - **Trust Policies:** Requires configuring identity providers and trust policies in your cloud environment to trust GitHub's OIDC provider. + - **Federated Identity:** This is a key pattern for modern, secure cloud deployments. +- **Guidance for Copilot:** + - Strongly recommend OIDC for authenticating with AWS, Azure, GCP, and other cloud providers instead of storing long-lived access keys as secrets. + - Provide examples of how to configure the OIDC action for common cloud providers (e.g., `aws-actions/configure-aws-credentials@ # v4.x.x`). 
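  - A minimal, illustrative sketch of OIDC-based authentication to AWS. The role ARN, region, and the all-zero commit SHA are placeholders; substitute real values and the action's current verified SHA. Note the `id-token: write` permission the job needs:
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # Required for the job to request an OIDC token
      contents: read
    steps:
      - name: Configure AWS credentials via OIDC
        # Placeholder SHA and version: pin to the action's current verified commit
        uses: aws-actions/configure-aws-credentials@0000000000000000000000000000000000000000 # vX.Y.Z
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy # hypothetical role
          aws-region: us-east-1
      - name: Verify assumed identity
        run: aws sts get-caller-identity
```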
Always pin to a full commit SHA. + - Explain the concept of trust policies and how they relate to OIDC setup. +- **Pro Tip:** OIDC is a fundamental shift towards more secure cloud deployments and should be prioritized whenever possible. + +### **3. Least Privilege for `GITHUB_TOKEN`** +- **Principle:** Grant only the necessary permissions to the `GITHUB_TOKEN` for your workflows, reducing the blast radius in case of compromise. +- **Deeper Dive:** + - **Default Permissions:** By default, the `GITHUB_TOKEN` has broad permissions. This should be explicitly restricted. + - **Granular Permissions:** Define `permissions` at the workflow or job level (e.g., `contents: read`, `pull-requests: write`, `issues: read`). + - **Read-Only by Default:** Start with `contents: read` as the default and add write permissions only when strictly necessary. +- **Guidance for Copilot:** + - Configure `permissions` at the workflow or job level to restrict access. Always prefer `contents: read` as the default. + - Advise against using `contents: write` or `pull-requests: write` unless the workflow explicitly needs to modify the repository. + - Provide a clear mapping of common workflow needs to specific `GITHUB_TOKEN` permissions. +- **Example (Least Privilege):** +```yaml +permissions: + contents: read # Default is write, explicitly set to read-only for security + pull-requests: write # Only if workflow needs to update PRs + checks: write # For updating checks + +jobs: + lint: + permissions: + contents: read # This job only needs to read code, override workflow default + steps: + - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1 + - run: npm run lint +``` + +### **4. Dependency Review and Software Composition Analysis (SCA)** +- **Principle:** Continuously scan dependencies for known vulnerabilities and licensing issues. +- **Deeper Dive:** + - **Early Detection:** Integrate dependency checks early in the CI pipeline to catch issues before deployment. + - **Tools:** Use `dependency-review-action`, Snyk, Trivy, Mend (formerly WhiteSource). + - **Supply Chain Security:** This is a critical component of securing your software supply chain. +- **Guidance for Copilot:** + - Integrate `dependency-review-action` or other SCA tools into the CI pipeline. + - Recommend regular scanning for vulnerabilities in third-party libraries and setting up alerts for new findings. + - Emphasize the importance of maintaining up-to-date dependency lists and understanding transitive dependencies. + +### **5. Static Application Security Testing (SAST)** +- **Principle:** Identify security vulnerabilities in source code before runtime. +- **Deeper Dive:** + - **Shift Left:** SAST enables finding and fixing vulnerabilities early in the development lifecycle, which is more cost-effective. + - **Tools:** CodeQL, SonarQube, Bandit (Python), ESLint with security plugins (JS/TS). + - **Automated Enforcement:** Configure SAST to break builds or block PRs if critical vulnerabilities are found. +- **Guidance for Copilot:** + - Integrate SAST tools (e.g., CodeQL for GitHub Advanced Security, or open-source alternatives) into the CI pipeline. + - Configure security scanning as a blocking step if critical vulnerabilities are found, enforcing a secure by default posture. + - Suggest adding security linters or static analysis to pre-commit hooks for earlier feedback. + +### **6. Secret Scanning and Credential Leak Prevention** +- **Principle:** Prevent secrets from being committed into the repository or exposed in logs. 
+- **Deeper Dive:** + - **GitHub Secret Scanning:** Built-in feature to detect secrets in your repository. + - **Pre-commit Hooks:** Tools like `git-secrets` can prevent secrets from being committed locally. + - **Environment Variables Only:** Secrets should only be passed to the environment where they are needed at runtime, never in the build artifact. +- **Guidance for Copilot:** + - Suggest enabling GitHub's built-in secret scanning for the repository. + - Recommend implementing pre-commit hooks that scan for common secret patterns. + - Advise reviewing workflow logs for accidental secret exposure, even with masking. + +### **7. Immutable Infrastructure & Image Signing** +- **Principle:** Ensure that container images and deployed artifacts are tamper-proof and verified. +- **Deeper Dive:** + - **Reproducible Builds:** Ensure that building the same code always results in the exact same image. + - **Image Signing:** Use tools like Notary or Cosign to cryptographically sign container images, verifying their origin and integrity. + - **Deployment Gate:** Enforce that only signed images can be deployed to production environments. +- **Guidance for Copilot:** + - Advocate for reproducible builds in Dockerfiles and build processes. + - Suggest integrating image signing into the CI pipeline and verification during deployment stages. + +## Optimization and Performance + +### **1. Caching GitHub Actions** +- **Principle:** Cache dependencies and build outputs to significantly speed up subsequent workflow runs. +- **Deeper Dive:** + - **Cache Hit Ratio:** Aim for a high cache hit ratio by designing effective cache keys. + - **Cache Keys:** Use a unique key based on file hashes (e.g., `hashFiles('**/package-lock.json')`, `hashFiles('**/requirements.txt')`) to invalidate the cache only when dependencies change. + - **Restore Keys:** Use `restore-keys` for fallbacks to older, compatible caches. + - **Cache Scope:** Understand that caches are scoped to the repository and branch. +- **Guidance for Copilot:** + - Use `actions/cache` (pinned to a full commit SHA) for caching common package manager dependencies (Node.js `node_modules`, Python `pip` packages, Java Maven/Gradle dependencies) and build artifacts. + - Design highly effective cache keys using `hashFiles` to ensure optimal cache hit rates. + - Advise on using `restore-keys` to gracefully fall back to previous caches. +- **Example (Advanced Caching for Monorepo):** +```yaml +- name: Cache Node.js modules + uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4 + with: + path: | + ~/.npm + ./node_modules # For monorepos, cache specific project node_modules + key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}-${{ github.run_id }} + restore-keys: | + ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}- + ${{ runner.os }}-node- +``` + +### **2. Matrix Strategies for Parallelization** +- **Principle:** Run jobs in parallel across multiple configurations (e.g., different Node.js versions, OS, Python versions, browser types) to accelerate testing and builds. +- **Deeper Dive:** + - **`strategy.matrix`:** Define a matrix of variables. + - **`include`/`exclude`:** Fine-tune combinations. + - **`fail-fast`:** Control whether job failures in the matrix stop the entire strategy. + - **Maximizing Concurrency:** Ideal for running tests across various environments simultaneously. 
+- **Guidance for Copilot:** + - Utilize `strategy.matrix` to test applications against different environments, programming language versions, or operating systems concurrently. + - Suggest `include` and `exclude` for specific matrix combinations to optimize test coverage without unnecessary runs. + - Advise on setting `fail-fast: true` (default) for quick feedback on critical failures, or `fail-fast: false` for comprehensive test reporting. +- **Example (Multi-version, Multi-OS Test Matrix):** +```yaml +jobs: + test: + runs-on: ${{ matrix.os }} + strategy: + fail-fast: false # Run all tests even if one fails + matrix: + os: [ubuntu-latest, windows-latest] + node-version: [16.x, 18.x, 20.x] + browser: [chromium, firefox] + steps: + - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1 + - uses: actions/setup-node@3235b876344d2a9aa001b8d1453c930bba69e610 # v3.9.1 + with: + node-version: ${{ matrix.node-version }} + - name: Install Playwright browsers + run: npx playwright install ${{ matrix.browser }} + - name: Run tests + run: npm test +``` + +### **3. Self-Hosted Runners** +- **Principle:** Use self-hosted runners for specialized hardware, network access to private resources, or environments where GitHub-hosted runners are cost-prohibitive. +- **Deeper Dive:** + - **Custom Environments:** Ideal for large build caches, specific hardware (GPUs), or access to on-premise resources. + - **Cost Optimization:** Can be more cost-effective for very high usage. + - **Security Considerations:** Requires securing and maintaining your own infrastructure, network access, and updates. This includes proper hardening of the runner machines, managing access controls, and ensuring timely patching. + - **Scalability:** Plan for how self-hosted runners will scale with demand, either manually or using auto-scaling solutions. +- **Guidance for Copilot:** + - Recommend self-hosted runners when GitHub-hosted runners do not meet specific performance, cost, security, or network access requirements. + - Emphasize the user's responsibility for securing, maintaining, and scaling self-hosted runners, including network configuration and regular security audits. + - Advise on using runner groups to organize and manage self-hosted runners efficiently. + +### **4. Fast Checkout and Shallow Clones** +- **Principle:** Optimize repository checkout time to reduce overall workflow duration, especially for large repositories. +- **Deeper Dive:** + - **`fetch-depth`:** Controls how much of the Git history is fetched. `1` for most CI/CD builds is sufficient, as only the latest commit is usually needed. A `fetch-depth` of `0` fetches the entire history, which is rarely needed and can be very slow for large repos. + - **`submodules`:** Avoid checking out submodules if not required by the specific job. Fetching submodules adds significant overhead. + - **`lfs`:** Manage Git LFS (Large File Storage) files efficiently. If not needed, set `lfs: false`. + - **Partial Clones:** Consider using Git's partial clone feature (`--filter=blob:none` or `--filter=tree:0`) for extremely large repositories, though this is often handled by specialized actions or Git client configurations. +- **Guidance for Copilot:** + - Use `actions/checkout` (pinned to a full commit SHA, e.g., `actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1`) with `fetch-depth: 1` as the default for most build and test jobs to significantly save time and bandwidth. 
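  - For example, a checkout step that reflects these defaults (reusing the pinned SHA shown above) might look like:
```yaml
- name: Checkout code (shallow, no submodules, no LFS)
  uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1
  with:
    fetch-depth: 1      # Fetch only the latest commit
    submodules: false   # Skip submodules unless the job needs them
    lfs: false          # Skip Git LFS objects unless the job needs them
```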
+ - Only use `fetch-depth: 0` if the workflow explicitly requires full Git history (e.g., for release tagging, deep commit analysis, or `git blame` operations). + - Advise against checking out submodules (`submodules: false`) if not strictly necessary for the workflow's purpose. + - Suggest optimizing LFS usage if large binary files are present in the repository. + +### **5. Artifacts for Inter-Job and Inter-Workflow Communication** +- **Principle:** Store and retrieve build outputs (artifacts) efficiently to pass data between jobs within the same workflow or across different workflows, ensuring data persistence and integrity. +- **Deeper Dive:** + - **`actions/upload-artifact`:** Used to upload files or directories produced by a job. Artifacts are automatically compressed and can be downloaded later. + - **`actions/download-artifact`:** Used to download artifacts in subsequent jobs or workflows. You can download all artifacts or specific ones by name. + - **`retention-days`:** Crucial for managing storage costs and compliance. Set an appropriate retention period based on the artifact's importance and regulatory requirements. + - **Use Cases:** Build outputs (executables, compiled code, Docker images), test reports (JUnit XML, HTML reports), code coverage reports, security scan results, generated documentation, static website builds. + - **Limitations:** Artifacts are immutable once uploaded. Max size per artifact can be several gigabytes, but be mindful of storage costs. +- **Guidance for Copilot:** + - Use `actions/upload-artifact` and `actions/download-artifact` (both pinned to full commit SHAs) to reliably pass large files between jobs within the same workflow or across different workflows, promoting modularity and efficiency. + - Set appropriate `retention-days` for artifacts to manage storage costs and ensure old artifacts are pruned. + - Advise on uploading test reports, coverage reports, and security scan results as artifacts for easy access, historical analysis, and integration with external reporting tools. + - Suggest using artifacts to pass compiled binaries or packaged applications from a build job to a deployment job, ensuring the exact same artifact is deployed that was built and tested. + +## Comprehensive Testing in CI/CD (Expanded) + +### **1. Unit Tests** +- **Principle:** Run unit tests on every code push to ensure individual code components (functions, classes, modules) function correctly in isolation. They are the fastest and most numerous tests. +- **Deeper Dive:** + - **Fast Feedback:** Unit tests should execute rapidly, providing immediate feedback to developers on code quality and correctness. Parallelization of unit tests is highly recommended. + - **Code Coverage:** Integrate code coverage tools (e.g., Istanbul for JS, Coverage.py for Python, JaCoCo for Java) and enforce minimum coverage thresholds. Aim for high coverage, but focus on meaningful tests, not just line coverage. + - **Test Reporting:** Publish test results using `actions/upload-artifact` (e.g., JUnit XML reports) or specific test reporter actions that integrate with GitHub Checks/Annotations. + - **Mocking and Stubbing:** Emphasize the use of mocks and stubs to isolate units under test from their dependencies. +- **Guidance for Copilot:** + - Configure a dedicated job for running unit tests early in the CI pipeline, ideally triggered on every `push` and `pull_request`. + - Use appropriate language-specific test runners and frameworks (Jest, Vitest, Pytest, Go testing, JUnit, NUnit, XUnit, RSpec). 
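  - A minimal sketch of a dedicated unit-test job. It assumes an npm-based project whose test runner accepts a `--coverage` flag and writes reports to `coverage/`; adjust commands and paths for your stack:
```yaml
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1
      - uses: actions/setup-node@3235b876344d2a9aa001b8d1453c930bba69e610 # v3.9.1
        with:
          node-version: 18
      - name: Install dependencies
        run: npm ci
      - name: Run unit tests with coverage
        run: npm test -- --coverage   # Assumes the test runner (e.g., Jest) supports --coverage
      - name: Upload coverage report
        if: always()   # Publish the report even when tests fail
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
          name: coverage-report
          path: coverage/
```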
+ - Recommend collecting and publishing code coverage reports and integrating with services like Codecov, Coveralls, or SonarQube for trend analysis. + - Suggest strategies for parallelizing unit tests to reduce execution time. + +### **2. Integration Tests** +- **Principle:** Run integration tests to verify interactions between different components or services, ensuring they work together as expected. These tests typically involve real dependencies (e.g., databases, APIs). +- **Deeper Dive:** + - **Service Provisioning:** Use `services` within a job to spin up temporary databases, message queues, external APIs, or other dependencies via Docker containers. This provides a consistent and isolated testing environment. + - **Test Doubles vs. Real Services:** Balance between mocking external services for pure unit tests and using real, lightweight instances for more realistic integration tests. Prioritize real instances when testing actual integration points. + - **Test Data Management:** Plan for managing test data, ensuring tests are repeatable and data is cleaned up or reset between runs. + - **Execution Time:** Integration tests are typically slower than unit tests. Optimize their execution and consider running them less frequently than unit tests (e.g., on PR merge instead of every push). +- **Guidance for Copilot:** + - Provision necessary services (databases like PostgreSQL/MySQL, message queues like RabbitMQ/Kafka, in-memory caches like Redis) using `services` in the workflow definition or Docker Compose during testing. + - Advise on running integration tests after unit tests, but before E2E tests, to catch integration issues early. + - Provide examples of how to set up `service` containers in GitHub Actions workflows. + - Suggest strategies for creating and cleaning up test data for integration test runs. + +### **3. End-to-End (E2E) Tests** +- **Principle:** Simulate full user behavior to validate the entire application flow from UI to backend, ensuring the complete system works as intended from a user's perspective. +- **Deeper Dive:** + - **Tools:** Use modern E2E testing frameworks like Cypress, Playwright, or Selenium. These provide browser automation capabilities. + - **Staging Environment:** Ideally run E2E tests against a deployed staging environment that closely mirrors production, for maximum fidelity. Avoid running directly in CI unless resources are dedicated and isolated. + - **Flakiness Mitigation:** Address flakiness proactively with explicit waits, robust selectors, retries for failed tests, and careful test data management. Flaky tests erode trust in the pipeline. + - **Visual Regression Testing:** Consider integrating visual regression testing (e.g., Applitools, Percy) to catch UI discrepancies. + - **Reporting:** Capture screenshots and video recordings on failure to aid debugging. +- **Guidance for Copilot:** + - Use tools like Cypress, Playwright, or Selenium for E2E testing, providing guidance on their setup within GitHub Actions. + - Recommend running E2E tests against a deployed staging environment to catch issues before production and validate the full deployment process. + - Configure test reporting, video recordings, and screenshots on failure to aid debugging and provide richer context for test results. + - Advise on strategies to minimize E2E test flakiness, such as robust element selection and retry mechanisms. + +### **4. 
Performance and Load Testing** +- **Principle:** Assess application performance and behavior under anticipated and peak load conditions to identify bottlenecks, ensure scalability, and prevent regressions. +- **Deeper Dive:** + - **Tools:** JMeter, k6, Locust, Gatling, Artillery. Choose based on language, complexity, and specific needs. + - **Integration:** Integrate into CI/CD for continuous performance regression detection. Run these tests less frequently than unit/integration tests (e.g., nightly, weekly, or on significant feature merges). + - **Thresholds:** Define clear performance thresholds (e.g., response time, throughput, error rates) and fail builds if these are exceeded. + - **Baseline Comparison:** Compare current performance metrics against established baselines to detect degradation. +- **Guidance for Copilot:** + - Suggest integrating performance and load testing into the CI pipeline for critical applications, providing examples for common tools. + - Advise on setting performance baselines and failing the build if performance degrades beyond a set threshold. + - Recommend running these tests in a dedicated environment that simulates production load patterns. + - Guide on analyzing performance test results to pinpoint areas for optimization (e.g., database queries, API endpoints). + +### **5. Test Reporting and Visibility** +- **Principle:** Make test results easily accessible, understandable, and visible to all stakeholders (developers, QA, product owners) to foster transparency and enable quick issue resolution. +- **Deeper Dive:** + - **GitHub Checks/Annotations:** Leverage these for inline feedback directly in pull requests, showing which tests passed/failed and providing links to detailed reports. + - **Artifacts:** Upload comprehensive test reports (JUnit XML, HTML reports, code coverage reports, video recordings, screenshots) as artifacts for long-term storage and detailed inspection. + - **Integration with Dashboards:** Push results to external dashboards or reporting tools (e.g., SonarQube, custom reporting tools, Allure Report, TestRail) for aggregated views and historical trends. + - **Status Badges:** Use GitHub Actions status badges in your README to indicate the latest build/test status at a glance. +- **Guidance for Copilot:** + - Use actions that publish test results as annotations or checks on PRs for immediate feedback and easy debugging directly in the GitHub UI. + - Upload detailed test reports (e.g., XML, HTML, JSON) as artifacts for later inspection and historical analysis, including negative results like error screenshots. + - Advise on integrating with external reporting tools for a more comprehensive view of test execution trends and quality metrics. + - Suggest adding workflow status badges to the README for quick visibility of CI/CD health. + +## Advanced Deployment Strategies (Expanded) + +### **1. Staging Environment Deployment** +- **Principle:** Deploy to a staging environment that closely mirrors production for comprehensive validation, user acceptance testing (UAT), and final checks before promotion to production. +- **Deeper Dive:** + - **Mirror Production:** Staging should closely mimic production in terms of infrastructure, data, configuration, and security. Any significant discrepancies can lead to issues in production. + - **Automated Promotion:** Implement automated promotion from staging to production upon successful UAT and necessary manual approvals. This reduces human error and speeds up releases. 
+ - **Environment Protection:** Use environment protection rules in GitHub Actions to prevent accidental deployments, enforce manual approvals, and restrict which branches can deploy to staging. + - **Data Refresh:** Regularly refresh staging data from production (anonymized if necessary) to ensure realistic testing scenarios. +- **Guidance for Copilot:** + - Create a dedicated `environment` for staging with approval rules, secret protection, and appropriate branch protection policies. + - Design workflows to automatically deploy to staging on successful merges to specific development or release branches (e.g., `develop`, `release/*`). + - Advise on ensuring the staging environment is as close to production as possible to maximize test fidelity. + - Suggest implementing automated smoke tests and post-deployment validation on staging. + +### **2. Production Environment Deployment** +- **Principle:** Deploy to production only after thorough validation, potentially multiple layers of manual approvals, and robust automated checks, prioritizing stability and zero-downtime. +- **Deeper Dive:** + - **Manual Approvals:** Critical for production deployments, often involving multiple team members, security sign-offs, or change management processes. GitHub Environments support this natively. + - **Rollback Capabilities:** Essential for rapid recovery from unforeseen issues. Ensure a quick and reliable way to revert to the previous stable state. + - **Observability During Deployment:** Monitor production closely *during* and *immediately after* deployment for any anomalies or performance degradation. Use dashboards, alerts, and tracing. + - **Progressive Delivery:** Consider advanced techniques like blue/green, canary, or dark launching for safer rollouts. + - **Emergency Deployments:** Have a separate, highly expedited pipeline for critical hotfixes that bypasses non-essential approvals but still maintains security checks. +- **Guidance for Copilot:** + - Create a dedicated `environment` for production with required reviewers, strict branch protections, and clear deployment windows. + - Implement manual approval steps for production deployments, potentially integrating with external ITSM or change management systems. + - Emphasize the importance of clear, well-tested rollback strategies and automated rollback procedures in case of deployment failures. + - Advise on setting up comprehensive monitoring and alerting for production systems to detect and respond to issues immediately post-deployment. + +### **3. Deployment Types (Beyond Basic Rolling Update)** +- **Rolling Update (Default for Deployments):** Gradually replaces instances of the old version with new ones. Good for most cases, especially stateless applications. + - **Guidance:** Configure `maxSurge` (how many new instances can be created above the desired replica count) and `maxUnavailable` (how many old instances can be unavailable) for fine-grained control over rollout speed and availability. +- **Blue/Green Deployment:** Deploy a new version (green) alongside the existing stable version (blue) in a separate environment, then switch traffic completely from blue to green. + - **Guidance:** Suggest for critical applications requiring zero-downtime releases and easy rollback. Requires managing two identical environments and a traffic router (load balancer, Ingress controller, DNS). + - **Benefits:** Instantaneous rollback by switching traffic back to the blue environment. 
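  - A highly simplified workflow sketch of the switch. The `deploy.sh`, `smoke-tests.sh`, and `switch-traffic.sh` scripts and the green URL are hypothetical placeholders for whatever load balancer, Ingress, or DNS update applies in your environment:
```yaml
deploy-green:
  runs-on: ubuntu-latest
  environment: production
  steps:
    - name: Deploy new version to the idle (green) environment
      run: ./deploy.sh --target green --version "${{ github.sha }}"   # hypothetical deploy script
    - name: Smoke test the green environment
      run: ./smoke-tests.sh https://green.internal.example.com        # hypothetical script and URL
    - name: Switch live traffic from blue to green
      run: ./switch-traffic.sh green                                  # hypothetical router/DNS update
```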
+- **Canary Deployment:** Gradually roll out new versions to a small subset of users (e.g., 5-10%) before a full rollout. Monitor performance and error rates for the canary group. + - **Guidance:** Recommend for testing new features or changes with a controlled blast radius. Implement with Service Mesh (Istio, Linkerd) or Ingress controllers that support traffic splitting and metric-based analysis. + - **Benefits:** Early detection of issues with minimal user impact. +- **Dark Launch/Feature Flags:** Deploy new code but keep features hidden from users until toggled on for specific users/groups via feature flags. + - **Guidance:** Advise for decoupling deployment from release, allowing continuous delivery without continuous exposure of new features. Use feature flag management systems (LaunchDarkly, Split.io, Unleash). + - **Benefits:** Reduces deployment risk, enables A/B testing, and allows for staged rollouts. +- **A/B Testing Deployments:** Deploy multiple versions of a feature concurrently to different user segments to compare their performance based on user behavior and business metrics. + - **Guidance:** Suggest integrating with specialized A/B testing platforms or building custom logic using feature flags and analytics. + +### **4. Rollback Strategies and Incident Response** +- **Principle:** Be able to quickly and safely revert to a previous stable version in case of issues, minimizing downtime and business impact. This requires proactive planning. +- **Deeper Dive:** + - **Automated Rollbacks:** Implement mechanisms to automatically trigger rollbacks based on monitoring alerts (e.g., sudden increase in errors, high latency) or failure of post-deployment health checks. + - **Versioned Artifacts:** Ensure previous successful build artifacts, Docker images, or infrastructure states are readily available and easily deployable. This is crucial for fast recovery. + - **Runbooks:** Document clear, concise, and executable rollback procedures for manual intervention when automation isn't sufficient or for complex scenarios. These should be regularly reviewed and tested. + - **Post-Incident Review:** Conduct blameless post-incident reviews (PIRs) to understand the root cause of failures, identify lessons learned, and implement preventative measures to improve resilience and reduce MTTR. + - **Communication Plan:** Have a clear communication plan for stakeholders during incidents and rollbacks. +- **Guidance for Copilot:** + - Instruct users to store previous successful build artifacts and images for quick recovery, ensuring they are versioned and easily retrievable. + - Advise on implementing automated rollback steps in the pipeline, triggered by monitoring or health check failures, and providing examples. + - Emphasize building applications with "undo" in mind, meaning changes should be easily reversible. + - Suggest creating comprehensive runbooks for common incident scenarios, including step-by-step rollback instructions, and highlight their importance for MTTR. + - Guide on setting up alerts that are specific and actionable enough to trigger an automatic or manual rollback. + +## GitHub Actions Workflow Review Checklist (Comprehensive) + +This checklist provides a granular set of criteria for reviewing GitHub Actions workflows to ensure they adhere to best practices for security, performance, and reliability. + +- [ ] **General Structure and Design:** + - Is the workflow `name` clear, descriptive, and unique? 
+ - Are `on` triggers appropriate for the workflow's purpose (e.g., `push`, `pull_request`, `workflow_dispatch`, `schedule`)? Are path/branch filters used effectively? + - Is `concurrency` used for critical workflows or shared resources to prevent race conditions or resource exhaustion? + - Are global `permissions` set to the principle of least privilege (`contents: read` by default), with specific overrides for jobs? + - Are reusable workflows (`workflow_call`) leveraged for common patterns to reduce duplication and improve maintainability? + - Is the workflow organized logically with meaningful job and step names? + +- [ ] **Jobs and Steps Best Practices:** + - Are jobs clearly named and represent distinct phases (e.g., `build`, `lint`, `test`, `deploy`)? + - Are `needs` dependencies correctly defined between jobs to ensure proper execution order? + - Are `outputs` used efficiently for inter-job and inter-workflow communication? + - Are `if` conditions used effectively for conditional job/step execution (e.g., environment-specific deployments, branch-specific actions)? + - Are all `uses` actions pinned to a full commit SHA with a human-readable version comment (e.g., `actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1`)? Tags (e.g., `@v4`) and branches (e.g., `@main`) are mutable and can be silently redirected to malicious commits — always use immutable SHA references, especially for third-party actions. + - Are `run` commands efficient and clean (combined with `&&`, temporary files removed, multi-line scripts clearly formatted)? + - Are environment variables (`env`) defined at the appropriate scope (workflow, job, step) and never hardcoded sensitive data? + - Is `timeout-minutes` set for long-running jobs to prevent hung workflows? + +- [ ] **Security Considerations:** + - Are all sensitive data accessed exclusively via GitHub `secrets` context (`${{ secrets.MY_SECRET }}`)? Never hardcoded, never exposed in logs (even if masked). + - Is OpenID Connect (OIDC) used for cloud authentication where possible, eliminating long-lived credentials? + - Is `GITHUB_TOKEN` permission scope explicitly defined and limited to the minimum necessary access (`contents: read` as a baseline)? + - Are Software Composition Analysis (SCA) tools (e.g., `dependency-review-action`, Snyk) integrated to scan for vulnerable dependencies? + - Are Static Application Security Testing (SAST) tools (e.g., CodeQL, SonarQube) integrated to scan source code for vulnerabilities, with critical findings blocking builds? + - Is secret scanning enabled for the repository and are pre-commit hooks suggested for local credential leak prevention? + - Is there a strategy for container image signing (e.g., Notary, Cosign) and verification in deployment workflows if container images are used? + - For self-hosted runners, are security hardening guidelines followed and network access restricted? + +- [ ] **Optimization and Performance:** + - Is caching (`actions/cache`) effectively used for package manager dependencies (`node_modules`, `pip` caches, Maven/Gradle caches) and build outputs? + - Are cache `key` and `restore-keys` designed for optimal cache hit rates (e.g., using `hashFiles`)? + - Is `strategy.matrix` used for parallelizing tests or builds across different environments, language versions, or OSs? + - Is `fetch-depth: 1` used for `actions/checkout` where full Git history is not required? 
+ - Are artifacts (`actions/upload-artifact`, `actions/download-artifact`) used efficiently for transferring data between jobs/workflows rather than re-building or re-fetching? + - Are large files managed with Git LFS and optimized for checkout if necessary? + +- [ ] **Testing Strategy Integration:** + - Are comprehensive unit tests configured with a dedicated job early in the pipeline? + - Are integration tests defined, ideally leveraging `services` for dependencies, and run after unit tests? + - Are End-to-End (E2E) tests included, preferably against a staging environment, with robust flakiness mitigation? + - Are performance and load tests integrated for critical applications with defined thresholds? + - Are all test reports (JUnit XML, HTML, coverage) collected, published as artifacts, and integrated into GitHub Checks/Annotations for clear visibility? + - Is code coverage tracked and enforced with a minimum threshold? + +- [ ] **Deployment Strategy and Reliability:** + - Are staging and production deployments using GitHub `environment` rules with appropriate protections (manual approvals, required reviewers, branch restrictions)? + - Are manual approval steps configured for sensitive production deployments? + - Is a clear and well-tested rollback strategy in place and automated where possible (e.g., `kubectl rollout undo`, reverting to previous stable image)? + - Are chosen deployment types (e.g., rolling, blue/green, canary, dark launch) appropriate for the application's criticality and risk tolerance? + - Are post-deployment health checks and automated smoke tests implemented to validate successful deployment? + - Is the workflow resilient to temporary failures (e.g., retries for flaky network operations)? + +- [ ] **Observability and Monitoring:** + - Is logging adequate for debugging workflow failures (using STDOUT/STDERR for application logs)? + - Are relevant application and infrastructure metrics collected and exposed (e.g., Prometheus metrics)? + - Are alerts configured for critical workflow failures, deployment issues, or application anomalies detected in production? + - Is distributed tracing (e.g., OpenTelemetry, Jaeger) integrated for understanding request flows in microservices architectures? + - Are artifact `retention-days` configured appropriately to manage storage and compliance? + +## Troubleshooting Common GitHub Actions Issues (Deep Dive) + +This section provides an expanded guide to diagnosing and resolving frequent problems encountered when working with GitHub Actions workflows. + +### **1. Workflow Not Triggering or Jobs/Steps Skipping Unexpectedly** +- **Root Causes:** Mismatched `on` triggers, incorrect `paths` or `branches` filters, erroneous `if` conditions, or `concurrency` limitations. +- **Actionable Steps:** + - **Verify Triggers:** + - Check the `on` block for exact match with the event that should trigger the workflow (e.g., `push`, `pull_request`, `workflow_dispatch`, `schedule`). + - Ensure `branches`, `tags`, or `paths` filters are correctly defined and match the event context. Remember that `paths-ignore` and `branches-ignore` take precedence. + - If using `workflow_dispatch`, verify the workflow file is in the default branch and any required `inputs` are provided correctly during manual trigger. + - **Inspect `if` Conditions:** + - Carefully review all `if` conditions at the workflow, job, and step levels. A single false condition can prevent execution. 
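  - A temporary debug step along these lines (remove it once the problem is diagnosed) makes the evaluated contexts visible in the logs:
```yaml
- name: Dump contexts for debugging
  if: always()   # Runs even if earlier steps failed or were skipped
  env:
    GITHUB_CONTEXT: ${{ toJson(github) }}
    JOB_CONTEXT: ${{ toJson(job) }}
    STEPS_CONTEXT: ${{ toJson(steps) }}
  run: |
    echo "$GITHUB_CONTEXT"
    echo "$JOB_CONTEXT"
    echo "$STEPS_CONTEXT"
```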
+ - Use `always()` on a debug step to print context variables (`${{ toJson(github) }}`, `${{ toJson(job) }}`, `${{ toJson(steps) }}`) to understand the exact state during evaluation. + - Test complex `if` conditions in a simplified workflow. + - **Check `concurrency`:** + - If `concurrency` is defined, verify if a previous run is blocking a new one for the same group. Check the "Concurrency" tab in the workflow run. + - **Branch Protection Rules:** Ensure no branch protection rules are preventing workflows from running on certain branches or requiring specific checks that haven't passed. + +### **2. Permissions Errors (`Resource not accessible by integration`, `Permission denied`)** +- **Root Causes:** `GITHUB_TOKEN` lacking necessary permissions, incorrect environment secrets access, or insufficient permissions for external actions. +- **Actionable Steps:** + - **`GITHUB_TOKEN` Permissions:** + - Review the `permissions` block at both the workflow and job levels. Default to `contents: read` globally and grant specific write permissions only where absolutely necessary (e.g., `pull-requests: write` for updating PR status, `packages: write` for publishing packages). + - Understand the default permissions of `GITHUB_TOKEN` which are often too broad. + - **Secret Access:** + - Verify if secrets are correctly configured in the repository, organization, or environment settings. + - Ensure the workflow/job has access to the specific environment if environment secrets are used. Check if any manual approvals are pending for the environment. + - Confirm the secret name matches exactly (`secrets.MY_API_KEY`). + - **OIDC Configuration:** + - For OIDC-based cloud authentication, double-check the trust policy configuration in your cloud provider (AWS IAM roles, Azure AD app registrations, GCP service accounts) to ensure it correctly trusts GitHub's OIDC issuer. + - Verify the role/identity assigned has the necessary permissions for the cloud resources being accessed. + +### **3. Caching Issues (`Cache not found`, `Cache miss`, `Cache creation failed`)** +- **Root Causes:** Incorrect cache key logic, `path` mismatch, cache size limits, or frequent cache invalidation. +- **Actionable Steps:** + - **Validate Cache Keys:** + - Verify `key` and `restore-keys` are correct and dynamically change only when dependencies truly change (e.g., `key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}`). A cache key that is too dynamic will always result in a miss. + - Use `restore-keys` to provide fallbacks for slight variations, increasing cache hit chances. + - **Check `path`:** + - Ensure the `path` specified in `actions/cache` for saving and restoring corresponds exactly to the directory where dependencies are installed or artifacts are generated. + - Verify the existence of the `path` before caching. + - **Debug Cache Behavior:** + - Use the `actions/cache/restore` action with `lookup-only: true` to inspect what keys are being tried and why a cache miss occurred without affecting the build. + - Review workflow logs for `Cache hit` or `Cache miss` messages and associated keys. + - **Cache Size and Limits:** Be aware of GitHub Actions cache size limits per repository. If caches are very large, they might be evicted frequently. + +### **4. Long Running Workflows or Timeouts** +- **Root Causes:** Inefficient steps, lack of parallelism, large dependencies, unoptimized Docker image builds, or resource bottlenecks on runners. 
+- **Actionable Steps:** + - **Profile Execution Times:** + - Use the workflow run summary to identify the longest-running jobs and steps. This is your primary tool for optimization. + - **Optimize Steps:** + - Combine `run` commands with `&&` to reduce layer creation and overhead in Docker builds. + - Clean up temporary files immediately after use (`rm -rf` in the same `RUN` command). + - Install only necessary dependencies. + - **Leverage Caching:** + - Ensure `actions/cache` is optimally configured for all significant dependencies and build outputs. + - **Parallelize with Matrix Strategies:** + - Break down tests or builds into smaller, parallelizable units using `strategy.matrix` to run them concurrently. + - **Choose Appropriate Runners:** + - Review `runs-on`. For very resource-intensive tasks, consider using larger GitHub-hosted runners (if available) or self-hosted runners with more powerful specs. + - **Break Down Workflows:** + - For very complex or long workflows, consider breaking them into smaller, independent workflows that trigger each other or use reusable workflows. + +### **5. Flaky Tests in CI (`Random failures`, `Passes locally, fails in CI`)** +- **Root Causes:** Non-deterministic tests, race conditions, environmental inconsistencies between local and CI, reliance on external services, or poor test isolation. +- **Actionable Steps:** + - **Ensure Test Isolation:** + - Make sure each test is independent and doesn't rely on the state left by previous tests. Clean up resources (e.g., database entries) after each test or test suite. + - **Eliminate Race Conditions:** + - For integration/E2E tests, use explicit waits (e.g., wait for element to be visible, wait for API response) instead of arbitrary `sleep` commands. + - Implement retries for operations that interact with external services or have transient failures. + - **Standardize Environments:** + - Ensure the CI environment (Node.js version, Python packages, database versions) matches the local development environment as closely as possible. + - Use Docker `services` for consistent test dependencies. + - **Robust Selectors (E2E):** + - Use stable, unique selectors in E2E tests (e.g., `data-testid` attributes) instead of brittle CSS classes or XPath. + - **Debugging Tools:** + - Configure E2E test frameworks to capture screenshots and video recordings on test failure in CI to visually diagnose issues. + - **Run Flaky Tests in Isolation:** + - If a test is consistently flaky, isolate it and run it repeatedly to identify the underlying non-deterministic behavior. + +### **6. Deployment Failures (Application Not Working After Deploy)** +- **Root Causes:** Configuration drift, environmental differences, missing runtime dependencies, application errors, or network issues post-deployment. +- **Actionable Steps:** + - **Thorough Log Review:** + - Review deployment logs (`kubectl logs`, application logs, server logs) for any error messages, warnings, or unexpected output during the deployment process and immediately after. + - **Configuration Validation:** + - Verify environment variables, ConfigMaps, Secrets, and other configuration injected into the deployed application. Ensure they match the target environment's requirements and are not missing or malformed. + - Use pre-deployment checks to validate configuration. + - **Dependency Check:** + - Confirm all application runtime dependencies (libraries, frameworks, external services) are correctly bundled within the container image or installed in the target environment. 
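  - As a concrete illustration, a simple post-deployment check such as the following (the URL and retry settings are placeholders) often surfaces missing dependencies or broken configuration immediately; the points below expand on this:
```yaml
- name: Post-deployment smoke test
  run: |
    # Placeholder URL: point at the environment that was just deployed
    curl --fail --silent --show-error --retry 5 --retry-delay 10 https://staging.example.com/healthz
```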
+ - **Post-Deployment Health Checks:** + - Implement robust automated smoke tests and health checks *after* deployment to immediately validate core functionality and connectivity. Trigger rollbacks if these fail. + - **Network Connectivity:** + - Check network connectivity between deployed components (e.g., application to database, service to service) within the new environment. Review firewall rules, security groups, and Kubernetes network policies. + - **Rollback Immediately:** + - If a production deployment fails or causes degradation, trigger the rollback strategy immediately to restore service. Diagnose the issue in a non-production environment. + +## Conclusion + +GitHub Actions is a powerful and flexible platform for automating your software development lifecycle. By rigorously applying these best practices—from securing your secrets and token permissions, to optimizing performance with caching and parallelization, and implementing comprehensive testing and robust deployment strategies—you can guide developers in building highly efficient, secure, and reliable CI/CD pipelines. Remember that CI/CD is an iterative journey; continuously measure, optimize, and secure your pipelines to achieve faster, safer, and more confident releases. Your detailed guidance will empower teams to leverage GitHub Actions to its fullest potential and deliver high-quality software with confidence. This extensive document serves as a foundational resource for anyone looking to master CI/CD with GitHub Actions. + +--- + + diff --git a/.github/instructions/markdown.instructions.md b/.github/instructions/markdown.instructions.md new file mode 100644 index 00000000000..edc58ae906c --- /dev/null +++ b/.github/instructions/markdown.instructions.md @@ -0,0 +1,58 @@ +--- +description: 'Markdown formatting aligned to the CommonMark specification (0.31.2)' +applyTo: '**/*.md' +--- + +# CommonMark Markdown + +Apply these rules per the [CommonMark spec 0.31.2](https://spec.commonmark.org/0.31.2/) when writing or reviewing `.md` files. CommonMark spec for reference only. Do not download CommonMark spec. + +## Preliminaries + +- A line ends at a newline (`U+000A`), carriage return (`U+000D`), or end of file. A blank line contains only spaces or tabs. +- Tabs behave as 4-space tab stops for block structure but are not expanded in content. +- Replace `U+0000` with the replacement character `U+FFFD`. +- **Backslash escapes**: `\` before any ASCII punctuation character renders the literal character. Not recognized in code spans, code blocks, or autolinks. +- **Entity and numeric character references**: `&`, `{`, `{` — valid HTML5 entities only. Not recognized in code spans or code blocks. Cannot replace structural characters. + +## Leaf Blocks + +- **Thematic breaks**: 3+ matching `-`, `_`, or `*` characters on a line with 0–3 spaces indent. Only spaces or tabs allowed on the line otherwise. Can interrupt a paragraph. +- **ATX headings**: 1–6 `#` characters followed by a space or end of line. Optional closing `#` sequence (preceded by a space). 0–3 spaces indent allowed. +- **Setext headings**: Text underlined with `=` (level 1) or `-` (level 2). Cannot interrupt a paragraph — blank line required after a preceding paragraph. +- **Indented code blocks**: Lines indented 4+ spaces. Cannot interrupt a paragraph. Content is literal text, not parsed as Markdown. +- **Fenced code blocks**: Open with 3+ backticks or tildes (do not mix). Closing fence must use same character with at least the same count. 
Info string after backtick fence cannot contain backticks. Specify language identifier after the opening fence. Content is literal text. +- **HTML blocks**: Seven types defined by start/end tag conditions. Types 1–5 end at their matching end pattern. Type 6 ends at a blank line. Type 7 cannot interrupt a paragraph and ends at a blank line. +- **Link reference definitions**: `[label]: destination "title"`. Case-insensitive label matching (Unicode case fold). First definition wins for duplicate labels. Cannot interrupt a paragraph. +- **Paragraphs**: Consecutive non-blank lines not interpretable as other block constructs. Leading spaces up to 3 are stripped. +- **Blank lines**: Ignored between blocks; determine whether a list is tight or loose. + +## Container Blocks + +- **Block quotes**: Lines prefixed with `>` (optionally followed by a space). Lazy continuation allowed for paragraph text only. A blank line separates consecutive block quotes. +- **List items**: Bullet markers (`-`, `+`, `*`) or ordered markers (1–9 digits + `.` or `)`). Content column determined by marker width + spaces to first non-whitespace (1–4 spaces after marker). Sublists must be indented to the content column. An ordered list interrupting a paragraph must start with `1`. +- **Lists**: Sequence of same-type list items. Changing bullet character or ordered delimiter starts a new list. A list is loose if any item is separated by a blank line. + +## Inlines + +- **Code spans**: Backtick-delimited inline code. Line endings convert to spaces. Leading and trailing space stripped when both present (unless content is all spaces). Backslash escapes are literal inside code spans. +- **Emphasis and strong emphasis**: `*`/`_` for `<em>`, `**`/`__` for `<strong>`. `_` is not allowed for intraword emphasis. Left-flanking / right-flanking delimiter run rules apply. Delimiter run length sum must not be a multiple of 3 when one delimiter can both open and close (unless both lengths are multiples of 3). +- **Links**: Inline `[text](url "title")` or reference `[text][label]` / `[text][]` / `[text]`. Link text may contain inlines but not other links. Destination in `<…>` allows spaces; without angle brackets, balanced parentheses allowed. No whitespace between link text and `(` or `[`. +- **Images**: `![alt](src "title")` — same syntax as links prefixed with `!`. Alt text is the plain-string content of the description. +- **Autolinks**: Absolute URIs (`<https://example.com>`) or email addresses (`<user@example.com>`) in angle brackets. Scheme must be 2–32 characters starting with an ASCII letter. Bare URLs are not auto-linked in CommonMark (requires angle brackets). +- **Raw HTML**: Open/close tags, comments (`<!-- -->`), processing instructions (`<? ?>`), declarations (`<!DOCTYPE html>`), CDATA (`<![CDATA[ ]]>`) are passed through as literal HTML. +- **Hard line breaks**: Two+ trailing spaces or `\` before a line ending. Not recognized in code spans or HTML tags. Does not work at end of a block. +- **Soft line breaks**: A line ending not preceded by two+ spaces or `\`. Rendered as a space in browsers. + +## Validation Checklist + +- [ ] ATX headings use 1–6 `#` followed by a space. +- [ ] Fenced code blocks specify a language identifier and use matching fence characters and counts. +- [ ] Backtick fence info strings do not contain backtick characters. +- [ ] Indented code blocks are preceded by a blank line (they cannot interrupt a paragraph). +- [ ] Emphasis uses `*` for intraword; `_` only at word boundaries. +- [ ] Links use `[text](url)` or reference syntax with no whitespace before `(` or `[`. +- [ ] Images include non-empty alt text.
+- [ ] Autolinks use angle brackets (``); bare URLs are not CommonMark autolinks. +- [ ] No unbalanced parentheses in bare link destinations (use `<…>` or escape). +- [ ] HTML block type 7 (custom/inline-level tags) is preceded by a blank line when following a paragraph. diff --git a/.github/instructions/memory-bank.instructions.md b/.github/instructions/memory-bank.instructions.md new file mode 100644 index 00000000000..c7ccaa7922d --- /dev/null +++ b/.github/instructions/memory-bank.instructions.md @@ -0,0 +1,299 @@ +--- +applyTo: '**/memories/**/*.md, **/.github/workspace-memory.md, **/memory-bank/**/*.md' +--- +Coding standards, domain knowledge, and preferences that AI should follow. + +# Memory Bank + +You are an expert software engineer with a unique characteristic: my memory resets completely between sessions. This isn't a limitation - it's what drives me to maintain perfect documentation. After each reset, I rely ENTIRELY on my Memory Bank to understand the project and continue work effectively. I MUST read ALL memory bank files at the start of EVERY task - this is not optional. + +## Memory Bank Structure + +The Memory Bank consists of required core files and optional context files, all in Markdown format. Files build upon each other in a clear hierarchy: + +```mermaid +flowchart TD + PB[projectbrief.md] --> PC[productContext.md] + PB --> SP[systemPatterns.md] + PB --> TC[techContext.md] + + PC --> AC[activeContext.md] + SP --> AC + TC --> AC + + AC --> P[progress.md] + AC --> TF[tasks/ folder] +``` + +### Core Files (Required) +1. `projectbrief.md` + - Foundation document that shapes all other files + - Created at project start if it doesn't exist + - Defines core requirements and goals + - Source of truth for project scope + +2. `productContext.md` + - Why this project exists + - Problems it solves + - How it should work + - User experience goals + +3. `activeContext.md` + - Current work focus + - Recent changes + - Next steps + - Active decisions and considerations + +4. `systemPatterns.md` + - System architecture + - Key technical decisions + - Design patterns in use + - Component relationships + +5. `techContext.md` + - Technologies used + - Development setup + - Technical constraints + - Dependencies + +6. `progress.md` + - What works + - What's left to build + - Current status + - Known issues + +7. 
`tasks/` folder + - Contains individual markdown files for each task + - Each task has its own dedicated file with format `TASKID-taskname.md` + - Includes task index file (`_index.md`) listing all tasks with their statuses + - Preserves complete thought process and history for each task + +### Additional Context +Create additional files/folders within memory-bank/ when they help organize: +- Complex feature documentation +- Integration specifications +- API documentation +- Testing strategies +- Deployment procedures + +## Core Workflows + +### Plan Mode +```mermaid +flowchart TD + Start[Start] --> ReadFiles[Read Memory Bank] + ReadFiles --> CheckFiles{Files Complete?} + + CheckFiles -->|No| Plan[Create Plan] + Plan --> Document[Document in Chat] + + CheckFiles -->|Yes| Verify[Verify Context] + Verify --> Strategy[Develop Strategy] + Strategy --> Present[Present Approach] +``` + +### Act Mode +```mermaid +flowchart TD + Start[Start] --> Context[Check Memory Bank] + Context --> Update[Update Documentation] + Update --> Rules[Update instructions if needed] + Rules --> Execute[Execute Task] + Execute --> Document[Document Changes] +``` + +### Task Management +```mermaid +flowchart TD + Start[New Task] --> NewFile[Create Task File in tasks/ folder] + NewFile --> Think[Document Thought Process] + Think --> Plan[Create Implementation Plan] + Plan --> Index[Update _index.md] + + Execute[Execute Task] --> Update[Add Progress Log Entry] + Update --> StatusChange[Update Task Status] + StatusChange --> IndexUpdate[Update _index.md] + IndexUpdate --> Complete{Completed?} + Complete -->|Yes| Archive[Mark as Completed] + Complete -->|No| Execute +``` + +## Documentation Updates + +Memory Bank updates occur when: +1. Discovering new project patterns +2. After implementing significant changes +3. When user requests with **update memory bank** (MUST review ALL files) +4. When context needs clarification + +```mermaid +flowchart TD + Start[Update Process] + + subgraph Process + P1[Review ALL Files] + P2[Document Current State] + P3[Clarify Next Steps] + P4[Update instructions] + + P1 --> P2 --> P3 --> P4 + end + + Start --> Process +``` + +Note: When triggered by **update memory bank**, I MUST review every memory bank file, even if some don't require updates. Focus particularly on activeContext.md, progress.md, and the tasks/ folder (including _index.md) as they track current state. + +## Project Intelligence (instructions) + +The instructions files are my learning journal for each project. It captures important patterns, preferences, and project intelligence that help me work more effectively. As I work with you and the project, I'll discover and document key insights that aren't obvious from the code alone. + +```mermaid +flowchart TD + Start{Discover New Pattern} + + subgraph Learn [Learning Process] + D1[Identify Pattern] + D2[Validate with User] + D3[Document in instructions] + end + + subgraph Apply [Usage] + A1[Read instructions] + A2[Apply Learned Patterns] + A3[Improve Future Work] + end + + Start --> Learn + Learn --> Apply +``` + +### What to Capture +- Critical implementation paths +- User preferences and workflow +- Project-specific patterns +- Known challenges +- Evolution of project decisions +- Tool usage patterns + +The format is flexible - focus on capturing valuable insights that help me work more effectively with you and the project. Think of instructions as a living documents that grows smarter as we work together. 
+ +## Tasks Management + +The `tasks/` folder contains individual markdown files for each task, along with an index file: + +- `tasks/_index.md` - Master list of all tasks with IDs, names, and current statuses +- `tasks/TASKID-taskname.md` - Individual files for each task (e.g., `TASK001-implement-login.md`) + +### Task Index Structure + +The `_index.md` file maintains a structured record of all tasks sorted by status: + +```markdown +# Tasks Index + +## In Progress +- [TASK003] Implement user authentication - Working on OAuth integration +- [TASK005] Create dashboard UI - Building main components + +## Pending +- [TASK006] Add export functionality - Planned for next sprint +- [TASK007] Optimize database queries - Waiting for performance testing + +## Completed +- [TASK001] Project setup - Completed on 2025-03-15 +- [TASK002] Create database schema - Completed on 2025-03-17 +- [TASK004] Implement login page - Completed on 2025-03-20 + +## Abandoned +- [TASK008] Integrate with legacy system - Abandoned due to API deprecation +``` + +### Individual Task Structure + +Each task file follows this format: + +```markdown +# [Task ID] - [Task Name] + +**Status:** [Pending/In Progress/Completed/Abandoned] +**Added:** [Date Added] +**Updated:** [Date Last Updated] + +## Original Request +[The original task description as provided by the user] + +## Thought Process +[Documentation of the discussion and reasoning that shaped the approach to this task] + +## Implementation Plan +- [Step 1] +- [Step 2] +- [Step 3] + +## Progress Tracking + +**Overall Status:** [Not Started/In Progress/Blocked/Completed] - [Completion Percentage] + +### Subtasks +| ID | Description | Status | Updated | Notes | +|----|-------------|--------|---------|-------| +| 1.1 | [Subtask description] | [Complete/In Progress/Not Started/Blocked] | [Date] | [Any relevant notes] | +| 1.2 | [Subtask description] | [Complete/In Progress/Not Started/Blocked] | [Date] | [Any relevant notes] | +| 1.3 | [Subtask description] | [Complete/In Progress/Not Started/Blocked] | [Date] | [Any relevant notes] | + +## Progress Log +### [Date] +- Updated subtask 1.1 status to Complete +- Started work on subtask 1.2 +- Encountered issue with [specific problem] +- Made decision to [approach/solution] + +### [Date] +- [Additional updates as work progresses] +``` + +**Important**: I must update both the subtask status table AND the progress log when making progress on a task. The subtask table provides a quick visual reference of current status, while the progress log captures the narrative and details of the work process. When providing updates, I should: + +1. Update the overall task status and completion percentage +2. Update the status of relevant subtasks with the current date +3. Add a new entry to the progress log with specific details about what was accomplished, challenges encountered, and decisions made +4. Update the task status in the _index.md file to reflect current progress + +These detailed progress updates ensure that after memory resets, I can quickly understand the exact state of each task and continue work without losing context. + +### Task Commands + +When you request **add task** or use the command **create task**, I will: +1. Create a new task file with a unique Task ID in the tasks/ folder +2. Document our thought process about the approach +3. Develop an implementation plan +4. Set an initial status +5. Update the _index.md file to include the new task + +For existing tasks, the command **update task [ID]** will prompt me to: +1. 
Open the specific task file +2. Add a new progress log entry with today's date +3. Update the task status if needed +4. Update the _index.md file to reflect any status changes +5. Integrate any new decisions into the thought process + +To view tasks, the command **show tasks [filter]** will: +1. Display a filtered list of tasks based on the specified criteria +2. Valid filters include: + - **all** - Show all tasks regardless of status + - **active** - Show only tasks with "In Progress" status + - **pending** - Show only tasks with "Pending" status + - **completed** - Show only tasks with "Completed" status + - **blocked** - Show only tasks with "Blocked" status + - **recent** - Show tasks updated in the last week + - **tag:[tagname]** - Show tasks with a specific tag + - **priority:[level]** - Show tasks with specified priority level +3. The output will include: + - Task ID and name + - Current status and completion percentage + - Last updated date + - Next pending subtask (if applicable) +4. Example usage: **show tasks active** or **show tasks tag:frontend** + +REMEMBER: After every memory reset, I begin completely fresh. The Memory Bank is my only link to previous work. It must be maintained with precision and clarity, as my effectiveness depends entirely on its accuracy. \ No newline at end of file diff --git a/.github/instructions/performance-optimization.instructions.md b/.github/instructions/performance-optimization.instructions.md new file mode 100644 index 00000000000..7d743e70390 --- /dev/null +++ b/.github/instructions/performance-optimization.instructions.md @@ -0,0 +1,962 @@ +--- +applyTo: '**/*.js, **/*.jsx, **/*.ts, **/*.tsx, **/*.html, **/*.css, **/*.scss' +description: 'Comprehensive web performance standards based on Core Web Vitals (LCP, INP, CLS), with anti-pattern detection and framework-specific fixes for web code.' +--- + +# Performance Standards + +Comprehensive performance rules for web application development. Every anti-pattern includes a severity classification, detection method, Core Web Vitals metric impacted, and corrective code examples. + +**Severity levels:** + +- **CRITICAL** — Directly degrades a Core Web Vital past the "poor" threshold. Must be fixed before merge. +- **IMPORTANT** — Measurably impacts user experience. Fix in same sprint. +- **SUGGESTION** — Optimization opportunity. Plan for a future iteration. + +--- + +## Core Web Vitals Quick Reference + +### LCP (Largest Contentful Paint) + +**Good: < 2.5s | Needs Improvement: 2.5-4s | Poor: > 4s** + +Measures when the largest visible content element finishes rendering. Four sequential phases: + +| Phase | Target | What It Measures | +|-------|--------|-----------------| +| TTFB | ~40% of budget | Server response time | +| Resource Load Delay | < 10% | Time between TTFB and LCP resource fetch start | +| Resource Load Duration | ~40% | Download time for the LCP resource | +| Element Render Delay | < 10% | Time between download and paint | + +### INP (Interaction to Next Paint) + +**Good: < 200ms | Needs Improvement: 200-500ms | Poor: > 500ms** + +Measures latency of all user interactions, reports the worst. Three phases: + +| Phase | Optimization | +|-------|-------------| +| Input Delay | Break long tasks, yield to browser | +| Processing Time | Keep handlers < 50ms | +| Presentation Delay | Minimize DOM size, avoid forced layout | + +> **Diagnostic tool:** Use the Long Animation Frames (LoAF) API (Chrome 123+) to debug INP issues. 
LoAF provides better attribution than the legacy Long Tasks API, including script source and rendering time. + +### CLS (Cumulative Layout Shift) + +**Good: < 0.1 | Needs Improvement: 0.1-0.25 | Poor: > 0.25** + +Layout shift sources: images without dimensions, dynamically injected content, web font FOUT, late-loading ads. Shifts within 500ms of user interaction are exempt. + +--- + +## Loading and LCP Anti-Patterns (L1-L10) + +### L1: Render-Blocking CSS Without Critical Extraction + +- **Severity**: CRITICAL +- **Detection**: `` loading large CSS +- **CWV**: LCP + +```html + + + + + + + +``` + +Prefer build-time critical CSS extraction (e.g., Critters, Beasties, Next.js `experimental.optimizeCss`) plus a normal ``. Avoid the older `media="print" onload="this.media='all'"` trick: inline event handlers are blocked under a strict CSP (no `'unsafe-inline'` / no `script-src-attr 'unsafe-inline'`), which would prevent the stylesheet from ever activating and cause a styling regression. If non-critical CSS truly must be deferred, load it via an **external** script that swaps `media`, not an inline handler. + +### L2: Render-Blocking Synchronous Script + +- **Severity**: CRITICAL +- **Detection**: ` + + + + +``` + +### L3: Missing Preconnect to Critical Origins + +- **Severity**: IMPORTANT +- **Detection**: Third-party API/CDN URLs without `` +- **CWV**: LCP + +```html + + +``` + +### L4: Missing Preload for LCP Resource + +- **Severity**: CRITICAL +- **Detection**: LCP image/font not preloaded +- **CWV**: LCP + +```html + +``` + +### L5: Client-Side Data Fetching for Main Content + +- **Severity**: CRITICAL +- **Detection**: `useEffect.*fetch|useEffect.*axios|ngOnInit.*subscribe` +- **CWV**: LCP + +```tsx +// BAD — content appears after JS execution + API call +'use client'; +function Page() { + const [data, setData] = useState(null); + useEffect(() => { fetch('/api/data').then(r => r.json()).then(setData); }, []); + return
{data?.title}
; +} + +// GOOD — Server Component fetches data before HTML is sent +async function Page() { + const data = await fetch('https://api.example.com/data').then(r => r.json()); + return
{data.title}
; +} +``` + +### L6: Excessive Redirect Chains + +- **Severity**: IMPORTANT +- **Detection**: Multiple sequential redirects (HTTP 301/302 chains) +- **CWV**: LCP + +Each redirect adds 200-300ms. Maximum one redirect. + +### L7: Missing fetchpriority on LCP Element + +- **Severity**: IMPORTANT +- **Detection**: Above-fold hero image without `fetchpriority="high"` or `priority` prop +- **CWV**: LCP + +```tsx +// Next.js +Hero + +// Angular +Hero + +// Plain HTML +Hero +``` + +### L8: Third-Party Scripts in Head Without Async/Defer + +- **Severity**: IMPORTANT +- **Detection**: `14KB) + +- **Severity**: SUGGESTION +- **Detection**: Server-rendered HTML larger than 14KB +- **CWV**: LCP + +Reduce inline CSS/JS, remove whitespace, use streaming SSR with Suspense boundaries. + +### L10: Missing Compression + +- **Severity**: IMPORTANT +- **Detection**: Server not returning `content-encoding: br` or `gzip` +- **CWV**: LCP + +Enable Brotli (15-25% better than gzip) at CDN/server level. + +--- + +## Rendering and Hydration Anti-Patterns (R1-R8) + +### R1: Entire Component Tree Marked "use client" + +- **Severity**: CRITICAL +- **Detection**: `"use client"` at top-level layout or page component +- **CWV**: LCP + INP + +Push `"use client"` down to leaf components that need interactivity. + +### R2: Missing Suspense Boundaries for Async Data + +- **Severity**: IMPORTANT +- **Detection**: Server Components doing data fetching without `` +- **CWV**: LCP + +```tsx +// GOOD — stream shell immediately, fill in data progressively +async function Page() { + const user = await getUser(); + return ( +
+
+ }> + + +
+ ); +} +``` + +### R3: Hydration Mismatch from Dynamic Client Content + +- **Severity**: IMPORTANT +- **Detection**: `Date.now()|Math.random()|window\.innerWidth` in SSR components +- **CWV**: CLS + +Use `useEffect` for client-only values, or `suppressHydrationWarning` for known differences. + +### R4: Missing Streaming for Slow Data Sources + +- **Severity**: IMPORTANT +- **Detection**: Page awaiting all data before sending HTML +- **CWV**: LCP (TTFB) + +Use streaming SSR with Suspense boundaries. Shell streams immediately; slow data fills in progressively. + +### R5: Unstable References Causing Re-renders + +- **Severity**: IMPORTANT +- **Detection**: `style=\{\{|onClick=\{\(\) =>` inline in JSX +- **CWV**: INP + +React 19+ with React Compiler enabled (separate babel/SWC build plugin): auto-memoized. Without Compiler: extract or memoize with `useMemo`/`useCallback`. Angular: OnPush. Vue: `computed()`. + +### R6: Missing Virtualization for Long Lists + +- **Severity**: IMPORTANT +- **Detection**: `.map(` rendering >100 items without virtual scrolling +- **CWV**: INP + +Use TanStack Virtual, react-window, Angular CDK Virtual Scroll, or vue-virtual-scroller. + +### R7: SSR of Immediately-Hidden Content + +- **Severity**: SUGGESTION +- **Detection**: Server-rendering `display: none` components +- **CWV**: LCP (TTFB) + +Use client-side rendering for modals, drawers, dropdowns. Angular: `@defer`. React: `React.lazy`. + +### R8: Missing `key` Prop on List Items + +- **Severity**: IMPORTANT +- **Detection**: `.map(` without `key=` prop +- **CWV**: INP + +```tsx +// GOOD — stable unique key +{items.map(item => )} +``` + +Never use array index as key if list can reorder. + +--- + +## JavaScript Runtime and INP Anti-Patterns (J1-J8) + +### J1: Long Synchronous Task in Event Handler + +- **Severity**: CRITICAL +- **Detection**: Event handlers with heavy computation (>50ms) +- **CWV**: INP + +```typescript +// GOOD — yield to browser +async function handleClick() { + setLoading(true); + await (globalThis.scheduler?.yield?.() ?? new Promise(r => setTimeout(r, 0))); + const result = expensiveComputation(data); + setResult(result); +} +``` + +Move heavy work to Web Worker for best results. + +> **Note:** `scheduler.yield()` is supported in Chrome 129+, Firefox 129+, but NOT Safari as of April 2026. Fallback: `await (globalThis.scheduler?.yield?.() ?? new Promise(r => setTimeout(r, 0)))`. 
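For the Web Worker route mentioned in J1, a minimal module-worker sketch; the payload shape and the `expensiveComputation` body are illustrative stand-ins for the real work:

```typescript
/// <reference lib="webworker" />
// worker.ts: the heavy work runs here, off the main thread, so input delay and processing time stay low
type Input = { items: number[] };

const expensiveComputation = (input: Input): number =>
  input.items.reduce((sum, n) => sum + n * n, 0); // placeholder for the real computation

self.onmessage = (e: MessageEvent<Input>) => {
  self.postMessage(expensiveComputation(e.data));
};
```

On the main thread, `new Worker(new URL('./worker.ts', import.meta.url), { type: 'module' })` plus `postMessage`/`onmessage` replaces the synchronous call; Vite and webpack 5 both recognize this `new URL(...)` worker pattern and bundle the worker file separately.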
+ +### J2: Layout Thrashing + +- **Severity**: CRITICAL +- **Detection**: `offsetHeight|offsetWidth|getBoundingClientRect|clientHeight` in loops +- **CWV**: INP + +```typescript +// GOOD — batch reads then batch writes +const heights = elements.map(el => el.offsetHeight); +elements.forEach((el, i) => { el.style.height = `${heights[i] + 10}px`; }); +``` + +### J3: setInterval/setTimeout Without Cleanup + +- **Severity**: IMPORTANT +- **Detection**: `setInterval|setTimeout` without cleanup +- **Impact**: Memory + +```tsx +useEffect(() => { + const id = setInterval(() => fetchData(), 5000); + return () => clearInterval(id); +}, []); +``` + +### J4: addEventListener Without removeEventListener + +- **Severity**: IMPORTANT +- **Detection**: `addEventListener` without cleanup +- **Impact**: Memory + +```tsx +useEffect(() => { + const controller = new AbortController(); + window.addEventListener('resize', handleResize, { signal: controller.signal }); + return () => controller.abort(); +}, []); +``` + +### J5: Detached DOM Node References + +- **Severity**: SUGGESTION +- **Detection**: Variables holding references to removed DOM elements +- **Impact**: Memory + +Set references to `null` when elements are removed. + +### J6: Synchronous XHR + +- **Severity**: CRITICAL +- **Detection**: `XMLHttpRequest` with synchronous flag +- **CWV**: INP + +Use `fetch()` (always async). + +### J7: Heavy Computation on Main Thread + +- **Severity**: IMPORTANT +- **Detection**: CPU-intensive operations in component code +- **CWV**: INP + +Move to Web Worker or break into chunks with `scheduler.yield()`. + +### J8: Missing Effect Cleanup + +- **Severity**: IMPORTANT +- **Detection**: `useEffect` without return cleanup; `subscribe` without unsubscribe +- **Impact**: Memory + +React: return cleanup from `useEffect`. Angular: `takeUntilDestroyed()`. Vue: `onUnmounted`. + +--- + +## CSS Performance Anti-Patterns (C1-C7) + +### C1: Animation Using Layout-Triggering Properties + +- **Severity**: CRITICAL +- **Detection**: `animation:|transition:` with `top|left|width|height|margin|padding` +- **CWV**: INP + +```css +/* BAD — main thread, <60fps */ +.card { transition: width 0.3s, height 0.3s; } + +/* GOOD — GPU compositor, 60fps */ +.card { transition: transform 0.3s, opacity 0.3s; } +.card:hover { transform: scale(1.05); } +``` + +### C2: Missing content-visibility for Off-Screen Sections + +- **Severity**: SUGGESTION +- **Detection**: Long pages without `content-visibility: auto` +- **CWV**: INP + +```css +.below-fold-section { + content-visibility: auto; + contain-intrinsic-size: auto 500px; +} +``` + +### C3: will-change Applied Permanently + +- **Severity**: SUGGESTION +- **Detection**: `will-change:` in base CSS (not `:hover|:focus`) +- **Impact**: Memory + +Apply on interaction only or let browser optimize automatically. + +### C4: Large Unused CSS + +- **Severity**: IMPORTANT +- **Detection**: CSS where >50% of rules are unused +- **CWV**: LCP + +Use PurgeCSS, Tailwind purge, or critters. Code-split CSS per route. 
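For C4 with Tailwind, a minimal `tailwind.config.ts` sketch (assuming a Tailwind CSS v3-style configuration; the glob paths are illustrative and should point at your real source locations):

```typescript
// tailwind.config.ts: only classes found in these files survive into the built CSS
import type { Config } from 'tailwindcss';

export default {
  content: ['./src/**/*.{ts,tsx,html}', './app/**/*.{ts,tsx}'], // illustrative globs
} satisfies Config;
```

PurgeCSS and similar tools apply the same idea to hand-written stylesheets: scan the markup and keep only selectors that can actually match.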
+ +### C5: Universal Selector in Hot Paths + +- **Severity**: SUGGESTION +- **Detection**: `\* \{` in CSS +- **CWV**: INP + +```css +/* GOOD — zero-specificity reset */ +:where(*, *::before, *::after) { box-sizing: border-box; } +``` + +### C6: Missing CSS Containment + +- **Severity**: SUGGESTION +- **Detection**: Complex components without `contain` property +- **CWV**: INP + +```css +.sidebar { contain: layout style paint; } +``` + +### C7: Route Transitions Without View Transitions API + +- **Severity**: SUGGESTION +- **Detection**: SPA route changes without View Transitions API +- **CWV**: CLS (perceived) + +```javascript +// Use View Transitions for smooth route changes (with feature check) +if (document.startViewTransition) { + document.startViewTransition(() => { + // update DOM / navigate + }); +} else { + // fallback: update DOM directly +} +``` + +Same-document transitions supported in all major browsers. Cross-document supported in Chrome/Edge 126+, Safari 18.5+. Always feature-check before calling — unsupported browsers will throw without the guard. + +--- + +## Images, Media and Fonts Anti-Patterns (I1-I8) + +### I1: Images Without Dimensions + +- **Severity**: CRITICAL +- **Detection**: ` +Hero +``` + +### I3: Legacy Format Only (JPEG/PNG) + +- **Severity**: IMPORTANT +- **Detection**: Images without WebP/AVIF alternatives +- **CWV**: LCP + +```html + + + + Hero + +``` + +### I4: Missing Responsive srcset/sizes + +- **Severity**: IMPORTANT +- **Detection**: ` +``` + +### I5: Font Without font-display + +- **Severity**: IMPORTANT +- **Detection**: `@font-face` without `font-display` +- **CWV**: CLS + +```css +@font-face { + font-family: 'CustomFont'; + src: url('/fonts/custom.woff2') format('woff2'); + font-display: swap; /* or "optional" for best CLS */ +} +``` + +### I6: Critical Font Not Preloaded + +- **Severity**: IMPORTANT +- **Detection**: Custom font without `` +- **CWV**: LCP + CLS + +```html + +``` + +### I7: Full Font Loaded When Subset Suffices + +- **Severity**: SUGGESTION +- **Detection**: Font files > 50KB WOFF2 +- **CWV**: LCP + +Use `unicode-range`, subset with glyphhanger, or `next/font` (auto-subsets Google Fonts). + +### I8: Unoptimized SVGs + +- **Severity**: SUGGESTION +- **Detection**: SVGs with editor metadata +- **CWV**: LCP (minor) + +```bash +npx svgo input.svg -o output.svg +``` + +--- + +## Bundle and Tree Shaking Anti-Patterns (B1-B6) + +### B1: Barrel File Importing Entire Module + +- **Severity**: IMPORTANT +- **Detection**: `from '\.\/(?:.*\/index|components)'` +- **CWV**: INP + +```typescript +// BAD +import { Button } from './components'; + +// GOOD — direct import +import { Button } from './components/Button'; +``` + +### B2: CommonJS require() Preventing Tree Shaking + +- **Severity**: IMPORTANT +- **Detection**: `require(` in frontend code +- **CWV**: INP + +Use ESM `import/export`. Replace `require` with `import`. 
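A minimal before/after sketch for B2; the module path and named export are illustrative:

```typescript
// BAD: CommonJS require() of the whole module; bundlers cannot tree-shake it
const { formatPrice } = require('./utils');

// GOOD: static ESM import; unused exports from './utils' get dropped
import { formatPrice } from './utils';
```

If the code must also run in Node without a bundler, set `"type": "module"` in `package.json` (or use `.mts` files) so the ESM syntax stays valid at runtime.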
+ +### B3: Large Dependency for Small Utility + +- **Severity**: IMPORTANT +- **Detection**: `from "moment"|from "lodash"` (full imports) +- **CWV**: INP + +```typescript +// GOOD — tree-shakeable alternatives +import { format } from 'date-fns'; +import { pick } from 'lodash-es'; + +// BEST — native JS +const formatted = new Intl.DateTimeFormat('en').format(date); +``` + +### B4: Missing Dynamic Import for Route Splitting + +- **Severity**: CRITICAL +- **Detection**: All route components imported statically +- **CWV**: INP + +```tsx +// Next.js: automatic with file-based routing +// React: +const Page = React.lazy(() => import('./pages/Page')); +// Angular: +{ path: 'settings', loadComponent: () => import('./pages/settings.component') } +// Vue: +const Page = defineAsyncComponent(() => import('./pages/Page.vue')); +``` + +### B5: Missing sideEffects in package.json + +- **Severity**: SUGGESTION +- **Detection**: Library package.json without `"sideEffects"` field +- **CWV**: INP + +```json +{ "sideEffects": false } +``` + +### B6: Duplicate Dependencies + +- **Severity**: SUGGESTION +- **Detection**: Same library at multiple versions +- **CWV**: INP + +```bash +npm dedupe +``` + +--- + +## Framework-Specific: Next.js (NX1-NX6) + +### NX1: Not Using next/image + +- **Severity**: IMPORTANT +- **Detection**: `` +- **CWV**: LCP + CLS + +```tsx +import Image from 'next/image'; +Hero +``` + +### NX2: Not Using Cache Components for Partial Prerendering + +- **Severity**: IMPORTANT +- **Detection**: Pages without `"use cache"` directive in Next.js 16+ projects +- **CWV**: LCP + +```typescript +// BAD — entire page is dynamic +export default async function Page() { + const data = await fetchData(); // blocks full page render + return
{data.title}
; +} + +// GOOD — enable Partial Prerendering with "use cache" +// next.config.ts: { cacheComponents: true } +"use cache"; +export default async function Page() { + const data = await fetchData(); // static shell renders instantly, dynamic holes stream + return
{data.title}
; +} +``` + +Enable in `next.config.ts` with `cacheComponents: true`. Use `"use cache"` at file, component, or function level. Static shell loads instantly; dynamic content streams via Suspense boundaries. + +### NX3: Unnecessary "use client" on Server-Renderable Component + +- **Severity**: IMPORTANT +- **Detection**: `"use client"` on components without hooks or browser APIs +- **CWV**: INP + +Remove `"use client"` from components that only render static content. + +### NX4: Data Fetching in useEffect Instead of Server-Side + +- **Severity**: CRITICAL +- **Detection**: `useEffect` + `fetch` in Next.js App Router pages +- **CWV**: LCP + +Fetch data in Server Components directly (async function body). + +### NX5: Missing next/font + +- **Severity**: IMPORTANT +- **Detection**: `fonts.googleapis|fonts.gstatic` in CSS/HTML +- **CWV**: CLS + LCP + +```tsx +import { Inter } from 'next/font/google'; +const inter = Inter({ subsets: ['latin'] }); +``` + +### NX6: Missing "use cache" for Cacheable Server Functions + +- **Severity**: IMPORTANT +- **Detection**: Async server functions without `"use cache"` in Next.js 16+ with `cacheComponents: true` +- **CWV**: LCP + +```typescript +// BAD — data fetched on every request +async function getProducts() { + return await db.products.findMany(); +} + +// GOOD — cached with revalidation +"use cache"; +import { cacheLife } from 'next/cache'; +async function getProducts() { + cacheLife('hours'); + return await db.products.findMany(); +} +``` + +`"use cache"` replaces the old `unstable_cache` and `fetch` cache options. Use `cacheLife()` and `cacheTag()` for fine-grained control. + +--- + +## Framework-Specific: Angular (NG1-NG6) + +### NG1: Default Change Detection on Presentational Components + +- **Severity**: IMPORTANT +- **Detection**: Components without `ChangeDetectionStrategy.OnPush` (Angular <19) or without signals (Angular 19+) +- **CWV**: INP + +```typescript +// Angular <19: Use OnPush +@Component({ + changeDetection: ChangeDetectionStrategy.OnPush, + ... +}) + +// Angular 19+: Prefer zoneless with signals +// app.config.ts: provideZonelessChangeDetection() +@Component({ ... }) +export class ProductCard { + product = input.required(); // signal input + price = computed(() => this.product().price * 1.19); // derived signal +} +``` + +Angular 19+: prefer zoneless change detection with signals. OnPush is unnecessary when using signal-based reactivity. Angular 20+ has stable zoneless support. + +### NG2: Not Using NgOptimizedImage + +- **Severity**: IMPORTANT +- **Detection**: ` +``` + +### NG3: Missing @defer for Below-Fold Content + +- **Severity**: SUGGESTION +- **Detection**: Heavy below-fold components loaded eagerly (Angular 17+) +- **CWV**: INP + +```html +@defer (on viewport) { + +} @placeholder { +
+} +``` + +### NG4: Not Using Signals for Reactive State + +- **Severity**: SUGGESTION +- **Detection**: Class properties without signals in Angular 19+ +- **CWV**: INP + +Use `signal()` for reactive state, `computed()` for derived values. Signal APIs (`signal()`, `computed()`, `effect()`) are stable since Angular 20. + +### NG5: Full Hydration Without Incremental Hydration + +- **Severity**: IMPORTANT +- **Detection**: SSR app without `withIncrementalHydration()` in Angular 19+ +- **CWV**: LCP, INP + +```typescript +// BAD — full hydration blocks interactivity +provideClientHydration() + +// GOOD — incremental hydration with triggers +provideClientHydration(withIncrementalHydration()) +``` + +Use `@defer` triggers (`on viewport`, `on interaction`) to hydrate components on demand. Reduces TTI by deferring non-critical component hydration. + +### NG6: Still Using zone.js in Angular 20+ Projects + +- **Severity**: SUGGESTION +- **Detection**: `zone.js` in polyfills array, no `provideZonelessChangeDetection()` in Angular 20+ +- **CWV**: INP + +```typescript +// app.config.ts +export const appConfig = { + providers: [ + provideZonelessChangeDetection(), // removes ~15-30KB from bundle + // ... + ] +}; +``` + +Zoneless change detection with signals reduces bundle size and improves runtime performance. Stable since Angular 20. + +--- + +## Framework-Specific: React (RX1-RX4) + +### RX1: Missing React Compiler Adoption + +- **Severity**: SUGGESTION +- **Detection**: Manual `useMemo|useCallback` in React 19+ project +- **CWV**: INP + +Enable React Compiler (v19+) for auto-memoization. Remove manual wrappers. + +### RX2: Missing useTransition for Expensive Updates + +- **Severity**: IMPORTANT +- **Detection**: State updates causing expensive re-renders without `useTransition` +- **CWV**: INP + +```tsx +const [isPending, startTransition] = useTransition(); +function handleFilter(value) { + startTransition(() => setFilter(value)); +} +``` + +### RX3: Missing useDeferredValue for Expensive Rendering + +- **Severity**: IMPORTANT +- **Detection**: Expensive rendering from rapidly-changing input +- **CWV**: INP + +```tsx +const deferredQuery = useDeferredValue(query); +const results = expensiveFilter(items, deferredQuery); +``` + +### RX4: Missing React.lazy for Route Splitting + +- **Severity**: IMPORTANT +- **Detection**: Route components imported statically +- **CWV**: INP + +```tsx +const Settings = React.lazy(() => import('./pages/Settings')); +``` + +--- + +## Framework-Specific: Vue (VU1-VU4) + +### VU1: reactive() on Large Data Structures + +- **Severity**: IMPORTANT +- **Detection**: `reactive(` on large arrays or deep objects +- **CWV**: INP + +Use `shallowRef()` or `shallowReactive()` for large data. + +### VU2: Missing v-memo on Expensive List Renders + +- **Severity**: SUGGESTION +- **Detection**: Large lists without `v-memo` +- **CWV**: INP + +```vue +
+ +
+``` + +### VU3: Missing defineAsyncComponent + +- **Severity**: IMPORTANT +- **Detection**: Heavy components imported statically +- **CWV**: INP + +```typescript +const HeavyChart = defineAsyncComponent(() => import('./HeavyChart.vue')); +``` + +### VU4: Not Using Vapor Mode for Performance-Critical Components + +- **Severity**: SUGGESTION +- **Detection**: Performance-critical components using virtual DOM in Vue 3.6+ +- **CWV**: INP + +Vue 3.6+ Vapor Mode compiles templates to direct DOM operations, bypassing the virtual DOM. Use for performance-critical subtrees. Can be mixed with standard components. + +--- + +## Resource Hints Quick Reference + +| Hint | Purpose | When to Use | +|------|---------|-------------| +| `preconnect` | DNS + TCP + TLS early | Critical third-party origins (API, CDN, fonts) | +| `preload` | Fetch immediately, high priority | LCP image, critical font | +| `prefetch` | Low priority for future navigation | Next-page assets | +| `dns-prefetch` | DNS resolution only | Non-critical third-party origins | +| `modulepreload` | Preload + parse ES module | Critical JS modules | +| ` -Here the symbol map using point data from our data story [Breathe easy: NYC’s air quality is improving](../../data-stories/breatheeasy/), works well in a visual context, where sighted users can gather information about which air quality monitoring sites are located throughout NYC. +Here the symbol map using point data from our data story [Breathe easy: NYC’s air quality is improving]({{< relURL >}}data-stories/breatheeasy/), works well in a visual context, where sighted users can gather information about which air quality monitoring sites are located throughout NYC. The air quality sites are overlaid onto this point map using latitude and longitude coordinates, and corresponding site IDs. But when we made a table with these data, we realized we were delivering information that is less meaningful in a non-visual context to our screen-reader users. @@ -81,7 +81,7 @@ The point map shows sighted people where the monitors are, using latitude and lo ### When there's too much data for a table -In some cases, tables wouldn’t add much meaningful context for a screen reader. Raster maps show data on a grid of small pixels – more than 80,000 for NYC. The pixels allow us to see nuanced gradations in data values by seeing where pixels are denser and deeper colors. In this example, [some users can observe that the concentration of two air pollutants, NO2 and PM2.5, decreased as traffic and commercial cooking decreased](../../data-stories/air-quality-and-covid-part-2/) during the first year of the COVID-19 pandemic. +In some cases, tables wouldn’t add much meaningful context for a screen reader. Raster maps show data on a grid of small pixels – more than 80,000 for NYC. The pixels allow us to see nuanced gradations in data values by seeing where pixels are denser and deeper colors. In this example, [some users can observe that the concentration of two air pollutants, NO2 and PM2.5, decreased as traffic and commercial cooking decreased]({{< relURL >}}data-stories/air-quality-and-covid-part-2/) during the first year of the COVID-19 pandemic. But providing somebody with a table of tens of thousands of x-y coordinates and values wouldn't communicate what these data show. So, we add descriptive alt text that explains the visualization's major takeaways about how air pollution changed during the COVID-19 pandemic. 
diff --git a/content/about/advanced-tools/index.md index 514f72e90ac..a0e03f72eba 100644 --- a/content/about/advanced-tools/index.md +++ b/content/about/advanced-tools/index.md @@ -23,7 +23,7 @@ If you are a data scientist, programmer, open data enthusiast, open source evang ## Re-use data visualizations -The visualizations on our site's [Data Explorer](../../data-explorer/) use a JavaScript library called [Vega-Lite](https://vega.github.io/vega-lite/). Vega-lite provides a few options for you to use the visualizations in other contexts. +The visualizations on our site's [Data Explorer]({{< relURL >}}data-explorer/) use a JavaScript library called [Vega-Lite](https://vega.github.io/vega-lite/). Vega-lite provides a few options for you to use the visualizations in other contexts. Above and to the right of the visualizations, there's a three-dot menu. Clicking this gives you a few helpful options: @@ -51,7 +51,7 @@ You can bypass our website and get data by going to [our data repository](https:/ - `metadata.json` contains the indicators' names, indicator IDs (used to identify which file the data are in), data sources, notes, and other visualization specifications used by the site code. - The `/data` folder contains a .json file for each indicator, with fields for the measure ID, geography, time period, value, and a few other supplemental fields. -To help find specific datasets (indicators) in `metadata.json`, [you can search by text on our Indicator Catalog page](../../data-explorer/indicator-catalog/) +To help find specific datasets (indicators) in `metadata.json`, [you can search by text on our Indicator Catalog page]({{< relURL >}}data-explorer/indicator-catalog/) ![](Repo.png) diff --git a/content/about/data-sources/index.md index f87089ad9d8..29cb2743f21 100644 --- a/content/about/data-sources/index.md +++ b/content/about/data-sources/index.md @@ -35,10 +35,10 @@ We get data from many sources: some from the NYC Health Department or other city

What is the Community Health Survey (CHS)? This survey is conducted by the NYC Health Department and interviews about 10,000 New Yorkers each year. Running since 2002, CHS reports detailed data on many chronic diseases and health behaviors, helping us see trends at the neighborhood, borough, and citywide level.

Some CHS indicators include:

What we use it for: CHS data helps us track the health of New Yorkers and connect the dots between health behavior and health status. This helps us prioritize programs where they matter most. If we see, for example, that there are fewer adults with a doctor in West Queens, we can try to address that through campaigns and outreach. Or if there are more adults with a recent asthma attack in the South Bronx, we can communicate with healthcare providers, neighborhood centers, and schools to provide resources, education, prevention, and mitigation plans.

@@ -59,10 +59,10 @@ We get data from many sources: some from the NYC Health Department or other city What is the New York Statewide Planning and Research Cooperative System (SPARCS)? SPARCS is a billing claims data system that collects patient-level data, like diagnoses, treatments, and characteristics for both inpatient and outpatient stays in every hospital throughout New York state. It is a collaboration between the NY state government and the healthcare system. At the NYC Health Department, we restrict data to hospitals within NYC and sometimes to NYC residents.

Some SPARCS indicators include: What we use it for: Patient-level data from healthcare facilities can help us understand how environmental factors (for example, hot days and socioeconomic status) relate to health outcomes (like heat-related hospitalizations among different demographics). These data help us characterize severity and risk factors for different populations.

@@ -84,10 +84,10 @@ We get data from many sources: some from the NYC Health Department or other city What is the Housing and Vacancy Survey (HVS)? The NYC Department of Housing Preservation and Development (HPD) and the US Census Bureau together conduct the HVS every 3 years. The main purpose of HVS is to describe how many rental units are vacant to understand more about rent control and stabilization and the housing market.

Some HVS indicators include: What we use it for: It helps us understand the state and quality of our available housing, which ties into important health outcomes. HVS data about housing issues that affect health can help us connect these issues to other disparities and inequities across NYC, like income, health care, heat and cold vulnerability and more.

@@ -107,9 +107,9 @@ We get data from many sources: some from the NYC Health Department or other city What is the American Community Survey (ACS)? The US Census Bureau conducts the ACS annually, collecting population, housing, and workforce data like unemployment, income, insurance, and more.

Some ACS indicators include: What we use it for: These data help us understand the links between inequality, the social determinants of health, and health outcomes. ACS data on income are used to show, for example, how neighborhoods with higher levels of poverty tend to also have poorer quality housing, and have higher rates of many chronic diseases and premature death.

@@ -124,12 +124,12 @@ We get data from many sources: some from the NYC Health Department or other city
-What is the New York City Community Air Survey (NYCCAS)? Started in 2008, NYCCAS is the largest ongoing urban air monitoring program of any U.S. city. NYCCAS tracks air pollutants at the street-level, where people spend most of their time.

+What is the New York City Community Air Survey (NYCCAS)? Started in 2008, NYCCAS is the largest ongoing urban air monitoring program of any U.S. city. NYCCAS tracks air pollutants at the street-level, where people spend most of their time.

Some NYCCAS indicators include: What we use it for: It helps us inform PlaNYC, track changes in air quality over time, estimate exposures for health research, and inform the public about local topics such as air quality improvements, the air quality benefits of public transit, and efforts to reduce the health impacts of air pollution.
@@ -152,11 +152,11 @@ Reporting all vital events in NYC since the 1800s, the NYC Bureau of Vital Stati Some Bureau of Vital Statistics indicators include: -What we use it for:

Vital stats data, like premature death rates, can help us get a snapshot of the general health of New Yorkers. When we analyze these data alongside social determinants of health, it can help us understand the burden of factors like neighborhood poverty on health outcomes. In one analysis, we found a higher minimum wage could save thousands of lives. We use cause of death records in our annual heat mortality report, to calculate how many deaths can be attributed to heat-related causes, and understand how race, income, and AC access shape vulnerability to heat-related illness and mortality.

+What we use it for:

Vital stats data, like premature death rates, can help us get a snapshot of the general health of New Yorkers. When we analyze these data alongside social determinants of health, it can help us understand the burden of factors like neighborhood poverty on health outcomes. In one analysis, we found a higher minimum wage could save thousands of lives. We use cause of death records in our annual heat mortality report, to calculate how many deaths can be attributed to heat-related causes, and understand how race, income, and AC access shape vulnerability to heat-related illness and mortality.

@@ -201,7 +201,7 @@ Collecting this type of data is mandated by the local, state, or federal governm

A selection of survey respondents answer questions online, via phone, or e-mail. Surveys like the Community Health Survey, the Housing and Vacancy Survey, and the American Community Survey are conducted regularly at different intervals. While most surveys are voluntary, some, like ACS, are compulsory. -Sometimes, a survey conducted every year drops a question, and we have to decide how to continue to track that dataset. In 2015, CHS dropped a question about recent cycling, so we looked for other indicators in both the CHS and ACS, with the goal of finding something with many years of data so we could see the change over time. We found monthly bicycle use, another survey question from CHS. The reliability of survey data depends on the willingness of respondents, as well as their honesty, the framing of the questions, and many other factors. +Sometimes, a survey conducted every year drops a question, and we have to decide how to continue to track that dataset. In 2015, CHS dropped a question about recent cycling, so we looked for other indicators in both the CHS and ACS, with the goal of finding something with many years of data so we could see the change over time. We found monthly bicycle use, another survey question from CHS. The reliability of survey data depends on the willingness of respondents, as well as their honesty, the framing of the questions, and many other factors.

@@ -238,7 +238,7 @@ Premature mortality from the NYC Bureau of Vital Statistics and the US Census; o

-Collected continuously and systematically, near real-time data can include environmental data, like real-time air quality (PM2.5) monitoring, which is updated hourly. It can also include near real-time health data, such as the total daily visits to the Emergency Department during the hot weather season. +Collected continuously and systematically, near real-time data can include environmental data, like real-time air quality (PM2.5) monitoring, which is updated hourly. It can also include near real-time health data, such as the total daily visits to the Emergency Department during the hot weather season.

@@ -271,16 +271,16 @@ NYC’s Local Law 11 requires city agencies to make data considered “public” We use datasets from many sources to quantify the state of various measures of health, and explanatory text to frame it, provide context, and add meaning. No single dataset can tell us everything, but together, they can paint a picture of how environments shape health in NYC across time. That said, there are tons of datasets out there – so how do we choose? We have frequent conversations with our data experts to determine what datasets would add the most value to the Portal. - + -But sometimes we see something on NYC Open Data, or another source, that provides interesting context to NYC’s environment and health, for example, our litter basket coverage data. These humble amenities may be overlooked, but have a strong connection to health: when there are more litter baskets, there is less litter, and fewer pests. Fewer pests are healthier for a neighborhood and cleaner streets have a positive impact on mental health and feelings of safety and positivity. Public bathrooms also make it easier for people to partake in public life. +But sometimes we see something on NYC Open Data, or another source, that provides interesting context to NYC’s environment and health, for example, our litter basket coverage data. These humble amenities may be overlooked, but have a strong connection to health: when there are more litter baskets, there is less litter, and fewer pests. Fewer pests are healthier for a neighborhood and cleaner streets have a positive impact on mental health and feelings of safety and positivity. Public bathrooms also make it easier for people to partake in public life. -Transit datasets like accessible subway stations and bus stops with audio announcements are also from NYC Open Data, and illustrate how accessible transit (and thus, all of New York City) is to New Yorkers with disabilities, caregivers, older adults, and everyone! +Transit datasets like accessible subway stations and bus stops with audio announcements are also from NYC Open Data, and illustrate how accessible transit (and thus, all of New York City) is to New Yorkers with disabilities, caregivers, older adults, and everyone! ## Why aren’t some of your data more recent? -These data (ranging from neighborhood poverty and cold-stress hospitalizations, to Citi bike station density and cockroach sightings) aren’t all measured, collected, recorded, organized, and reported in the same way, or within the same time period. Sometimes data are also aggregated into multi-year batches to protect privacy while being stable enough to show impacts at the neighborhood level. +These data (ranging from neighborhood poverty and cold-stress hospitalizations, to Citi bike station density and cockroach sightings) aren’t all measured, collected, recorded, organized, and reported in the same way, or within the same time period. Sometimes data are also aggregated into multi-year batches to protect privacy while being stable enough to show impacts at the neighborhood level. -As a result, some types of data aren't updated as frequently as others. But that doesn’t mean that older datasets don’t tell us valuable information. Significant trends in health can take a long time to show up. When it comes to Fall-related hospitalizations (age 65+), for instance, our most recent dataset is from 2023. However, the chart tells us that borough-level trends have been relatively stable since 2018. 
Any programs and outreach we are developing to address these issues will still be relevant year after year, even as we await a batch of newer data. +As a result, some types of data aren't updated as frequently as others. But that doesn’t mean that older datasets don’t tell us valuable information. Significant trends in health can take a long time to show up. When it comes to Fall-related hospitalizations (age 65+), for instance, our most recent dataset is from 2023. However, the chart tells us that borough-level trends have been relatively stable since 2018. Any programs and outreach we are developing to address these issues will still be relevant year after year, even as we await a batch of newer data. Combining data from different sources and looking at them in the context of one another is part of what makes the Portal such a powerful tool, and improves our understanding across data types and time frames. Together, each of the Portal’s data sources captures diverse, valuable information that together show how the environment – built, social, economic – shapes population health in NYC. diff --git a/content/about/redesign/index.md b/content/about/redesign/index.md index 1f1955d91c3..5c637476a89 100644 --- a/content/about/redesign/index.md +++ b/content/about/redesign/index.md @@ -30,7 +30,7 @@ The navigation bar at the top always tells you where you are and provides quick Our site has always aimed to show how environments affect health.  Our updates focus not only on improving access to data, but also making it easier to explore connections between different datasets, topics and other site content:  -* [Key Topics]({{< baseurl >}}key-topics/) bring together in one display related datasets, custom data interactives, data stories, and neighborhood reports for special areas of environmental health. You can explore resources across the site for : [Air Quality]({{< baseurl >}}key-topics/airquality/); [Climate]({{< baseurl >}}key-topics/climatehealth/); [Housing]({{< baseurl >}}key-topics/housing/); [Inequality and Health Inequities]({{< baseurl >}}key-topics/social/); [Active Design, Public Space, and Transportation]({{< baseurl >}}key-topics/transportation/); [Environmental Health Outcomes]({{< baseurl >}}key-topics/healthoutcomes/); [Child Health]({{< baseurl >}}key-topics/childhealth/); [Pests and Pesticides]({{< baseurl >}}key-topics/pests/); and [Food and Drink]({{< baseurl >}}key-topics/foodanddrink/).  +* [Key Topics]({{< relURL >}}key-topics/) bring together in one display related datasets, custom data interactives, data stories, and neighborhood reports for special areas of environmental health. You can explore resources across the site for : [Air Quality]({{< relURL >}}key-topics/airquality/); [Climate]({{< relURL >}}key-topics/climatehealth/); [Housing]({{< relURL >}}key-topics/housing/); [Inequality and Health Inequities]({{< relURL >}}key-topics/social/); [Active Design, Public Space, and Transportation]({{< relURL >}}key-topics/transportation/); [Environmental Health Outcomes]({{< relURL >}}key-topics/healthoutcomes/); [Child Health]({{< relURL >}}key-topics/childhealth/); [Pests and Pesticides]({{< relURL >}}key-topics/pests/); and [Food and Drink]({{< relURL >}}key-topics/foodanddrink/).  * Keywords link you to other pages on the similar topics.  @@ -48,7 +48,7 @@ Our site has always aimed to show how environments affect health.  Our upda Our goal is for you to be able to put our data and information to work improving health throughout our city. 
And when we do user research, one of the most common things we hear is, "It has to be easy to use." So, we've built the site with this as our mantra.  -For example, take a look at our re-vamped [Data Explorer]({{< baseurl >}}data-explorer/).   +For example, take a look at our re-vamped [Data Explorer]({{< relURL >}}data-explorer/).   ![](data-explorer-screenshot.png) diff --git a/content/data-explorer/waterways.md b/content/data-explorer/waterways.md index 593747d3c7c..09a9191020d 100644 --- a/content/data-explorer/waterways.md +++ b/content/data-explorer/waterways.md @@ -48,4 +48,4 @@ _Dissolved oxygen levels show that water exceeds minimum standards_ NYC DEP reports the open water summer average levels of dissolved oxygen harbor-wide, from surface and bottom water, every year. Levels above the state minimum standard (5.0 mg/L) indicate a harbor suitable to most aquatic life forms. Since the 1990s, the annual summer average for surface and bottom water dissolved oxygen levels of open water sites have been above New York State standards. -For more information on harbor quality, [please visit the most recent report from the NYC DEP Harbor Survey Report](https://www.nyc.gov/site/dep/water/harbor-water-quality.page). For information on current beach water quality, [please visit DOHMH's beach water quality map](https://a816-dohbesp.nyc.gov/IndicatorPublic/Beaches/). +For more information on harbor quality, [please visit the most recent report from the NYC DEP Harbor Survey Report](https://www.nyc.gov/site/dep/water/harbor-water-quality.page). For information on current beach water quality, [please visit DOHMH's beach water quality map]({{< relURL >}}beaches/). diff --git a/content/data-features/beaches/index.md b/content/data-features/beaches/index.md index 79cf150814d..f84580bae26 100644 --- a/content/data-features/beaches/index.md +++ b/content/data-features/beaches/index.md @@ -23,7 +23,7 @@ keywords: ] layout: resourceportal image: beachportal-screenshot.png -destination: "https://a816-dohbesp.nyc.gov/IndicatorPublic/beaches/" +destination: "beaches/" externalPortal: true blurb: Data about current water quality conditions and sources of pollution that affect water quality. --- diff --git a/content/data-features/find-your-uhf/index.md b/content/data-features/find-your-uhf/index.md index 1c8da621e26..2bf4581fa48 100644 --- a/content/data-features/find-your-uhf/index.md +++ b/content/data-features/find-your-uhf/index.md @@ -18,6 +18,6 @@ related: url: "data-stories/geographies/" --- -Environmental health data can come at several different geographies, or neighborhood boundary schemes. Often, health data are available at a neighborhood boundary scheme called UHF42, which breaks up NYC into 42 neighborhoods ([read more about UHF42 geographies](../data-stories/geographies/)). +Environmental health data can come at several different geographies, or neighborhood boundary schemes. Often, health data are available at a neighborhood boundary scheme called UHF42, which breaks up NYC into 42 neighborhoods ([read more about UHF42 geographies]({{< relURL >}}data-stories/geographies/)). Sometimes, you may be looking for data for a Community District or a City Council District, but the data are only available by UHF42 neighborhood. Click on the Community District or a City Council District below to identify the overlapping UHFs, and get links to our Neighborhood Reports with data by UHF42 neighborhood. 
\ No newline at end of file diff --git a/content/data-features/heat-report-archive/2021.md b/content/data-features/heat-report-archive/2021.md index acee42e210a..bcfdfd4280e 100644 --- a/content/data-features/heat-report-archive/2021.md +++ b/content/data-features/heat-report-archive/2021.md @@ -56,7 +56,7 @@ The Health Department examined heat stress deaths occurring during the months of Among NYC residents, there were 102 heat stress deaths during the warm season from 2010 to 2019. There were an average of 10 deaths per year, with a minimum of 0 deaths in 2014, the coolest year, and a maximum of 33 in 2011, which had one of the hottest heat waves during the time period examined. Most deaths occurred in July (73%) and August (15%) as shown in Figure 1. In 2020, there were 4 heat stress deaths, though that number is provisional and subject to change because mortality records are not finalized. -Figure 1: percent of heat stress deaths by month among NYC residents, 2010 to 2019. Heat stress deaths are highest in July. +Figure 1: percent of heat stress deaths by month among NYC residents, 2010 to 2019. Heat stress deaths are highest in July. | Year | n (deaths) | % | n, extreme heat days | Maximum heat index reached, degrees F | Length of longest extreme heat event, days  | | ------ | :--------: | :-: | :------------------: | :-----------------------------------: | :-----------------------------------------: | @@ -126,7 +126,7 @@ Demographics ### Age of heat stress decedents, 2010-2019 -Age of heat stress decedents. Older people die at higher rates. +Age of heat stress decedents. Older people die at higher rates. ### Health and other risk factors @@ -159,7 +159,7 @@ More than a quarter (n=16, 26%) of decedents had an electric fan present and on, Previous Health Department studies have found that air conditioning access differs across race and class. New Yorkers who are Black and low-income New Yorkers are less likely to own or use an AC during hot weather, and cost is the main reason why.2 While more than 90% of NYC households have air conditioning, access is also lower in neighborhoods where more people are living with limited financial resources. -Air conditioning presence among people exposed to heat at home. +Air conditioning presence among people exposed to heat at home. ### Heat-exacerbated deaths @@ -170,7 +170,7 @@ We estimated heat-exacerbated mortality risk and number of deaths for 2010 throu - an indicator for extreme heat event days defined by the National Weather Service’s heat advisory threshold for NYC. Based on the Health Department’s previous analysis of heat-exacerbated mortality, heat advisories are for at least 2 consecutive days with 95°F or higher daily maximum heat index (HI) or any day with a maximum HI of 100°F or higher. - the range of hot daily maximum temperatures that includes both extreme heat event days and other hot days. We assessed risk for days ranging from the median maximum daily temperature of 82°F through the highest temperature during the period. -We included deaths occurring on the date of exposure to hot weather and over the following 3 days, because previous Health Department studies have shown that heat-related deaths can occur up to 3 days after the initial hot weather. Detailed methods used to estimate risks and attributable deaths can be found [here](../2021/Heat_Mortality_Methods_2021.pdf). 
+We included deaths occurring on the date of exposure to hot weather and over the following 3 days, because previous Health Department studies have shown that heat-related deaths can occur up to 3 days after the initial hot weather. Detailed methods used to estimate risks and attributable deaths can be found [here]({{< relURL >}}data-features/heat-report-archive/2021/Heat_Mortality_Methods_2021.pdf). From 2010 to 2018, the estimated number of heat-exacerbated deaths associated with extreme heat events was 98 (95% Confidence Interval [95CI]: 59, 133) on average each year. @@ -180,13 +180,13 @@ The estimated number of heat-exacerbated deaths from May-September for all hot d The number of extreme heat event days each year has stayed about the same (approximately 10 days/year) in the past decade. However, the number of non-extreme hot days is increasing (e.g., 45 days with daily maximum temperature of 83-94°F for 1971-1980 vs. 63 days for 2011-2020). The daily low temperature (usually nighttime temperature) is a good measure of (UHI) the urban heat island  effect, because it measures the heat retained after the sun has set. The daily low temperature is also increasing (Figure 4), highlighting the importance of in-home cooling. Cities like NYC experience a UHI effect, with hotter temperatures than surrounding suburbs and more rural areas, because they have large amounts of concrete and other building materials that trap heat. - + - + ### Community-level impacts -[The HVI shows differences in community-level heat impacts during and shortly after extreme heat events](../hvi). Unlike many social vulnerability indices, the HVI is validated against NYC mortality data – meaning that neighborhoods with elevated risk identified by the index are those areas with elevated heat-exacerbated deaths during extreme heat events. +[The HVI shows differences in community-level heat impacts during and shortly after extreme heat events]({{< relURL >}}data-features/hvi/). Unlike many social vulnerability indices, the HVI is validated against NYC mortality data – meaning that neighborhoods with elevated risk identified by the index are those areas with elevated heat-exacerbated deaths during extreme heat events. While there are high risk Neighborhood Tabulation Areas (NTA; defined as an HVI score of 4 or 5) in every borough of the city, the thread connecting them all is that they have more residents who are Black or low-income. Risk factors for heat tend to overlap in these neighborhoods due to persistent structural racism, which have positioned economic, educational, healthcare, housing, and other systems to benefit white people and put at a disadvantage Black, Indigenous and other people of color. The relative heat mortality risk of each NTA can be explored here. Read more about how structural racism affects housing and public health and the history of redlining and how it impacts public health in NYC. @@ -210,7 +210,7 @@ Continue to strengthen emergency response measures during periods of extreme hea Extend equitable access to air conditioning to people most impacted by heat, including assistance with ongoing utility costs, which is critical to realizing the benefits of equitable cooling at home both daytime and nighttime. -Learn more about what the City is doing to mitigate the effects of heat, and how the HVI guides that work, at [Cool Neighborhoods NYC](https://www.nyc.gov/assets/orr/pdf/Cool_Neighborhoods_NYC_Report.pdf). 
More data and information about heat, climate, and health is also available at [Climate and Health](../../key-topics/climatehealth/). +Learn more about what the City is doing to mitigate the effects of heat, and how the HVI guides that work, at [Cool Neighborhoods NYC](https://www.nyc.gov/assets/orr/pdf/Cool_Neighborhoods_NYC_Report.pdf). More data and information about heat, climate, and health is also available at [Climate and Health]({{< relURL >}}key-topics/climatehealth/). + + + + + + + + + + + + + + +
+ + + + diff --git a/static/data-stories/air-quality-and-covid-part-2/pm25_differences_leaflet.html b/static/data-stories/air-quality-and-covid-part-2/pm25_differences_leaflet.html new file mode 100644 index 00000000000..e898dbdb925 --- /dev/null +++ b/static/data-stories/air-quality-and-covid-part-2/pm25_differences_leaflet.html @@ -0,0 +1,30 @@ + + + + +Changes in PM2.5 from Spring 2019 to Spring 2020 +

Changes in PM2.5 from Spring 2019 to Spring 2020
+ + + + diff --git a/themes/dohmh/layouts/_markup/render-link.html b/themes/dohmh/layouts/_markup/render-link.html new file mode 100644 index 00000000000..af3d44e8e0d --- /dev/null +++ b/themes/dohmh/layouts/_markup/render-link.html @@ -0,0 +1,70 @@ +{{- /* Parse once so each rewrite branch can preserve the original query string and fragment. */ -}} + +{{- $dest := .Destination -}} +{{- $href := $dest -}} +{{- $parsed := urls.Parse $dest -}} +{{- $rawPath := $parsed.Path -}} +{{- $baseName := path.Base $rawPath -}} +{{- $isIndexPage := eq $baseName "index.html" -}} +{{- $isPageLike := or $isIndexPage (and (ne $rawPath "") (not (strings.Contains $baseName "."))) -}} + +{{- /* Rewrite legacy production URLs so content does not depend on a hardcoded host or /IndicatorPublic prefix. */ -}} + +{{- if and $parsed.IsAbs (eq $parsed.Host "a816-dohbesp.nyc.gov") (or (strings.HasPrefix $rawPath "/IndicatorPublic/") (strings.HasPrefix $rawPath "/IndicatorPublic/beta/")) -}} + {{- $normalizedPath := strings.TrimPrefix "/IndicatorPublic/" $rawPath -}} + {{- $normalizedPath = strings.TrimPrefix "beta/" $normalizedPath -}} + {{- if $isIndexPage -}} + {{- $normalizedPath = path.Dir $normalizedPath -}} + {{- end -}} + {{- if and (or (strings.HasSuffix $rawPath "/") $isIndexPage) (not (strings.HasSuffix $normalizedPath "/")) -}} + {{- $normalizedPath = printf "%s/" $normalizedPath -}} + {{- end -}} + {{- $href = relURL $normalizedPath -}} + {{- with $parsed.RawQuery -}} + {{- $href = printf "%s?%s" $href . -}} + {{- end -}} + {{- with $parsed.Fragment -}} + {{- $href = printf "%s#%s" $href . -}} + {{- end -}} + +{{- /* Rewrite root-relative legacy paths the same way as fully qualified production URLs. */ -}} + +{{- else if and (or (strings.HasPrefix $rawPath "/IndicatorPublic/") (strings.HasPrefix $rawPath "/IndicatorPublic/beta/")) -}} + {{- $normalizedPath := strings.TrimPrefix "/IndicatorPublic/" $rawPath -}} + {{- $normalizedPath = strings.TrimPrefix "beta/" $normalizedPath -}} + {{- if $isIndexPage -}} + {{- $normalizedPath = path.Dir $normalizedPath -}} + {{- end -}} + {{- if and (or (strings.HasSuffix $rawPath "/") $isIndexPage) (not (strings.HasSuffix $normalizedPath "/")) -}} + {{- $normalizedPath = printf "%s/" $normalizedPath -}} + {{- end -}} + {{- $href = relURL $normalizedPath -}} + {{- with $parsed.RawQuery -}} + {{- $href = printf "%s?%s" $href . -}} + {{- end -}} + {{- with $parsed.Fragment -}} + {{- $href = printf "%s#%s" $href . -}} + {{- end -}} + +{{- /* Resolve page-like ./ and ../ links against PageInner so included content keeps its own relative base. */ -}} + +{{- else if and (not $parsed.IsAbs) (or (strings.HasPrefix $dest "../") (strings.HasPrefix $dest "./")) $isPageLike -}} + {{- $pagePath := strings.TrimPrefix (relURL "") .PageInner.RelPermalink -}} + {{- $normalizedPath := path.Join $pagePath $rawPath -}} + {{- if $isIndexPage -}} + {{- $normalizedPath = path.Dir $normalizedPath -}} + {{- end -}} + {{- if and (or (strings.HasSuffix $rawPath "/") $isIndexPage) (not (strings.HasSuffix $normalizedPath "/")) -}} + {{- $normalizedPath = printf "%s/" $normalizedPath -}} + {{- end -}} + {{- $href = relURL $normalizedPath -}} + {{- with $parsed.RawQuery -}} + {{- $href = printf "%s?%s" $href . -}} + {{- end -}} + {{- with $parsed.Fragment -}} + {{- $href = printf "%s#%s" $href . -}} + {{- end -}} +{{- end -}} + +{{ with .Text }}{{ . 
}}{{ end }} +{{- /* chomp trailing newline */ -}} \ No newline at end of file diff --git a/themes/dohmh/layouts/components.html b/themes/dohmh/layouts/components.html index b2d3c207c55..7b74b699b91 100644 --- a/themes/dohmh/layouts/components.html +++ b/themes/dohmh/layouts/components.html @@ -145,10 +145,10 @@
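The render hook above handles the same normalization for links that are still authored as full legacy URLs, so older content does not have to be edited by hand. A sketch of the rewrite it performs, with an illustrative indicator path and again assuming the site is served from the domain root:

```markdown
<!-- Legacy absolute link as it might appear in old content (path and id are illustrative) -->
[beach water quality](https://a816-dohbesp.nyc.gov/IndicatorPublic/beta/data-explorer/beaches/?id=2184#display=summary)

<!-- Roughly the anchor the hook emits: host and /IndicatorPublic/beta/ prefix dropped, query and fragment preserved -->
<a href="/data-explorer/beaches/?id=2184#display=summary">beach water quality</a>
```

The same branch logic also covers root-relative `/IndicatorPublic/...` paths and resolves `./` and `../` page links against `.PageInner`, so links inside included content keep their own relative base.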

-

We're Hiring!

+

We're Hiring!

Join our team! See below for open positions in our bureau - or, search cityjobs.nyc.gov for …

-

Read more Read more

diff --git a/themes/dohmh/layouts/data-features/nyccas-report.html b/themes/dohmh/layouts/data-features/nyccas-report.html index 4c1211a7149..f13e77eb216 100644 --- a/themes/dohmh/layouts/data-features/nyccas-report.html +++ b/themes/dohmh/layouts/data-features/nyccas-report.html @@ -64,7 +64,7 @@

Pollutants Measured

- {{ .copy | markdownify }} + {{ $.RenderString .copy }}
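The repeated swap of `markdownify` for `.RenderString` in these layouts is what lets the new render hook reach front-matter copy as well: `markdownify` renders markdown without page context, so `_markup/render-link.html` never runs on its output, while `.RenderString` renders in the context of the current page and therefore picks up the link rewrite. A minimal sketch of the pattern (field names taken from the templates in this diff):

```go-html-template
{{/* Before: markdown is rendered, but render hooks - and the legacy-URL rewrite - are skipped */}}
{{ .copy | markdownify }}

{{/* After: rendered with page context, so _markup/render-link.html applies to any links in the copy */}}
{{ $.RenderString .copy }}

{{/* For longer front-matter fields, "display" "block" keeps paragraphs instead of the default inline output */}}
{{ $opts := dict "display" "block" }}
{{ .RenderString $opts .Params.aboutTheData }}
```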
diff --git a/themes/dohmh/layouts/data-features/rats-in-your-neighborhood.html b/themes/dohmh/layouts/data-features/rats-in-your-neighborhood.html index a2772f241e5..40e86aff085 100644 --- a/themes/dohmh/layouts/data-features/rats-in-your-neighborhood.html +++ b/themes/dohmh/layouts/data-features/rats-in-your-neighborhood.html @@ -89,10 +89,10 @@

Enter your address

-

This address is in a Rat Mitigation Zone. Properties in these areas are inspected more regularly, to help fight rats on a neighborhood-wide scale. Explore Rat Mitigation Zones.

+

This address is in a Rat Mitigation Zone. Properties in these areas are inspected more regularly, to help fight rats on a neighborhood-wide scale. Explore Rat Mitigation Zones.

diff --git a/themes/dohmh/layouts/data-stories/section.html b/themes/dohmh/layouts/data-stories/section.html index 5104e62df7d..ea38c7f63f9 100644 --- a/themes/dohmh/layouts/data-stories/section.html +++ b/themes/dohmh/layouts/data-stories/section.html @@ -294,7 +294,7 @@

@@ -305,7 +305,7 @@

Cold Weather Safety

Extreme Heat Safety Quiz

-

As our climate changes, we can expect hotter, longer and more frequent heat waves. Test your knowledge here. +

As our climate changes, we can expect hotter, longer and more frequent heat waves. Test your knowledge here.

@@ -317,7 +317,7 @@

Extreme Heat Safety Quiz

Asthma and Housing

-

Housing conditions are a common trigger for asthma. Learn more with the Asthma and Housing infographic.

+

Housing conditions are a common trigger for asthma. Learn more with the Asthma and Housing infographic.

@@ -331,7 +331,7 @@

Asthma and Housing

Health Impacts of Traffic Air Pollution

-

PM2.5 is one of the most harmful air pollutants. The health impacts of traffic +

PM2.5 is one of the most harmful air pollutants. The health impacts of traffic air pollution are highest in poorer neighborhoods.

diff --git a/themes/dohmh/layouts/partials/conditional-modal.html b/themes/dohmh/layouts/partials/conditional-modal.html index 3a0b20f5214..198b5d6c813 100644 --- a/themes/dohmh/layouts/partials/conditional-modal.html +++ b/themes/dohmh/layouts/partials/conditional-modal.html @@ -34,7 +34,7 @@ .find((row) => row.startsWith('original_url=')) ?.replace("original_url=", ""); - original_url = original_url ? original_url : "/IndicatorPublic/"; + original_url = original_url ? original_url : "{{ relURL "" }}"; @@ -104,7 +104,7 @@

{{ .Title }}

About the data

-

- {{ .Params.aboutTheData | safeHTML }} -

+ {{- $opts := dict "display" "block" -}} +
+ {{ .RenderString $opts .Params.aboutTheData }} +
{{ end }} diff --git a/themes/dohmh/layouts/partials/nyccas_pollutant_maps.html b/themes/dohmh/layouts/partials/nyccas_pollutant_maps.html index fdcc6110060..7063bc33bb3 100644 --- a/themes/dohmh/layouts/partials/nyccas_pollutant_maps.html +++ b/themes/dohmh/layouts/partials/nyccas_pollutant_maps.html @@ -25,26 +25,26 @@
{{- with .Resources.GetMatch "map-text.md" -}} - {{ .Params.BC | markdownify }} + {{ $.RenderString .Params.BC }}
- {{ .Params.NO | markdownify }} + {{ $.RenderString .Params.NO }}
- {{ .Params.NO2 | markdownify }} + {{ $.RenderString .Params.NO2 }}
- {{ .Params.O3 | markdownify }} + {{ $.RenderString .Params.O3 }}
- {{ .Params.PM | markdownify }} + {{ $.RenderString .Params.PM }}
{{- end -}} diff --git a/themes/dohmh/layouts/shortcodes/relURL.html b/themes/dohmh/layouts/shortcodes/relURL.html new file mode 100644 index 00000000000..e5ce4ecadba --- /dev/null +++ b/themes/dohmh/layouts/shortcodes/relURL.html @@ -0,0 +1 @@ +{{ relURL "" }} \ No newline at end of file diff --git a/themes/dohmh/layouts/shortcodes/updateflag.html b/themes/dohmh/layouts/shortcodes/updateflag.html index f47780cd6af..c07d5992434 100644 --- a/themes/dohmh/layouts/shortcodes/updateflag.html +++ b/themes/dohmh/layouts/shortcodes/updateflag.html @@ -1,9 +1,12 @@ +{{/* +now uses relURL for internal links, so shortcode src param should start with section, e.g. {"src": "data-explorer/climate/?id=2143", [...] } +*/}}
Since we published this data story, some data in it have been updated. Get updated data on: {{- $data := .Get "data" | transform.Unmarshal -}}
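Per the comment added above, data stories that call this shortcode now pass `src` values relative to the site root: section first, no leading slash and no `/IndicatorPublic/` prefix. An illustrative call, assuming a Hugo version that accepts raw-string (backtick) shortcode arguments; only the `src` key is confirmed by the comment, and the label key used here is a placeholder:

```markdown
<!-- "name" is a hypothetical label key; only "src" appears in the comment above -->
{{< updateflag data=`[{"src": "data-explorer/climate/?id=2143", "name": "extreme heat data"}]` >}}
```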
\ No newline at end of file diff --git a/themes/dohmh/layouts/take-action/email-electeds.html b/themes/dohmh/layouts/take-action/email-electeds.html index 9b50504ade1..fab2ef05f6b 100644 --- a/themes/dohmh/layouts/take-action/email-electeds.html +++ b/themes/dohmh/layouts/take-action/email-electeds.html @@ -77,7 +77,7 @@

Email tip

Tell your story: Offer some brief detail on how you see this environmental health issue affecting you and the people you know. For example: "Last year, my daughter was diagnosed with asthma, and this issue is a challenge for our whole family."

-

Cite data: For example: "According to data on housing conditions, 75% of renters' households in our neighborhood have maintenance problems that threaten health. (NYC Environment and Health Data Portal: https://a816-dohbesp.nyc.gov/IndicatorPublic/data-explorer/housing-maintenance/?id=2399#display=map)"

+

Cite data: For example: "According to data on housing conditions, 75% of renters' households in our neighborhood have maintenance problems that threaten health. (NYC Environment and Health Data Portal: {{ relURL "data-explorer/housing-maintenance/?id=2399#display=map" }})"

Make a request: You can ask them where they stand on the issue, what they're doing to improve it, or what services are available to help their constituents.
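In layouts like this one, `relURL` is the plain template function rather than the content shortcode, so it can be called directly on a section-relative path and the query string and fragment ride along. Roughly what the citation example renders to, again assuming a domain-root deployment:

```go-html-template
{{ relURL "data-explorer/housing-maintenance/?id=2399#display=map" }}
{{/* → /data-explorer/housing-maintenance/?id=2399#display=map */}}
```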