diff --git a/.gitignore b/.gitignore index 6f6d1db5..bab5d079 100644 --- a/.gitignore +++ b/.gitignore @@ -7,3 +7,4 @@ dist/ coverage/ .chub/ *.tgz +runbook-template_v_3_template.md diff --git a/content/nist/docs/nvd-cpe-api/DOC.md b/content/nist/docs/nvd-cpe-api/DOC.md new file mode 100644 index 00000000..2d50a670 --- /dev/null +++ b/content/nist/docs/nvd-cpe-api/DOC.md @@ -0,0 +1,215 @@ +--- +name: nvd-cpe-api +description: "NVD CPE and CPE Match APIs v2.0 — query the Official CPE Dictionary and match criteria for product identification and vulnerability correlation." +metadata: + languages: "http" + versions: "2.0" + revision: 1 + updated-on: "2026-03-22" + source: community + tags: "nist,nvd,cpe,product,vulnerability,security,api" +--- + +# NVD CPE APIs v2.0 + +The NVD Product APIs let you query the Official CPE Dictionary (1.6M+ CPE Names, 420K+ match strings). There are two distinct APIs: the **CPE API** for looking up product entries and the **Match Criteria API** for resolving CPE match strings used in CVE configurations. + +> "This product uses data from the NVD API but is not endorsed or certified by the NVD." + +## CPE names vs CPE match strings + +### CPE Name +A CPE (Common Platform Enumeration) Name uniquely identifies a specific product version. It uses the CPE 2.3 formatted string with 13 colon-separated components: + +``` +cpe:2.3:<part>:<vendor>:<product>:<version>:<update>:<edition>:<language>:<sw_edition>:<target_sw>:<target_hw>:<other> +``` + +Example: +``` +cpe:2.3:a:apache:http_server:2.4.49:*:*:*:*:*:*:* +``` + +| Component | Values | Meaning | +|-----------|--------|---------| +| `part` | `a` = application, `o` = OS, `h` = hardware | Product type | +| `vendor` | e.g. `apache`, `microsoft` | Vendor name | +| `product` | e.g. `http_server`, `windows_10` | Product name | +| `version` | e.g. `2.4.49`, `*` | Specific version or wildcard | +| Others | `*` = any, `-` = not applicable | Remaining attributes | + +### CPE Match String +A match string is a CPE pattern with optional version ranges.
It is used in CVE configurations to define which products are affected. Unlike a CPE Name, a match string can use wildcards in the `part`, `vendor`, `product`, or `version` fields and can include `versionStartIncluding`, `versionEndExcluding`, etc. + +## CPE API + +Look up CPE Names in the Official CPE Dictionary. + +### Base URL + +``` +https://services.nvd.nist.gov/rest/json/cpes/2.0 +``` + +### Parameters + +| Parameter | Description | Example | +|-----------|-------------|---------| +| `cpeNameId` | UUID of a specific CPE record | `?cpeNameId=87316812-...` | +| `cpeMatchString` | CPE 2.3 pattern to match against | `?cpeMatchString=cpe:2.3:a:apache:*:*:*:*:*:*:*:*:*` | +| `keywordSearch` | Text search in CPE titles | `?keywordSearch=apache http` | +| `keywordExactMatch` | Require exact phrase | `?keywordExactMatch&keywordSearch=Apache HTTP Server` | +| `matchCriteriaId` | UUID of a match criteria | `?matchCriteriaId=...` | +| `lastModStartDate` / `lastModEndDate` | Modified date range (ISO-8601) | `?lastModStartDate=2024-01-01T00:00:00.000` | +| `startIndex` | Pagination offset (0-based) | `?startIndex=0` | +| `resultsPerPage` | Page size | `?resultsPerPage=100` | + +### Example: Find CPEs for Apache HTTP Server +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cpes/2.0?\ +keywordSearch=apache+http+server&\ +resultsPerPage=5" \ + -H "apiKey:YOUR_KEY" | jq '.products[].cpe.cpeName' +``` + +### Example: Look up by CPE match string +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cpes/2.0?\ +cpeMatchString=cpe:2.3:a:apache:http_server:*:*:*:*:*:*:*:*&\ +resultsPerPage=5" \ + -H "apiKey:YOUR_KEY" +``` + +### Response structure + +```json +{ + "resultsPerPage": 1, + "startIndex": 0, + "totalResults": 1, + "products": [ + { + "cpe": { + "cpeName": "cpe:2.3:a:apache:http_server:2.4.49:*:*:*:*:*:*:*", + "cpeNameId": "87316812-...", + "deprecated": false, + "lastModified": "2023-...", + "created": "2021-...", + "titles": [ + { "title": "Apache HTTP 
Server 2.4.49", "lang": "en" } + ], + "refs": [ + { "ref": "https://httpd.apache.org/", "type": "Vendor" } + ] + } + } + ] +} +``` + +## CPE Match API + +Look up match criteria — the CPE patterns with version ranges used in CVE configurations. + +### Base URL + +``` +https://services.nvd.nist.gov/rest/json/cpematch/2.0 +``` + +### Parameters + +| Parameter | Description | Example | +|-----------|-------------|---------| +| `cveId` | Match criteria associated with a CVE | `?cveId=CVE-2023-44487` | +| `matchCriteriaId` | UUID of a specific match criteria | `?matchCriteriaId=...` | +| `lastModStartDate` / `lastModEndDate` | Modified date range | Same format as CVE API | +| `startIndex` | Pagination offset | `?startIndex=0` | +| `resultsPerPage` | Page size | `?resultsPerPage=100` | + +### Example: Match criteria for a CVE +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cpematch/2.0?\ +cveId=CVE-2023-44487&\ +resultsPerPage=10" \ + -H "apiKey:YOUR_KEY" | jq '.matchStrings[].matchString' +``` + +### Response structure + +```json +{ + "resultsPerPage": 1, + "startIndex": 0, + "totalResults": 1, + "matchStrings": [ + { + "matchString": { + "matchCriteriaId": "...", + "criteria": "cpe:2.3:a:apache:http_server:*:*:*:*:*:*:*:*", + "versionStartIncluding": "2.4.0", + "versionEndExcluding": "2.4.58", + "lastModified": "2024-...", + "cpeLastModified": "2024-...", + "created": "2023-...", + "status": "Active", + "matches": [ + { + "cpeName": "cpe:2.3:a:apache:http_server:2.4.49:*:*:*:*:*:*:*", + "cpeNameId": "87316812-..." + } + ] + } + } + ] +} +``` + +Key fields in match strings: +- `versionStartIncluding` / `versionStartExcluding` — lower bound of affected versions +- `versionEndIncluding` / `versionEndExcluding` — upper bound of affected versions +- `matches` — list of specific CPE Names that satisfy this match criteria + +## Common search patterns + +### "Is my product version affected by this CVE?" + +1. Get the CVE: `GET /cves/2.0?cveId=CVE-YYYY-NNNN` +2. 
Extract `configurations[].nodes[].cpeMatch[].criteria` and version ranges +3. Compare your product's CPE name against the match criteria + +### "What CPE Name represents my product?" + +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cpes/2.0?\ +keywordSearch=nginx&resultsPerPage=10" \ + -H "apiKey:YOUR_KEY" | jq '.products[].cpe | {cpeName, title: .titles[0].title}' +``` + +### "What products are affected by this match criteria?" + +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cpematch/2.0?\ +matchCriteriaId=UUID_HERE" \ + -H "apiKey:YOUR_KEY" | jq '.matchStrings[].matchString.matches[].cpeName' +``` + +## Common pitfalls + +1. **CPE Name ≠ match string.** A CPE Name identifies a specific product/version. A match string is a pattern with optional version ranges. Don't confuse the two APIs. +2. **Wildcard encoding.** The `*` in CPE strings must be URL-encoded as `%2A` in query parameters, or use the exact CPE string as-is if your HTTP client handles encoding. +3. **Version ranges are in match strings, not CPE Names.** To determine affected version ranges, use the Match Criteria API or parse CVE configurations directly. +4. **Same rate limits.** Both CPE APIs share the same rate limits as the CVE API (5 req/30s public, 50 req/30s keyed). +5. **Deprecated CPEs.** Some CPE Names are deprecated (replaced by newer names). Check the `deprecated` field and follow `deprecatedBy` references. 
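Pitfall 2 (wildcard encoding) can be handled once in a small helper. A hedged sketch, assuming Python's standard `urllib.parse` — the helper name is ours, not part of the API:

```python
from urllib.parse import quote

def cpe_query_url(match_string):
    """Build a CPE API query URL with '*' wildcards percent-encoded as %2A."""
    base = "https://services.nvd.nist.gov/rest/json/cpes/2.0"
    # quote() leaves letters, digits, "_.-~" and anything in safe= untouched,
    # so the CPE colons survive while every "*" becomes %2A.
    return base + "?cpeMatchString=" + quote(match_string, safe=":")

cpe_query_url("cpe:2.3:a:apache:http_server:*:*:*:*:*:*:*:*")
```

With a URL built this way, the exact CPE string can be kept readable in code while still satisfying the encoding requirement.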
+ +## Official sources + +- CPE and Match Criteria API docs: https://nvd.nist.gov/developers/products +- CPE specification: https://csrc.nist.gov/publications/detail/nistir/7695/final +- CPE dictionary statistics: https://nvd.nist.gov/products/cpe/statistics +- Getting started: https://nvd.nist.gov/developers/start-here + +## Reference files + +- `references/cpe-match-api.md` — Deep dive into match criteria, version range resolution, and CPE dictionary lookups +- `references/cpe-format-and-match-strings.md` — CPE 2.3 format breakdown, encoding rules, and common patterns diff --git a/content/nist/docs/nvd-cpe-api/references/cpe-format-and-match-strings.md b/content/nist/docs/nvd-cpe-api/references/cpe-format-and-match-strings.md new file mode 100644 index 00000000..49dc669e --- /dev/null +++ b/content/nist/docs/nvd-cpe-api/references/cpe-format-and-match-strings.md @@ -0,0 +1,158 @@ +# CPE 2.3 format and match strings + +Detailed reference for CPE naming conventions, encoding rules, and common patterns. + +## CPE 2.3 formatted string + +A CPE Name has exactly 13 colon-separated components: + +``` +cpe:2.3:<part>:<vendor>:<product>:<version>:<update>:<edition>:<language>:<sw_edition>:<target_sw>:<target_hw>:<other> +``` + +### Component reference + +| # | Component | Description | Common values | +|---|-----------|-------------|---------------| +| 1 | `cpe` | Fixed prefix | Always `cpe` | +| 2 | `2.3` | Format version | Always `2.3` | +| 3 | `part` | Product type | `a` = application, `o` = operating system, `h` = hardware | +| 4 | `vendor` | Vendor/publisher | e.g. `apache`, `microsoft`, `google` | +| 5 | `product` | Product name | e.g. `http_server`, `chrome`, `linux_kernel` | +| 6 | `version` | Version string | e.g. `2.4.49`, `10.0.19041` | +| 7 | `update` | Update/patch level | e.g. `sp1`, `*` | +| 8 | `edition` | Legacy edition | Usually `*` | +| 9 | `language` | Language tag | e.g. `en`, `*` | +| 10 | `sw_edition` | Software edition | e.g. `enterprise`, `*` | +| 11 | `target_sw` | Target software platform | e.g.
`linux`, `*` | +| 12 | `target_hw` | Target hardware platform | e.g. `x64`, `*` | +| 13 | `other` | Other attributes | Usually `*` | + +### Special values + +| Value | Meaning | +|-------|---------| +| `*` | ANY — matches any value | +| `-` | NA — not applicable | + +### Examples + +``` +# Apache HTTP Server 2.4.49 (application) +cpe:2.3:a:apache:http_server:2.4.49:*:*:*:*:*:*:* + +# Microsoft Windows 10 (operating system) +cpe:2.3:o:microsoft:windows_10:1903:*:*:*:*:*:*:* + +# Cisco Router model (hardware) +cpe:2.3:h:cisco:rv340:*:*:*:*:*:*:*:* + +# Python 3.9.7 running on Linux +cpe:2.3:a:python:python:3.9.7:*:*:*:*:*:*:* + +# Node.js 18.x +cpe:2.3:a:nodejs:node.js:18.0.0:*:*:*:*:*:*:* +``` + +## URL encoding + +When using CPE strings in API query parameters: + +| Character | Encoded | Notes | +|-----------|---------|-------| +| `*` | `%2A` | Wildcard — must encode in URLs | +| `:` | `%3A` | Colon — most HTTP clients handle automatically | +| Space | `%20` | Should not appear in CPE names | + +### curl example with encoded wildcards +```bash +# Search all Apache HTTP Server versions +curl -s "https://services.nvd.nist.gov/rest/json/cpes/2.0?\ +cpeMatchString=cpe:2.3:a:apache:http_server:%2A:%2A:%2A:%2A:%2A:%2A:%2A:%2A" \ + -H "apiKey:YOUR_KEY" +``` + +Most HTTP libraries handle encoding automatically: +```python +import requests +params = { + "cpeMatchString": "cpe:2.3:a:apache:http_server:*:*:*:*:*:*:*:*" +} +resp = requests.get("https://services.nvd.nist.gov/rest/json/cpes/2.0", params=params) +``` + +## Building CPE names + +### From package manager data + +| Source | How to build CPE | +|--------|-----------------| +| `npm` package | `cpe:2.3:a:<vendor>:<package>:<version>:*:*:*:*:node.js:*:*` | +| Python pip | `cpe:2.3:a:<vendor>:<package>:<version>:*:*:*:*:python:*:*` | +| OS packages | Match vendor to distro, product to package name | + +**Important:** NVD vendor/product names often don't match package manager names exactly. Always search the CPE dictionary first to find the correct vendor:product pair.
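Once the correct vendor:product pair is known, assembling the 13-component string is mechanical. A minimal sketch (the helper name is ours); every unspecified attribute defaults to the ANY value `*`:

```python
def make_cpe_name(part, vendor, product, version="*", **attrs):
    """Join the 13 CPE 2.3 components into a formatted string."""
    tail = [attrs.get(f, "*") for f in
            ("update", "edition", "language", "sw_edition",
             "target_sw", "target_hw", "other")]
    return ":".join(["cpe", "2.3", part, vendor, product, version, *tail])

make_cpe_name("a", "nodejs", "node.js", "18.0.0")
make_cpe_name("a", "python", "python", "3.9.7", target_sw="linux")
```

The vendor and product arguments should still come from a CPE dictionary search, not straight from the package manager name.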
+ +### Common vendor name mappings + +| Package manager name | NVD vendor | NVD product | +|---------------------|------------|-------------| +| `express` (npm) | `openjsf` | `express` | +| `django` (pip) | `djangoproject` | `django` | +| `flask` (pip) | `palletsprojects` | `flask` | +| `nginx` | `f5` | `nginx` | +| `openssl` | `openssl` | `openssl` | +| `linux` | `linux` | `linux_kernel` | +| `node` | `nodejs` | `node.js` | + +## Match string patterns + +### Exact version +``` +cpe:2.3:a:apache:http_server:2.4.49:*:*:*:*:*:*:* +``` +Matches only version 2.4.49. + +### All versions of a product +``` +cpe:2.3:a:apache:http_server:*:*:*:*:*:*:*:* +``` +Matches any version of Apache HTTP Server. + +### Version range (via Match Criteria API) +The CPE string uses `*` for version, and version bounds are in separate fields: +```json +{ + "criteria": "cpe:2.3:a:apache:http_server:*:*:*:*:*:*:*:*", + "versionStartIncluding": "2.4.0", + "versionEndExcluding": "2.4.58" +} +``` + +### All products from a vendor +``` +cpe:2.3:a:apache:*:*:*:*:*:*:*:*:* +``` +Matches any Apache application product. + +## Deprecated CPE names + +CPE Names can be deprecated when NVD standardizes naming. The API response includes: + +```json +{ + "cpeName": "cpe:2.3:a:old_vendor:old_name:...", + "deprecated": true, + "deprecatedBy": [ + { "cpeName": "cpe:2.3:a:new_vendor:new_name:...", "cpeNameId": "..." } + ] +} +``` + +Always check `deprecated` and follow `deprecatedBy` to the current name. 
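Given a parsed product entry shaped like the response above, the redirect can be followed mechanically. A sketch under that assumption (the function name is ours):

```python
def current_cpe_names(cpe_entry):
    """Resolve a possibly-deprecated CPE entry to its current name(s)."""
    if not cpe_entry.get("deprecated"):
        return [cpe_entry["cpeName"]]
    # A deprecated entry points at one or more replacement names.
    return [d["cpeName"] for d in cpe_entry.get("deprecatedBy", [])]

entry = {
    "cpeName": "cpe:2.3:a:old_vendor:old_name:1.0:*:*:*:*:*:*:*",
    "deprecated": True,
    "deprecatedBy": [
        {"cpeName": "cpe:2.3:a:new_vendor:new_name:1.0:*:*:*:*:*:*:*"}
    ],
}
current_cpe_names(entry)
```

A replacement name could itself be deprecated later, so a production version would re-fetch each result and loop until `deprecated` is false.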
+ +## Official sources + +- CPE specification (NISTIR 7695): https://csrc.nist.gov/publications/detail/nistir/7695/final +- CPE Dictionary: https://nvd.nist.gov/products/cpe +- NVD Product APIs: https://nvd.nist.gov/developers/products diff --git a/content/nist/docs/nvd-cpe-api/references/cpe-match-api.md b/content/nist/docs/nvd-cpe-api/references/cpe-match-api.md new file mode 100644 index 00000000..d34ecb1a --- /dev/null +++ b/content/nist/docs/nvd-cpe-api/references/cpe-match-api.md @@ -0,0 +1,176 @@ +# CPE Match API deep dive + +Detailed reference for CPE match criteria resolution, version range handling, and correlating products with vulnerabilities. + +## What is a match criteria? + +A match criteria is a CPE pattern stored in the NVD that maps to one or more specific CPE Names. CVE configurations reference match criteria (by UUID) to define which products/versions are affected. + +The flow: +1. A CVE's `configurations` field contains `cpeMatch` entries +2. Each `cpeMatch` has a `matchCriteriaId` (UUID) and a `criteria` (CPE pattern) +3. The Match Criteria API resolves that UUID to the full list of matching CPE Names + +## Version range fields + +Match strings can include version range bounds: + +| Field | Meaning | +|-------|---------| +| `versionStartIncluding` | Affected from this version (inclusive) | +| `versionStartExcluding` | Affected from this version (exclusive) | +| `versionEndIncluding` | Affected through this version (inclusive) | +| `versionEndExcluding` | Affected up to but not including this version | + +### Example: Apache HTTP Server 2.4.0 through 2.4.57 + +```json +{ + "criteria": "cpe:2.3:a:apache:http_server:*:*:*:*:*:*:*:*", + "versionStartIncluding": "2.4.0", + "versionEndExcluding": "2.4.58", + "matchCriteriaId": "..." +} +``` + +This means: all Apache HTTP Server versions >= 2.4.0 and < 2.4.58 are affected. 
+ +### Checking if a specific version is affected + +```python +from packaging.version import Version + +def is_affected(product_version, match): + v = Version(product_version) + + start_inc = match.get("versionStartIncluding") + start_exc = match.get("versionStartExcluding") + end_inc = match.get("versionEndIncluding") + end_exc = match.get("versionEndExcluding") + + if start_inc and v < Version(start_inc): + return False + if start_exc and v <= Version(start_exc): + return False + if end_inc and v > Version(end_inc): + return False + if end_exc and v >= Version(end_exc): + return False + return True +``` + +## CVE configuration structure + +CVE configurations use a node tree with AND/OR operators: + +```json +{ + "configurations": [ + { + "operator": "OR", + "negate": false, + "nodes": [ + { + "operator": "OR", + "negate": false, + "cpeMatch": [ + { + "vulnerable": true, + "criteria": "cpe:2.3:a:vendor:product:*:...", + "versionEndExcluding": "1.2.3", + "matchCriteriaId": "UUID" + } + ] + } + ] + } + ] +} +``` + +- **OR nodes**: Any matching CPE means affected +- **AND nodes**: All children must match (common for "running on" relationships) +- `vulnerable: true` = this CPE is the vulnerable product +- `vulnerable: false` = this CPE is a platform requirement (e.g., "running on Windows") + +### AND configuration example + +"Product X running on Windows" is expressed as: + +```json +{ + "operator": "AND", + "nodes": [ + { + "cpeMatch": [{ "vulnerable": true, "criteria": "cpe:2.3:a:vendor:product_x:..." }] + }, + { + "cpeMatch": [{ "vulnerable": false, "criteria": "cpe:2.3:o:microsoft:windows:..." 
}] + } + ] +} +``` + +## Workflow: Resolving affected products for a CVE + +```python +import requests + +HEADERS = {"apiKey": "YOUR_KEY"} + +def get_affected_products(cve_id): + """Get all affected CPE names for a CVE.""" + # Step 1: Get CVE configurations + resp = requests.get( + "https://services.nvd.nist.gov/rest/json/cves/2.0", + params={"cveId": cve_id}, + headers=HEADERS + ) + cve = resp.json()["vulnerabilities"][0]["cve"] + configs = cve.get("configurations", []) + + affected = [] + for config in configs: + for node in config.get("nodes", []): + for match in node.get("cpeMatch", []): + if match.get("vulnerable"): + affected.append({ + "criteria": match["criteria"], + "matchCriteriaId": match["matchCriteriaId"], + "versionStartIncluding": match.get("versionStartIncluding"), + "versionEndExcluding": match.get("versionEndExcluding"), + }) + return affected +``` + +## Workflow: Looking up a match criteria by UUID + +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cpematch/2.0?\ +matchCriteriaId=UUID_HERE" \ + -H "apiKey:YOUR_KEY" \ + | jq '.matchStrings[0].matchString | {criteria, versionStartIncluding, versionEndExcluding, matchCount: (.matches | length)}' +``` + +The response's `matches` array lists every specific CPE Name that satisfies the match criteria. + +## Common patterns + +### Build a local "product → CVEs" index + +1. Fetch all CVEs (paginated) +2. For each CVE, extract `cpeMatch` entries where `vulnerable=true` +3. Index by the CPE vendor:product pair +4. When querying "what CVEs affect product X version Y?", filter by CPE name and version range + +### Build a "CVE → affected products" report + +1. Fetch a CVE by ID +2. Extract all `matchCriteriaId` values +3. For each, call the Match Criteria API +4. 
Collect the `matches[].cpeName` values + +## Official sources + +- CPE Match Criteria API: https://nvd.nist.gov/developers/products +- CPE specification (NISTIR 7695): https://csrc.nist.gov/publications/detail/nistir/7695/final diff --git a/content/nist/docs/nvd-cve-api/DOC.md b/content/nist/docs/nvd-cve-api/DOC.md new file mode 100644 index 00000000..8b64a05d --- /dev/null +++ b/content/nist/docs/nvd-cve-api/DOC.md @@ -0,0 +1,215 @@ +--- +name: nvd-cve-api +description: "NVD CVE API v2.0 — query vulnerabilities by CVE ID, keyword, CVSS score, date ranges, and CWE. Pagination, rate limits, and incremental sync." +metadata: + languages: "http" + versions: "2.0" + revision: 1 + updated-on: "2026-03-22" + source: community + tags: "nist,nvd,cve,vulnerability,security,api" +--- + +# NVD CVE API v2.0 + +The NVD (National Vulnerability Database) CVE API lets you retrieve vulnerability data from NIST's collection of 339,000+ CVE records. It is a JSON REST API using HTTP GET requests with URL query parameters. + +> "This product uses data from the NVD API but is not endorsed or certified by the NVD." + +## Base URL + +``` +https://services.nvd.nist.gov/rest/json/cves/2.0 +``` + +## API key and rate limits + +| | Without API key | With API key | +|---|---|---| +| **Rate limit** | 5 requests / rolling 30 seconds | 50 requests / rolling 30 seconds | +| **Recommended sleep** | 6 seconds between requests | 0.6 seconds between requests | + +**Pass the key in a request header**, not a query parameter: + +``` +apiKey:YOUR_KEY_HERE +``` + +Request a free key at: https://nvd.nist.gov/developers/request-an-api-key + +**Important:** The key value is case-sensitive. 
The header format has no spaces: `apiKey:abc123...` + +## Pagination + +The API uses offset-based pagination: + +| Parameter | Description | Default | +|-----------|-------------|---------| +| `startIndex` | 0-based offset into results | 0 | +| `resultsPerPage` | Number of results per page | 2000 | + +The response includes: +- `totalResults` — total matching CVEs +- `startIndex` — current offset +- `resultsPerPage` — page size used + +### Pagination loop pattern + +```python +import requests, time + +base = "https://services.nvd.nist.gov/rest/json/cves/2.0" +headers = {"apiKey": "YOUR_KEY"} +start = 0 +all_cves = [] + +while True: + resp = requests.get(base, params={"startIndex": start}, headers=headers) + data = resp.json() + all_cves.extend(data["vulnerabilities"]) + + if start + data["resultsPerPage"] >= data["totalResults"]: + break + start += data["resultsPerPage"] + time.sleep(0.6) # respect rate limit with API key +``` + +## Most useful filters + +| Parameter | Description | Example | +|-----------|-------------|---------| +| `cveId` | Exact CVE ID | `?cveId=CVE-2023-44487` | +| `keywordSearch` | Full-text search in descriptions | `?keywordSearch=log4j` | +| `keywordExactMatch` | Require exact phrase match | `?keywordExactMatch&keywordSearch=remote code execution` | +| `cvssV3Severity` | CVSS v3 severity level | `?cvssV3Severity=CRITICAL` | +| `cweId` | Filter by CWE weakness type | `?cweId=CWE-79` | +| `cpeName` | CVEs affecting a specific product | `?cpeName=cpe:2.3:a:apache:http_server:2.4.49:*:*:*:*:*:*:*` | +| `pubStartDate` / `pubEndDate` | Published date range (ISO-8601) | `?pubStartDate=2024-01-01T00:00:00.000&pubEndDate=2024-12-31T23:59:59.999` | +| `lastModStartDate` / `lastModEndDate` | Last-modified date range | `?lastModStartDate=2024-06-01T00:00:00.000` | +| `isVulnerable` | Only CVEs with matching CPE configs | `?cpeName=...&isVulnerable` | +| `hasCertAlerts` | Has CERT/CC alert | `?hasCertAlerts` | +| `hasKev` | In CISA Known Exploited 
Vulnerabilities | `?hasKev` | +| `sourceIdentifier` | CVE source (e.g. CNA) | `?sourceIdentifier=cve@mitre.org` | + +All parameter names and values are **case insensitive**. + +## Minimal curl examples + +### Single CVE lookup +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=CVE-2023-44487" \ + -H "apiKey:YOUR_KEY" | jq '.vulnerabilities[0].cve' +``` + +### Search by keyword +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch=log4j&resultsPerPage=5" \ + -H "apiKey:YOUR_KEY" | jq '.vulnerabilities[].cve.id' +``` + +### Critical CVEs published this year +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?\ +pubStartDate=2025-01-01T00:00:00.000&\ +cvssV3Severity=CRITICAL&\ +resultsPerPage=20" \ + -H "apiKey:YOUR_KEY" | jq '.vulnerabilities[].cve | {id, description: .descriptions[0].value}' +``` + +### CVEs for a specific product (CPE) +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?\ +cpeName=cpe:2.3:a:apache:http_server:*:*:*:*:*:*:*:*&\ +resultsPerPage=10" \ + -H "apiKey:YOUR_KEY" +``` + +## Incremental sync pattern + +After initial population, use `lastModStartDate` / `lastModEndDate` to fetch only changes: + +1. Store the timestamp of your last successful sync. +2. Every 2+ hours, query with `lastModStartDate` = last sync time and `lastModEndDate` = now. +3. Update your local data with the results. +4. Store the new sync timestamp. + +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?\ +lastModStartDate=2025-03-20T00:00:00.000&\ +lastModEndDate=2025-03-22T00:00:00.000" \ + -H "apiKey:YOUR_KEY" +``` + +NVD recommends syncing no more than once every 2 hours. 
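The window query in step 2 is easy to get wrong because of the millisecond requirement. A small sketch building the two parameters (the helper name is ours):

```python
from datetime import datetime, timezone

ISO_MS = "%Y-%m-%dT%H:%M:%S.000"  # the API requires milliseconds

def lastmod_window(last_sync, now=None):
    """Build the lastModified date-range parameters for an incremental sync."""
    now = now or datetime.now(timezone.utc)
    return {
        "lastModStartDate": last_sync.strftime(ISO_MS),
        "lastModEndDate": now.strftime(ISO_MS),
    }

lastmod_window(datetime(2025, 3, 20, tzinfo=timezone.utc),
               datetime(2025, 3, 22, tzinfo=timezone.utc))
```

Passing the result as `params` to an HTTP client keeps the timestamps out of hand-built URL strings.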
+ +## Response structure (key fields) + +```json +{ + "resultsPerPage": 1, + "startIndex": 0, + "totalResults": 1, + "vulnerabilities": [ + { + "cve": { + "id": "CVE-2023-44487", + "sourceIdentifier": "...", + "published": "2023-10-10T...", + "lastModified": "2024-...", + "descriptions": [ + { "lang": "en", "value": "..." } + ], + "metrics": { + "cvssMetricV31": [ + { + "cvssData": { + "baseScore": 7.5, + "baseSeverity": "HIGH", + "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H" + } + } + ] + }, + "weaknesses": [ + { "description": [{ "value": "CWE-400" }] } + ], + "configurations": [...], + "references": [ + { "url": "...", "source": "...", "tags": ["Exploit", "Third Party Advisory"] } + ] + } + } + ] +} +``` + +## Common pitfalls + +1. **Rate limiting.** Exceeding the rate limit triggers NIST firewall blocks. Always sleep between requests (6s public, 0.6s keyed). +2. **Date format.** Timestamps must be ISO-8601 with milliseconds: `2024-01-01T00:00:00.000`. Missing milliseconds may cause errors. +3. **Empty results ≠ error.** A 200 response with zero results means the filters matched nothing — check your parameters. +4. **CVSS version gaps.** CVEs before 2016 often lack `cvssMetricV31`. Check for `cvssMetricV2` as a fallback. +5. **Use default `resultsPerPage`.** NVD has optimized the default page size. Overriding to very large values may cause timeouts. +6. **API key in header only.** Do not pass the key as a URL parameter — it must be in the HTTP header as `apiKey:VALUE`. + +## CVE Change History API + +Separate endpoint for tracking when and why CVE records changed: + +``` +https://services.nvd.nist.gov/rest/json/cvehistory/2.0 +``` + +Useful for monitoring NVD updates to specific CVEs. Accepts `cveId`, `changeStartDate`, `changeEndDate`, `eventName` parameters. 
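Pitfall 4 above (CVSS version gaps) usually shows up as a missing `cvssMetricV31` key. A hedged sketch of a fallback reader that walks v3.1, then v3.0, then v2 — the helper name is ours, and the metric keys follow the response structure shown earlier:

```python
def best_base_score(cve):
    """Return (metric_key, baseScore) for the newest CVSS version present."""
    metrics = cve.get("metrics", {})
    for key in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2"):
        entries = metrics.get(key)
        if entries:
            return key, entries[0]["cvssData"]["baseScore"]
    return None, None  # no CVSS data at all (e.g. awaiting analysis)

# A pre-2016 record that only carries a v2 score:
best_base_score({"metrics": {"cvssMetricV2": [{"cvssData": {"baseScore": 5.0}}]}})
```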
+ +## Official sources + +- CVE API docs: https://nvd.nist.gov/developers/vulnerabilities +- Getting started + rate limits: https://nvd.nist.gov/developers/start-here +- API workflows / best practices: https://nvd.nist.gov/developers/api-workflows +- Request an API key: https://nvd.nist.gov/developers/request-an-api-key + +## Reference files + +- `references/pagination-and-rate-limits.md` — Detailed pagination patterns, rate-limit handling, API key management, and incremental sync workflows +- `references/cve-change-history-api.md` — CVE Change History API parameters, response structure, and monitoring patterns diff --git a/content/nist/docs/nvd-cve-api/references/cve-change-history-api.md b/content/nist/docs/nvd-cve-api/references/cve-change-history-api.md new file mode 100644 index 00000000..3229d504 --- /dev/null +++ b/content/nist/docs/nvd-cve-api/references/cve-change-history-api.md @@ -0,0 +1,118 @@ +# CVE Change History API reference + +The CVE Change History API provides transparency into when and why NVD vulnerability records change. 
+ +## Base URL + +``` +https://services.nvd.nist.gov/rest/json/cvehistory/2.0 +``` + +## Parameters + +| Parameter | Description | Example | +|-----------|-------------|---------| +| `cveId` | Filter by specific CVE ID | `?cveId=CVE-2023-44487` | +| `changeStartDate` | Start of change date range (ISO-8601) | `?changeStartDate=2024-01-01T00:00:00.000` | +| `changeEndDate` | End of change date range (ISO-8601) | `?changeEndDate=2024-12-31T23:59:59.999` | +| `eventName` | Filter by type of change | `?eventName=CVE Modified` | +| `startIndex` | Pagination offset (0-based) | `?startIndex=0` | +| `resultsPerPage` | Page size | `?resultsPerPage=100` | + +### Event name values + +| Event | Description | +|-------|-------------| +| `Initial Analysis` | CVE first analyzed by NVD | +| `Reanalysis` | NVD re-analyzed the CVE | +| `CVE Modified` | Source (CNA) modified the CVE | +| `Modified Analysis` | NVD updated its analysis | +| `CVE Received` | CVE first received by NVD | +| `CVE Rejected` | CVE marked as rejected | +| `CVE Translated` | CNA-provided description was translated | + +## Response structure + +```json +{ + "resultsPerPage": 1, + "startIndex": 0, + "totalResults": 1, + "cveChanges": [ + { + "change": { + "cveId": "CVE-2023-44487", + "eventName": "CVE Modified", + "cveChangeId": "...", + "sourceIdentifier": "...", + "created": "2024-03-15T12:00:00.000", + "details": [ + { + "action": "Changed", + "type": "Reference", + "oldValue": "...", + "newValue": "..." 
+ } + ] + } + } + ] +} +``` + +### Detail action types + +| Action | Meaning | +|--------|---------| +| `Added` | New data added to the CVE | +| `Changed` | Existing data modified | +| `Removed` | Data removed from the CVE | + +### Detail types (common) + +- `CVSS V3.1` — CVSS score changed +- `CWE` — Weakness classification changed +- `CPE Configuration` — Affected product list changed +- `Reference` — External references added/changed +- `Description` — CVE description updated + +## Example queries + +### Check history of a specific CVE +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cvehistory/2.0?cveId=CVE-2023-44487" \ + -H "apiKey:YOUR_KEY" | jq '.cveChanges[].change | {eventName, created}' +``` + +### Changes in a date range +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cvehistory/2.0?\ +changeStartDate=2025-03-01T00:00:00.000&\ +changeEndDate=2025-03-22T00:00:00.000&\ +resultsPerPage=10" \ + -H "apiKey:YOUR_KEY" +``` + +### Monitor for re-analyses +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cvehistory/2.0?\ +changeStartDate=2025-03-01T00:00:00.000&\ +changeEndDate=2025-03-22T00:00:00.000&\ +eventName=Reanalysis" \ + -H "apiKey:YOUR_KEY" +``` + +## Use cases + +1. **Change monitoring** — Detect when CVSS scores change for CVEs in your inventory +2. **Compliance audit trail** — Track when NVD updated its analysis of CVEs relevant to your environment +3. **Alert on CVSS upgrades** — Watch for CVEs that get re-scored to CRITICAL +4. **Rejection monitoring** — Detect when CVEs you track get rejected + +## Pagination notes + +Same pagination model as the CVE API — use `startIndex` and `resultsPerPage` with the same rate limits. 
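Use case 1 (change monitoring) reduces to scanning the `details` array of each change record. A sketch, assuming records already parsed from the response structure above (the helper name is ours):

```python
def cvss_modifications(cve_changes):
    """Collect CVSS-related detail entries from parsed cveChanges records."""
    hits = []
    for item in cve_changes:
        change = item["change"]
        for detail in change.get("details", []):
            # Detail types for score changes start with "CVSS" (e.g. "CVSS V3.1").
            if detail.get("type", "").startswith("CVSS"):
                hits.append((change["cveId"], detail["action"],
                             detail.get("newValue")))
    return hits

records = [{"change": {"cveId": "CVE-2023-44487", "details": [
    {"action": "Changed", "type": "CVSS V3.1", "newValue": "7.5"}]}}]
cvss_modifications(records)
```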
+ +## Official sources + +- CVE API docs (includes Change History): https://nvd.nist.gov/developers/vulnerabilities diff --git a/content/nist/docs/nvd-cve-api/references/pagination-and-rate-limits.md b/content/nist/docs/nvd-cve-api/references/pagination-and-rate-limits.md new file mode 100644 index 00000000..c2871d02 --- /dev/null +++ b/content/nist/docs/nvd-cve-api/references/pagination-and-rate-limits.md @@ -0,0 +1,169 @@ +# Pagination and rate limits reference + +Detailed guide to NVD API pagination, rate-limit handling, API key management, and incremental sync workflows. + +## Rate limits + +| Tier | Limit | Recommended sleep | +|------|-------|-------------------| +| **Public** (no API key) | 5 requests / rolling 30-second window | 6 seconds between requests | +| **API key** | 50 requests / rolling 30-second window | 0.6 seconds between requests | + +If you exceed the rate limit, NIST firewall rules will block your IP. There is no retry-after header — you must implement conservative throttling. 
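Since there is no Retry-After header, throttling has to be proactive on the client side. A sketch of a rolling-window limiter (all names are ours); the clock and sleep functions are injectable so the logic can be exercised without real waiting:

```python
import time
from collections import deque

class RollingWindowThrottle:
    """Cap outgoing requests at `limit` per rolling `window` seconds."""

    def __init__(self, limit=5, window=30.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.limit, self.window = limit, window
        self.clock, self.sleep = clock, sleep
        self.sent = deque()  # timestamps of recent requests

    def wait(self):
        """Block until one more request may be sent, then record it."""
        now = self.clock()
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()  # drop entries that left the window
        if len(self.sent) >= self.limit:
            # Sleep until the oldest in-window request ages out.
            self.sleep(self.window - (now - self.sent[0]))
            now = self.clock()
        self.sent.append(now)
```

Usage: call `throttle.wait()` immediately before each request; with an API key, construct it as `RollingWindowThrottle(limit=50, window=30.0)`.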
+ +### API key header format + +``` +apiKey:YOUR_KEY_VALUE_HERE +``` + +- No spaces around the colon +- Key value is **case-sensitive** +- Do not pass as a URL query parameter + +### curl example with API key +```bash +curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=CVE-2023-44487" \ + -H "apiKey:abc123def456" +``` + +### Python example with API key +```python +import requests + +headers = {"apiKey": "abc123def456"} +resp = requests.get( + "https://services.nvd.nist.gov/rest/json/cves/2.0", + params={"cveId": "CVE-2023-44487"}, + headers=headers +) +``` + +## Pagination mechanics + +All NVD collection endpoints use offset-based pagination: + +### Request parameters + +| Parameter | Type | Default | Description | +|-----------|------|---------|-------------| +| `startIndex` | integer | 0 | 0-based offset into the result set | +| `resultsPerPage` | integer | 2000 | Number of results per response | + +### Response fields + +| Field | Description | +|-------|-------------| +| `totalResults` | Total number of matching records | +| `startIndex` | The offset used in this response | +| `resultsPerPage` | The page size used in this response | + +### Complete pagination pattern + +```python +import requests +import time + +BASE_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0" +HEADERS = {"apiKey": "YOUR_KEY"} +SLEEP_SECONDS = 0.6 # with API key; use 6.0 without + +def fetch_all_cves(params=None): + """Fetch all CVEs matching the given parameters.""" + if params is None: + params = {} + + all_vulnerabilities = [] + start_index = 0 + + while True: + params["startIndex"] = start_index + response = requests.get(BASE_URL, params=params, headers=HEADERS) + response.raise_for_status() + data = response.json() + + vulnerabilities = data.get("vulnerabilities", []) + all_vulnerabilities.extend(vulnerabilities) + + total = data["totalResults"] + returned = data["resultsPerPage"] + + if start_index + returned >= total: + break + + start_index += returned + 
time.sleep(SLEEP_SECONDS) + + return all_vulnerabilities +``` + +### Tips + +- **Use the default `resultsPerPage`** (2000). NVD has optimized this value for response time. +- **Do not parallelize requests.** Process pages sequentially with sleep between them. +- **Check `totalResults` changes.** If the NVD is updated mid-pagination, totals may shift. Re-run if the final count doesn't match expectations. + +## Incremental sync workflow + +After initial data population, use modified-date filtering to maintain a local mirror efficiently. + +### Initial population + +1. Start with `startIndex=0`, no date filters. +2. Page through all results using the loop above. +3. Record the current UTC timestamp as `last_sync_time`. + +### Incremental updates + +1. No more than once every **2 hours**, query: + ``` + ?lastModStartDate={last_sync_time}&lastModEndDate={now} + ``` +2. Page through results. +3. Upsert each CVE into your local store. +4. Update `last_sync_time` to the current UTC timestamp. + +### Date parameter format + +All date parameters use ISO-8601 with **milliseconds** and UTC: + +``` +2025-03-22T00:00:00.000 +``` + +- Both `Start` and `End` dates are required together +- Maximum range: 120 days between start and end dates + +### Example incremental sync +```python +from datetime import datetime, timedelta, timezone + +def incremental_sync(last_sync_time): + now = datetime.now(timezone.utc) + params = { + "lastModStartDate": last_sync_time.strftime("%Y-%m-%dT%H:%M:%S.000"), + "lastModEndDate": now.strftime("%Y-%m-%dT%H:%M:%S.000"), + } + cves = fetch_all_cves(params) + # upsert cves into local store + return now # new last_sync_time +``` + +## Error handling + +| Status | Meaning | Action | +|--------|---------|--------| +| 200 | Success (may have 0 results) | Check `totalResults`; 0 means filters matched nothing | +| 403 | Rate limited / blocked | Stop requests, wait 30+ seconds, reduce request frequency | +| 404 | Invalid endpoint | Check base URL | +| 500 | 
Server error | Retry with exponential backoff | + +### Debugging 4xx errors + +Check the response header for a `message` field — NVD provides additional context about what went wrong (missing parameters, invalid values, etc.). + +## Official sources + +- Getting started: https://nvd.nist.gov/developers/start-here +- API workflow best practices: https://nvd.nist.gov/developers/api-workflows +- Request an API key: https://nvd.nist.gov/developers/request-an-api-key diff --git a/content/osquery/docs/osquery/DOC.md b/content/osquery/docs/osquery/DOC.md new file mode 100644 index 00000000..0ccb3c51 --- /dev/null +++ b/content/osquery/docs/osquery/DOC.md @@ -0,0 +1,204 @@ +--- +name: osquery +description: "osquery SQL-based operating system instrumentation framework — tables, schema, osqueryi shell, and query patterns for host monitoring and security." +metadata: + languages: "sql" + versions: "5.13.1" + revision: 1 + updated-on: "2026-03-22" + source: community + tags: "osquery,sql,security,monitoring,endpoint,host,os,tables" +--- + +# osquery for agents + +osquery exposes an operating system as a high-performance relational database. You write SQL queries against virtual tables that represent processes, network connections, file hashes, installed packages, and hundreds of other OS concepts. Data is returned in real time from OS APIs. + +## Golden rules + +- **SELECT only.** INSERT / UPDATE / DELETE exist but are no-ops (unless you create runtime views or use extensions). +- Use `osqueryi --json` for machine-readable output: `osqueryi --json "SELECT pid, name FROM processes LIMIT 5;"` +- Always run `.tables` and `.schema ` first to discover available columns before writing a query. +- Some tables (e.g. `file`, `hash`) **require a WHERE predicate** on specific columns (marked with a pushpin icon in the schema docs). `SELECT * FROM file` returns nothing — you must constrain with `path` or `directory`. 
+- Run as root/admin when possible — many tables return fewer results without elevated privileges. +- Use `--enable_foreign` flag to see tables for platforms other than the current host. + +## osqueryi shell + +`osqueryi` is the interactive query console. It is completely standalone — no daemon, no network. + +```bash +# Launch +osqueryi + +# Launch with JSON output +osqueryi --json "SELECT * FROM routes WHERE destination = '::1'" + +# Pipe a query via stdin +echo "SELECT * FROM os_version;" | osqueryi --json +``` + +### Key meta-commands + +| Command | Description | +|---------|-------------| +| `.tables` | List all available tables | +| `.schema
<table>` | Show CREATE TABLE for a table |
+| `PRAGMA table_info(<table>
);` | Show columns with types and constraints |
+| `.mode line` | One key=value per line (easier reading) |
+| `.mode pretty` | Default columnar output |
+| `.all <table>
` | SELECT * from a table | +| `.help` | List all meta-commands | +| `.exit` or Ctrl-D | Exit the shell | + +## High-value tables + +### System info +| Table | What it returns | +|-------|----------------| +| `os_version` | OS name, version, build, platform | +| `system_info` | Hostname, CPU, RAM, hardware model | +| `uptime` | Days, hours, minutes since boot | +| `kernel_info` | Kernel version and arguments | +| `osquery_info` | osquery PID, version, config status | + +### Processes and sockets +| Table | What it returns | +|-------|----------------| +| `processes` | PID, name, path, cmdline, uid, start_time, memory | +| `listening_ports` | Ports bound on 0.0.0.0 or specific addresses | +| `process_open_sockets` | Per-process socket details (local/remote addr, port, state) | +| `process_open_files` | Per-process open file descriptors | + +### Files and hashes +| Table | Notes | +|-------|-------| +| `file` | Stat a file or directory — **requires** `path` or `directory` in WHERE | +| `hash` | SHA256/SHA1/MD5 — **requires** `path` in WHERE | + +### Users and auth +| Table | What it returns | +|-------|----------------| +| `users` | Local user accounts | +| `logged_in_users` | Currently logged-in sessions | +| `last` | Login history | + +### Packages and software +| Table | Platform | +|-------|----------| +| `deb_packages` | Debian/Ubuntu | +| `rpm_packages` | RHEL/CentOS | +| `homebrew_packages` | macOS | +| `npm_packages` | Node.js | +| `python_packages` | Python | +| `chrome_extensions` | Cross-platform | +| `firefox_addons` | Cross-platform | +| `apps` | macOS applications | +| `programs` | Windows installed programs | + +### Network +| Table | What it returns | +|-------|----------------| +| `interface_addresses` | IP addresses per interface | +| `interface_details` | MTU, MAC, flags per interface | +| `routes` | Routing table | +| `arp_cache` | ARP table | +| `dns_resolvers` | Configured DNS servers | + +### Security (platform-specific) +| Table | Platform | 
+|-------|----------| +| `alf` / `alf_services` | macOS Application Layer Firewall | +| `iptables` | Linux firewall rules | +| `suid_bin` | Linux SUID/SGID binaries | +| `kernel_modules` | Linux loaded kernel modules | +| `kernel_extensions` | macOS kernel extensions | +| `certificates` | Keychain / cert store | +| `yara_events` / `yara` | YARA rule matching | + +## Common query patterns + +### List listening services +```sql +SELECT DISTINCT p.name, l.port, p.pid +FROM processes AS p +JOIN listening_ports AS l ON p.pid = l.pid +WHERE l.address = '0.0.0.0'; +``` + +### Find recently started processes +```sql +SELECT pid, name, path, cmdline, start_time +FROM processes +ORDER BY start_time DESC +LIMIT 20; +``` + +### Hash a specific file +```sql +SELECT path, sha256 +FROM hash +WHERE path = '/usr/bin/ssh'; +``` + +### List files in a directory with hashes +```sql +SELECT f.path, f.size, f.mtime, h.sha256 +FROM file AS f +JOIN hash AS h ON f.path = h.path +WHERE f.directory = '/etc' +ORDER BY f.mtime DESC; +``` + +### Find SUID binaries (Linux) +```sql +SELECT path, username, permissions +FROM suid_bin; +``` + +### Installed packages with version +```sql +-- Debian/Ubuntu +SELECT name, version, source FROM deb_packages ORDER BY name; + +-- RHEL/CentOS +SELECT name, version, release FROM rpm_packages ORDER BY name; +``` + +### Check osquery version and status +```sql +SELECT pid, version, config_valid, extensions, build_platform +FROM osquery_info; +``` + +## SQL extensions + +osquery adds functions beyond standard SQLite: + +- **Hashing:** `sha1(col)`, `sha256(col)`, `md5(col)` +- **String:** `split(col, tokens, index)`, `regex_match(col, pattern, index)`, `concat(...)`, `concat_ws(sep, ...)` +- **Network:** `in_cidr_block(cidr, ip)`, `community_id_v1(src, dst, sport, dport, proto)` +- **Encoding:** `to_base64(col)`, `from_base64(col)` +- **Math:** `sqrt`, `log`, `ceil`, `floor`, `power`, `pi` + +## Common pitfalls + +1. 
**Forgetting required WHERE predicates.** Tables like `file`, `hash`, `yara` produce no results without constrained columns. Check the schema docs for the pushpin icon. +2. **Running without root.** Many tables return partial data as a non-root user. +3. **Event tables need `--disable_events=false`.** Event-based tables (FIM, process auditing) are disabled by default in the shell. +4. **`osqueryi` uses an in-memory DB.** To access event history, connect to the daemon's database: `--database_path=/var/osquery/osquery.db` (only one process may attach at a time). +5. **Platform-specific tables.** Not all tables exist on all platforms. Use `.tables` to see what is available on the current host. + +## Official sources + +- osquery documentation: https://osquery.readthedocs.io/ +- Full schema (all platforms): https://osquery.io/schema/ +- osqueryi shell usage: https://osquery.readthedocs.io/en/stable/introduction/using-osqueryi/ +- SQL introduction: https://osquery.readthedocs.io/en/stable/introduction/sql/ +- GitHub: https://github.com/osquery/osquery + +## Reference files + +- `references/schema-and-tables.md` — Platform-specific high-value tables, common joins, and table argument patterns +- `references/osqueryi-usage.md` — Shell modes, flags, piping, and output formatting +- `references/query-patterns.md` — Security-focused query recipes for incident response and compliance diff --git a/content/osquery/docs/osquery/references/osqueryi-usage.md b/content/osquery/docs/osquery/references/osqueryi-usage.md new file mode 100644 index 00000000..28d8a0d1 --- /dev/null +++ b/content/osquery/docs/osquery/references/osqueryi-usage.md @@ -0,0 +1,115 @@ +# osqueryi usage reference + +Detailed reference for osqueryi shell modes, flags, piping, and output formatting. 
+
+## Invocation
+
+```bash
+# Interactive mode
+osqueryi
+
+# Single query with default output
+osqueryi "SELECT * FROM os_version;"
+
+# JSON output (best for machine consumption)
+osqueryi --json "SELECT * FROM os_version;"
+
+# CSV output
+osqueryi --csv "SELECT * FROM os_version;"
+
+# Pipe query via stdin
+echo "SELECT name, version FROM os_version;" | osqueryi --json
+
+# With a specific database (to read event tables)
+osqueryi --database_path=/var/osquery/osquery.db
+
+# Enable foreign platform tables
+osqueryi --enable_foreign
+```
+
+## Output modes
+
+Set interactively with `.mode <mode>` or via flags:
+
+| Mode | Flag | Description |
+|------|------|-------------|
+| `pretty` | (default) | Column-aligned ASCII table |
+| `line` | | One `key = value` per line, blank line between rows |
+| `csv` | `--csv` | Comma-separated output |
+| `json` | `--json` | JSON array of objects |
+| `column` | | Padded columns without borders |
+| `list` | | Pipe-separated values |
+
+**Recommendation:** Use `--json` for all programmatic processing.
+
+## Useful flags
+
+| Flag | Description |
+|------|-------------|
+| `--json` | Output JSON |
+| `--csv` | Output CSV |
+| `--header=false` | Suppress column headers (list/csv modes) |
+| `--separator=\|` | Change list-mode separator |
+| `--database_path=PATH` | Attach to osqueryd's RocksDB for event data |
+| `--disable_events=false` | Enable event publisher tables |
+| `--enable_foreign` | Show tables for other platforms |
+| `--config_path=PATH` | Load a config file for packs/decorators |
+| `-S` or `-A` | When invoking `osqueryd`, act as interactive shell |
+| `--verbose` | Enable verbose logging |
+
+## Meta-commands
+
+| Command | Description |
+|---------|-------------|
+| `.help` | List all meta-commands |
+| `.tables` | List all tables |
+| `.tables <string>` | List tables matching a partial string |
+| `.schema 
<table>` | Show CREATE TABLE statement |
+| `.all <table>
` | SELECT * FROM table |
+| `.mode <mode>` | Set output mode |
+| `.separator <string>` | Set separator for list mode |
+| `.headers on\|off` | Toggle column headers |
+| `.bail on\|off` | Stop after hitting an error |
+| `.echo on\|off` | Echo commands before execution |
+| `.connect <path>` | Connect to an osquery extension socket |
+| `.disconnect` | Disconnect from extension |
+| `.exit` / Ctrl-D | Exit the shell |
+
+## Scripting patterns
+
+### Run multiple queries from a file
+```bash
+osqueryi < queries.sql
+```
+
+### JSON output piped to jq
+```bash
+osqueryi --json "SELECT name, pid FROM processes WHERE name = 'sshd';" | jq '.[].pid'
+```
+
+### Combining with shell tools
+```bash
+# Count listening ports
+osqueryi --json "SELECT port FROM listening_ports WHERE address='0.0.0.0';" | jq length
+
+# Extract specific fields
+osqueryi --json "SELECT name, version FROM deb_packages;" | jq -r '.[] | "\(.name)=\(.version)"'
+```
+
+### Checking if a table exists
+```bash
+osqueryi --json "SELECT name FROM osquery_registry WHERE registry='table' AND name='iptables';"
+```
+
+## Key behavior notes
+
+- `osqueryi` is an **in-memory** virtual database by default. It does not connect to `osqueryd`.
+- Event-based tables (`process_events`, `file_events`, `socket_events`) are empty unless using `--disable_events=false` or connecting to the daemon's database with `--database_path`.
+- Only one process can attach to a RocksDB database at a time. If `osqueryd` is running, you cannot also attach with `--database_path`.
+- The shell supports standard SQLite syntax with osquery extensions. See SQL reference: https://osquery.readthedocs.io/en/stable/introduction/sql/
+- Running as non-root returns fewer rows for many tables (processes, sockets, etc.). 
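The scripting patterns above can also be driven from Python instead of jq. A minimal sketch assuming `osqueryi` is on `PATH`; the helper names are illustrative, not part of osquery.

```python
import json
import subprocess

def build_osqueryi_cmd(sql, database_path=None):
    """Build an osqueryi argv list; list form avoids shell-quoting issues."""
    cmd = ["osqueryi", "--json"]
    if database_path:
        cmd.append("--database_path=" + database_path)
    cmd.append(sql)
    return cmd

def run_query(sql, database_path=None):
    """Run a query and return rows as a list of dicts (requires osqueryi)."""
    proc = subprocess.run(
        build_osqueryi_cmd(sql, database_path),
        capture_output=True, text=True, check=True,
    )
    # osqueryi --json prints a JSON array of row objects on stdout
    return json.loads(proc.stdout)
```

Usage: `run_query("SELECT pid, name FROM processes LIMIT 5;")`.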
+ +## Official sources + +- osqueryi shell usage: https://osquery.readthedocs.io/en/stable/introduction/using-osqueryi/ +- CLI flags reference: https://osquery.readthedocs.io/en/stable/installation/cli-flags/ diff --git a/content/osquery/docs/osquery/references/query-patterns.md b/content/osquery/docs/osquery/references/query-patterns.md new file mode 100644 index 00000000..75d4f913 --- /dev/null +++ b/content/osquery/docs/osquery/references/query-patterns.md @@ -0,0 +1,212 @@ +# Security-focused query patterns + +Practical query recipes for incident response, compliance auditing, and threat hunting with osquery. + +## Incident response + +### Suspicious processes +```sql +-- Processes running from /tmp or unusual locations +SELECT pid, name, path, cmdline, uid, start_time +FROM processes +WHERE path LIKE '/tmp/%' + OR path LIKE '/dev/shm/%' + OR path LIKE '/var/tmp/%' +ORDER BY start_time DESC; + +-- Processes where the binary is deleted from disk +SELECT pid, name, path, cmdline +FROM processes +WHERE on_disk = 0; + +-- Processes with suspicious command-line patterns +SELECT pid, name, path, cmdline +FROM processes +WHERE cmdline LIKE '%base64%' + OR cmdline LIKE '%curl%|%sh%' + OR cmdline LIKE '%wget%|%bash%' + OR cmdline LIKE '%-e /bin/sh%'; +``` + +### Network investigation +```sql +-- Established outbound connections +SELECT p.pid, p.name, p.path, + pos.remote_address, pos.remote_port, pos.local_port +FROM process_open_sockets pos +JOIN processes p ON pos.pid = p.pid +WHERE pos.remote_address != '' + AND pos.remote_address != '0.0.0.0' + AND pos.remote_address != '::'; + +-- Processes listening on non-standard ports +SELECT l.port, l.address, l.protocol, p.name, p.pid, p.path +FROM listening_ports l +JOIN processes p ON l.pid = p.pid +WHERE l.port NOT IN (22, 80, 443, 53, 8080, 8443, 3306, 5432) +ORDER BY l.port; + +-- DNS resolver configuration (detect tampering) +SELECT * FROM dns_resolvers; +``` + +### File integrity +```sql +-- Recently modified files in 
sensitive directories +SELECT path, mtime, size, uid +FROM file +WHERE directory = '/etc' + AND mtime > (SELECT unix_timestamp FROM time) - 86400 +ORDER BY mtime DESC; + +-- Hash critical binaries +SELECT h.path, h.sha256, f.mtime, f.size +FROM hash h +JOIN file f ON h.path = f.path +WHERE h.path IN ('/usr/bin/ssh', '/usr/bin/sudo', '/usr/bin/passwd', '/bin/login'); + +-- World-writable files in system directories +SELECT path, mode, uid, gid +FROM file +WHERE directory = '/usr/local/bin' + AND mode LIKE '%7' + AND type = 'regular'; +``` + +### User activity +```sql +-- Currently logged-in users +SELECT user, tty, host, time, pid +FROM logged_in_users +WHERE type = 'user'; + +-- Recent logins +SELECT username, tty, host, time +FROM last +ORDER BY time DESC +LIMIT 50; + +-- Shell history (run as each user or as root) +SELECT uid, command, history_file +FROM shell_history +WHERE command LIKE '%passwd%' + OR command LIKE '%shadow%' + OR command LIKE '%ssh-keygen%' + OR command LIKE '%authorized_keys%'; +``` + +## Compliance auditing + +### CIS Benchmark-style checks +```sql +-- SSH configuration via augeas +SELECT label, value +FROM augeas +WHERE path = '/etc/ssh/sshd_config' + AND label IN ('PermitRootLogin', 'PasswordAuthentication', 'Protocol', 'MaxAuthTries'); + +-- Verify disk encryption (macOS FileVault) +SELECT name, uuid, encrypted +FROM disk_encryption; + +-- Check firewall status (macOS ALF) +SELECT global_state, stealth_enabled, logging_enabled +FROM alf; + +-- Verify no duplicate UIDs +SELECT uid, COUNT(*) as cnt +FROM users +GROUP BY uid +HAVING cnt > 1; + +-- List SUID/SGID binaries +SELECT path, username, groupname, permissions +FROM suid_bin +ORDER BY path; +``` + +### Software inventory and vulnerability surface +```sql +-- All installed packages with versions (Debian) +SELECT name, version, source, maintainer +FROM deb_packages +ORDER BY name; + +-- Python packages (potential supply-chain risk) +SELECT name, version, directory +FROM python_packages 
+ORDER BY name; + +-- Browser extensions (Chrome) +SELECT identifier, name, version, author, path, profile +FROM chrome_extensions; + +-- Kernel modules loaded +SELECT name, size, used_by, status +FROM kernel_modules +WHERE status = 'Live'; +``` + +## Threat hunting patterns + +### Persistence mechanisms +```sql +-- Cron jobs +SELECT command, path, minute, hour, day_of_month +FROM crontab; + +-- Startup items (macOS launchd) +SELECT name, program, program_arguments, run_at_load, keep_alive +FROM launchd +WHERE run_at_load = '1'; + +-- Startup items (Linux systemd) +SELECT id, description, load_state, active_state, sub_state, path +FROM systemd_units +WHERE active_state = 'active' + AND sub_state = 'running'; +``` + +### Lateral movement indicators +```sql +-- Authorized SSH keys +SELECT uid, key, key_file +FROM authorized_keys; + +-- Known hosts entries +SELECT uid, key, key_file +FROM known_hosts; + +-- Recent SSH sessions +SELECT user, host, time +FROM logged_in_users +WHERE host != '' + AND host != 'localhost' +ORDER BY time DESC; +``` + +### Process tree analysis +```sql +-- Process parent-child relationships +SELECT p.pid, p.name, p.path, p.parent, + pp.name AS parent_name, pp.path AS parent_path +FROM processes p +LEFT JOIN processes pp ON p.parent = pp.pid +WHERE p.name IN ('sh', 'bash', 'python', 'perl', 'ruby', 'nc', 'ncat') +ORDER BY p.start_time DESC; +``` + +## Performance tips + +1. **Always use WHERE clauses** to avoid full table scans on large tables. +2. **LIMIT results** when exploring — `LIMIT 20` prevents overwhelming output. +3. **Avoid wildcards on constrained tables** — `SELECT * FROM file WHERE directory = '/'` scans only one directory level, but wide scans are still expensive. +4. **Use `--json` with `jq`** for post-processing instead of complex SQL when possible. +5. **Schedule heavy queries in osqueryd** rather than running them interactively in production. 
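Following tip 5, a minimal `osquery.conf` fragment that moves one of the heavier recipes into the daemon's schedule; the query name and daily interval here are illustrative.

```json
{
  "schedule": {
    "suid_bin_inventory": {
      "query": "SELECT path, username, permissions FROM suid_bin;",
      "interval": 86400
    }
  }
}
```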
+ +## Official sources + +- FIM documentation: https://osquery.readthedocs.io/en/stable/deployment/file-integrity-monitoring/ +- Process auditing: https://osquery.readthedocs.io/en/stable/deployment/process-auditing/ +- YARA scanning: https://osquery.readthedocs.io/en/stable/deployment/yara/ +- Performance safety: https://osquery.readthedocs.io/en/stable/deployment/performance-safety/ diff --git a/content/osquery/docs/osquery/references/schema-and-tables.md b/content/osquery/docs/osquery/references/schema-and-tables.md new file mode 100644 index 00000000..2b9cc0fe --- /dev/null +++ b/content/osquery/docs/osquery/references/schema-and-tables.md @@ -0,0 +1,145 @@ +# Schema and tables reference + +Detailed reference for osquery's table model, platform-specific tables, table arguments, and common join patterns. + +> Full interactive schema: https://osquery.io/schema/ + +## Table categories + +osquery tables fall into several categories: + +| Category | Description | Examples | +|----------|-------------|----------| +| **Cross-platform** | Available on macOS, Linux, Windows | `processes`, `users`, `routes`, `interface_addresses`, `os_version` | +| **Platform-specific** | Only on one or two OSes | `iptables` (Linux), `kernel_extensions` (macOS), `programs` (Windows) | +| **Event-based** | Buffered event streams, require `--disable_events=false` | `process_events`, `file_events`, `socket_events`, `yara_events` | +| **Utility / Action** | Require input columns (pushpin tables) | `file`, `hash`, `yara`, `curl`, `augeas` | +| **Meta** | Info about osquery itself | `osquery_info`, `osquery_schedule`, `osquery_packs`, `osquery_extensions` | + +## Tables requiring WHERE predicates (pushpin tables) + +These tables need at least one constrained column to produce results. Check the schema docs for columns marked with a pushpin icon. 
+ +| Table | Required column(s) | Example | +|-------|-------------------|---------| +| `file` | `path` or `directory` | `SELECT * FROM file WHERE directory = '/etc';` | +| `hash` | `path` or `directory` | `SELECT sha256 FROM hash WHERE path = '/bin/ls';` | +| `yara` | `path` and `sigfile` or `sig_group` | `SELECT * FROM yara WHERE path = '/tmp/suspicious' AND sigfile = '/rules/malware.yar';` | +| `augeas` | `path` or `node` | `SELECT * FROM augeas WHERE path = '/etc/ssh/sshd_config';` | +| `curl` | `url` | `SELECT result FROM curl WHERE url = 'https://example.com/health';` | +| `plist` | `path` | `SELECT * FROM plist WHERE path = '/Library/Preferences/...';` | + +## High-value tables by use case + +### Incident response + +| Table | Use | +|-------|-----| +| `processes` | Running processes with cmdline, path, start_time | +| `process_open_sockets` | Network connections per process | +| `process_open_files` | Open file handles per process | +| `listening_ports` | Services listening on network ports | +| `logged_in_users` | Active sessions | +| `last` | Login history | +| `shell_history` | Shell command history (user-scoped) | +| `file` + `hash` | File metadata and hashes | +| `yara` | YARA rule matches | +| `process_events` | Process execution events (event table) | +| `socket_events` | Network connection events (event table) | +| `file_events` | FIM events (event table) | + +### Compliance and inventory + +| Table | Use | +|-------|-----| +| `os_version` | OS version compliance | +| `deb_packages` / `rpm_packages` / `programs` | Installed software inventory | +| `interface_addresses` | Network configuration | +| `users` / `groups` | Account inventory | +| `disk_encryption` | Encryption status (macOS FileVault, Linux LUKS) | +| `suid_bin` | SUID/SGID binaries (Linux) | +| `alf` / `iptables` | Firewall configuration | +| `certificates` | Certificate inventory | +| `browser_plugins` / `chrome_extensions` | Browser extension audit | + +### Performance monitoring + +| 
Table | Use | +|-------|-----| +| `cpu_time` | Per-CPU usage | +| `memory_info` | System memory breakdown | +| `processes` (resident_size, user_time, system_time) | Per-process resource usage | +| `mounts` / `disk_info` | Disk space and mount status | +| `load_average` | System load (Linux/macOS) | + +## Platform availability quick reference + +| Table | macOS | Linux | Windows | +|-------|:-----:|:-----:|:-------:| +| `processes` | Y | Y | Y | +| `users` | Y | Y | Y | +| `routes` | Y | Y | Y | +| `file` / `hash` | Y | Y | Y | +| `deb_packages` | - | Y | - | +| `rpm_packages` | - | Y | - | +| `homebrew_packages` | Y | Y | - | +| `programs` | - | - | Y | +| `iptables` | - | Y | - | +| `alf` | Y | - | - | +| `kernel_modules` | - | Y | - | +| `kernel_extensions` | Y | - | - | +| `suid_bin` | - | Y | - | +| `process_events` | Y | Y | Y | +| `file_events` | Y | Y | - | +| `yara` | Y | Y | Y | +| `certificates` | Y | - | Y | + +## Common JOIN patterns + +### Process with network connections +```sql +SELECT p.pid, p.name, p.path, pos.local_address, pos.local_port, + pos.remote_address, pos.remote_port, pos.protocol +FROM processes p +JOIN process_open_sockets pos ON p.pid = pos.pid +WHERE pos.remote_address != '' AND pos.remote_address != '0.0.0.0'; +``` + +### Process with open files +```sql +SELECT p.pid, p.name, pof.path AS open_file +FROM processes p +JOIN process_open_files pof ON p.pid = pof.pid +WHERE pof.path LIKE '/etc/%'; +``` + +### File metadata with hash +```sql +SELECT f.path, f.size, f.mtime, + h.sha256, h.md5 +FROM file f +JOIN hash h ON f.path = h.path +WHERE f.directory = '/usr/bin' +ORDER BY f.mtime DESC; +``` + +### Listening port to process mapping +```sql +SELECT l.port, l.address, l.protocol, p.name, p.pid, p.path +FROM listening_ports l +JOIN processes p ON l.pid = p.pid +ORDER BY l.port; +``` + +### Users with login history +```sql +SELECT u.username, u.uid, u.shell, l.tty, l.time AS last_login +FROM users u +LEFT JOIN last l ON u.username = 
l.username +ORDER BY l.time DESC; +``` + +## Official sources + +- Complete schema for all platforms: https://osquery.io/schema/ +- Creating new tables: https://osquery.readthedocs.io/en/stable/development/creating-tables/ diff --git a/content/owasp/docs/asvs/v5/DOC.md b/content/owasp/docs/asvs/v5/DOC.md new file mode 100644 index 00000000..7d265d7a --- /dev/null +++ b/content/owasp/docs/asvs/v5/DOC.md @@ -0,0 +1,170 @@ +--- +name: asvs +description: "OWASP Application Security Verification Standard (ASVS) v5.0.0 — requirement structure, referencing, mapping findings, and machine-readable assets." +metadata: + languages: "http" + versions: "5.0.0" + revision: 1 + updated-on: "2026-03-22" + source: community + tags: "owasp,asvs,security,verification,requirements,appsec" +--- + +# OWASP ASVS v5 for agents + +The OWASP Application Security Verification Standard (ASVS) provides a list of security requirements for web application design, development, and testing. It normalizes the range and rigor of application security verification into a commercially-workable open standard. + +## What ASVS is for + +- **As a metric** — Assess the degree of trust that can be placed in a web application's security +- **As guidance** — Tell developers what security controls to build +- **During procurement** — Specify application security verification requirements in contracts +- **During testing** — Map security findings to structured requirement IDs + +ASVS is **not** a penetration testing methodology. It is a requirements checklist that tells you _what_ to verify, not _how_ to test it. + +## Requirement structure + +ASVS v5.0.0 organizes requirements into chapters, sections, and individual requirements: + +``` +.
<section>.<requirement>
+```
+
+| Level | Example | Meaning |
+|-------|---------|---------|
+| Chapter | `1` | Encoding and Sanitization |
+| Section | `1.2` | Injection Prevention |
+| Requirement | `1.2.5` | Specific requirement within that section |
+
+### ASVS v5.0.0 chapters
+
+| # | Chapter |
+|---|---------|
+| 1 | Encoding and Sanitization |
+| 2 | Validation and Business Logic |
+| 3 | Authentication |
+| 4 | Session Management |
+| 5 | Access Control |
+| 6 | Cryptography |
+| 7 | Error Handling and Logging |
+| 8 | Data Protection |
+| 9 | Communication |
+| 10 | Configuration |
+| 11 | BOM (Bill of Materials) |
+| 12 | API and Web Services |
+| 13 | Files and Resources |
+| 14 | Self-contained Tokens |
+
+## How to reference requirements
+
+### Preferred format (version-pinned)
+
+```
+v<version>-<chapter>.
<section>.<requirement>
+```
+
+Examples:
+- `v5.0.0-1.2.5` — Requirement 1.2.5 from ASVS version 5.0.0
+- `v5.0.0-3.1.1` — Requirement 3.1.1 (Authentication chapter)
+
+**Rules:**
+- The `v` prefix is always lowercase
+- Always include the version to avoid ambiguity as the standard evolves
+- Without a version prefix, the identifier is assumed to refer to the latest version
+
+### Example requirement
+
+`v5.0.0-1.2.5`:
+> Verify that the application protects against OS command injection and that operating system calls use parameterized OS queries or use contextual command line output encoding.
+
+## How to map findings to ASVS
+
+When you have a security finding (from a scan, pen test, or code review), map it to ASVS like this:
+
+1. **Identify the vulnerability class** — e.g., SQL injection, XSS, weak password policy
+2. **Find the relevant ASVS chapter** — e.g., injection → Chapter 1 (Encoding and Sanitization)
+3. **Find the specific section** — e.g., SQL injection → Section 1.2 (Injection Prevention)
+4. **Match the specific requirement** — read the requirement text to confirm it covers your finding
+5. **Reference it** — use `v5.0.0-<chapter>.
<section>.<requirement>`
+
+### Common vulnerability → ASVS mapping
+
+| Vulnerability | ASVS Area |
+|--------------|-----------|
+| SQL Injection | Chapter 1 — Encoding and Sanitization |
+| XSS (Cross-Site Scripting) | Chapter 1 — Encoding and Sanitization |
+| Broken Authentication | Chapter 3 — Authentication |
+| Session Fixation | Chapter 4 — Session Management |
+| IDOR / Broken Access Control | Chapter 5 — Access Control |
+| Sensitive Data Exposure | Chapter 8 — Data Protection |
+| TLS/SSL Issues | Chapter 9 — Communication |
+| Security Misconfiguration | Chapter 10 — Configuration |
+| Insecure Deserialization | Chapter 1 — Encoding and Sanitization |
+| Insufficient Logging | Chapter 7 — Error Handling and Logging |
+
+## How to use machine-readable assets
+
+ASVS v5.0.0 is available in machine-readable formats for programmatic use:
+
+| Format | URL |
+|--------|-----|
+| **CSV** | https://github.com/OWASP/ASVS/raw/v5.0.0/5.0/docs_en/OWASP_Application_Security_Verification_Standard_5.0.0_en.csv |
+| **PDF** | https://github.com/OWASP/ASVS/raw/v5.0.0/5.0/OWASP_Application_Security_Verification_Standard_5.0.0_en.pdf |
+| **Word** | https://github.com/OWASP/ASVS/raw/v5.0.0/5.0/docs_en/OWASP_Application_Security_Verification_Standard_5.0.0_en.docx |
+| **GitHub (source)** | https://github.com/OWASP/ASVS/tree/v5.0.0/5.0 |
+
+### Using the CSV
+
+The CSV is ideal for building tooling around ASVS. 
It contains columns for: +- Requirement ID (chapter.section.requirement) +- Requirement text +- Level (L1, L2, L3) +- CWE mapping + +```python +import csv + +with open("OWASP_ASVS_5.0.0_en.csv") as f: + reader = csv.DictReader(f) + for row in reader: + print(f"{row['req_id']}: {row['req_description']}") +``` + +## Verification levels + +ASVS defines three verification levels: + +| Level | Use case | Rigor | +|-------|----------|-------| +| **L1** | All applications | Low-hanging fruit, automatable checks | +| **L2** | Applications handling sensitive data | Most applications should target this | +| **L3** | Critical applications (medical, military, financial) | Full verification, defense in depth | + +Each requirement specifies which levels it applies to. L1 is a subset of L2, which is a subset of L3. + +## Common pitfalls + +1. **Using requirements without version prefix.** Always use `v5.0.0-X.Y.Z` format — requirement IDs can change between versions. +2. **Treating ASVS as a pentest checklist.** ASVS defines _what_ to verify, not _how_ to test. Use OWASP Testing Guide for test procedures. +3. **Targeting L3 for all apps.** Not every application needs L3. Determine the appropriate level based on risk and data sensitivity. +4. **Ignoring the CWE mappings.** ASVS requirements map to CWE IDs, which in turn map to CVEs. Use these mappings to connect ASVS to vulnerability databases. +5. **Not updating when the standard changes.** ASVS v5.0.0 restructured chapters significantly from v4.x. Re-map your requirements when upgrading. 
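Building on the CWE-mapping point above, a sketch that filters the ASVS CSV by CWE. The `req_id`, `req_description`, and `cwe` column names follow the CSV structure described in the reference files and should be checked against the actual header row.

```python
import csv
import io

def requirements_for_cwe(csv_text, cwe_id):
    """Return (req_id, description) pairs whose CWE column mentions cwe_id.

    Assumes the req_id / req_description / cwe column names described
    in references/machine-readable-assets.md.
    """
    hits = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        cwes = [c.strip() for c in row.get("cwe", "").split(",") if c.strip()]
        if str(cwe_id) in cwes:
            hits.append((row["req_id"], row["req_description"]))
    return hits

# Hypothetical two-row sample in the documented shape:
sample = (
    "req_id,req_description,cwe\n"
    "1.2.5,Verify OS command injection protection,78\n"
    "3.1.1,Verify password length,521\n"
)
```

This connects an ASVS requirement to vulnerability databases: a scanner finding tagged CWE-78 resolves to `v5.0.0-1.2.5` in the sample above.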
+ +## Related OWASP resources + +- OWASP Top Ten: https://owasp.org/Top10/ +- OWASP Testing Guide: https://owasp.org/www-project-web-security-testing-guide/ +- OWASP Cheat Sheet Series: https://cheatsheetseries.owasp.org/ +- OWASP SAMM: https://owaspsamm.org/ + +## Official sources + +- ASVS project page: https://owasp.org/www-project-application-security-verification-standard/ +- ASVS v5.0.0 on GitHub: https://github.com/OWASP/ASVS/tree/v5.0.0 +- Machine-readable downloads: https://github.com/OWASP/ASVS/tree/v5.0.0/5.0 + +## Reference files + +- `references/requirement-referencing.md` — Full referencing format, cross-walk between v4.x and v5.0.0, integration with tools +- `references/machine-readable-assets.md` — CSV/JSON sources, parsing examples, and building ASVS-based tooling diff --git a/content/owasp/docs/asvs/v5/references/machine-readable-assets.md b/content/owasp/docs/asvs/v5/references/machine-readable-assets.md new file mode 100644 index 00000000..fd5cbb35 --- /dev/null +++ b/content/owasp/docs/asvs/v5/references/machine-readable-assets.md @@ -0,0 +1,202 @@ +# ASVS machine-readable assets + +Guide to downloading, parsing, and building tooling around the ASVS CSV, JSON, and other machine-readable formats. 
+ +## Available formats + +### ASVS v5.0.0 + +| Format | URL | +|--------|-----| +| CSV (English) | https://github.com/OWASP/ASVS/raw/v5.0.0/5.0/docs_en/OWASP_Application_Security_Verification_Standard_5.0.0_en.csv | +| PDF (English) | https://github.com/OWASP/ASVS/raw/v5.0.0/5.0/OWASP_Application_Security_Verification_Standard_5.0.0_en.pdf | +| Word (English) | https://github.com/OWASP/ASVS/raw/v5.0.0/5.0/docs_en/OWASP_Application_Security_Verification_Standard_5.0.0_en.docx | +| GitHub source | https://github.com/OWASP/ASVS/tree/v5.0.0/5.0 | + +Translations available: Turkish, Russian, French, Korean, and more at https://github.com/OWASP/ASVS/tree/v5.0.0/5.0 + +### Previous version (v4.0.3) + +| Format | URL | +|--------|-----| +| GitHub source | https://github.com/OWASP/ASVS/tree/v4.0.3/4.0 | + +## CSV structure + +The CSV file contains one row per requirement with these key columns: + +| Column | Description | +|--------|-------------| +| `chapter_id` | Chapter number | +| `chapter_name` | Chapter title | +| `section_id` | Section number within chapter | +| `section_name` | Section title | +| `req_id` | Full requirement ID (e.g., `1.2.5`) | +| `req_description` | Requirement text | +| `level1` | Applicable to L1 (checkbox) | +| `level2` | Applicable to L2 | +| `level3` | Applicable to L3 | +| `cwe` | CWE ID(s) mapped to this requirement | +| `nist` | NIST mapping (where applicable) | + +## Parsing the CSV + +### Python: Load all requirements + +```python +import csv +from dataclasses import dataclass + +@dataclass +class ASVSRequirement: + chapter: str + section: str + req_id: str + description: str + level1: bool + level2: bool + level3: bool + cwe: list + +def load_asvs(csv_path): + requirements = [] + with open(csv_path, encoding="utf-8") as f: + reader = csv.DictReader(f) + for row in reader: + req = ASVSRequirement( + chapter=row.get("chapter_name", ""), + section=row.get("section_name", ""), + req_id=row.get("req_id", ""), + 
description=row.get("req_description", ""), + level1=bool(row.get("level1", "").strip()), + level2=bool(row.get("level2", "").strip()), + level3=bool(row.get("level3", "").strip()), + cwe=[c.strip() for c in row.get("cwe", "").split(",") if c.strip()], + ) + requirements.append(req) + return requirements + +reqs = load_asvs("OWASP_ASVS_5.0.0_en.csv") +print(f"Loaded {len(reqs)} requirements") +``` + +### Python: Filter by verification level + +```python +# Get all L2 requirements +l2_reqs = [r for r in reqs if r.level2] +print(f"L2 requirements: {len(l2_reqs)}") + +# Get L1 requirements for specific chapter +auth_l1 = [r for r in reqs if r.level1 and r.chapter == "Authentication"] +for r in auth_l1: + print(f" v5.0.0-{r.req_id}: {r.description[:80]}") +``` + +### Python: Build CWE → ASVS index + +```python +from collections import defaultdict + +cwe_index = defaultdict(list) +for req in reqs: + for cwe in req.cwe: + cwe_index[cwe].append(f"v5.0.0-{req.req_id}") + +# Look up which ASVS requirements cover CWE-89 (SQL Injection) +print(f"CWE-89: {cwe_index.get('CWE-89', [])}") +``` + +### JavaScript/Node.js: Parse CSV + +```javascript +const fs = require('fs'); +const { parse } = require('csv-parse/sync'); + +const csv = fs.readFileSync('OWASP_ASVS_5.0.0_en.csv', 'utf8'); +const records = parse(csv, { columns: true, skip_empty_lines: true }); + +// Filter L2 requirements +const l2 = records.filter(r => r.level2?.trim()); +console.log(`L2 requirements: ${l2.length}`); + +// Build CWE index +const cweIndex = {}; +for (const r of records) { + for (const cwe of (r.cwe || '').split(',').map(c => c.trim()).filter(Boolean)) { + (cweIndex[cwe] ||= []).push(`v5.0.0-${r.req_id}`); + } +} +``` + +## Building ASVS-based tools + +### Compliance checker template + +```python +def check_compliance(findings, asvs_reqs): + """ + Match security findings (with CWE IDs) against ASVS requirements. + Returns a compliance report. 
+ """ + # Build CWE → requirements mapping + cwe_map = defaultdict(list) + for req in asvs_reqs: + for cwe in req.cwe: + cwe_map[cwe].append(req) + + report = [] + for finding in findings: + cwe = finding.get("cwe") + matched_reqs = cwe_map.get(cwe, []) + for req in matched_reqs: + report.append({ + "finding": finding["title"], + "asvs_ref": f"v5.0.0-{req.req_id}", + "asvs_text": req.description, + "cwe": cwe, + "status": "FAIL", + }) + return report +``` + +### Generating a gap analysis + +```python +def gap_analysis(verified_reqs, target_level="L2"): + """ + Compare verified requirements against target level. + Returns missing (unverified) requirements. + """ + all_reqs = load_asvs("OWASP_ASVS_5.0.0_en.csv") + + target_reqs = set() + for req in all_reqs: + if target_level == "L1" and req.level1: + target_reqs.add(req.req_id) + elif target_level == "L2" and req.level2: + target_reqs.add(req.req_id) + elif target_level == "L3" and req.level3: + target_reqs.add(req.req_id) + + verified = set(verified_reqs) + gaps = target_reqs - verified + return sorted(gaps) +``` + +## Downloading programmatically + +```bash +# Download CSV +curl -sL "https://github.com/OWASP/ASVS/raw/v5.0.0/5.0/docs_en/OWASP_Application_Security_Verification_Standard_5.0.0_en.csv" \ + -o OWASP_ASVS_5.0.0_en.csv + +# Download PDF +curl -sL "https://github.com/OWASP/ASVS/raw/v5.0.0/5.0/OWASP_Application_Security_Verification_Standard_5.0.0_en.pdf" \ + -o OWASP_ASVS_5.0.0_en.pdf +``` + +## Official sources + +- ASVS v5.0.0 downloads: https://github.com/OWASP/ASVS/tree/v5.0.0/5.0 +- ASVS project page: https://owasp.org/www-project-application-security-verification-standard/ diff --git a/content/owasp/docs/asvs/v5/references/requirement-referencing.md b/content/owasp/docs/asvs/v5/references/requirement-referencing.md new file mode 100644 index 00000000..4e18f75a --- /dev/null +++ b/content/owasp/docs/asvs/v5/references/requirement-referencing.md @@ -0,0 +1,159 @@ +# ASVS requirement referencing + 
Detailed guide to referencing ASVS requirements correctly, cross-walking between versions, and integrating with security tools.

## Referencing format

### Full version-pinned reference (recommended)

```
v<version>-<chapter>.<section>.<requirement>
```

- The `v` is always **lowercase**
- `version` is the ASVS release tag (e.g., `5.0.0`, `4.0.3`)
- `chapter`, `section`, and `requirement` are numeric

**Examples:**
- `v5.0.0-1.2.5` — OS command injection protection
- `v5.0.0-3.1.1` — Authentication requirement
- `v4.0.3-2.1.1` — Password length requirement from v4

### Short reference (when version is contextually obvious)

```
<chapter>.<section>.<requirement>
```

Example: `1.2.5`

**Warning:** Without version context, this is assumed to be the latest version. Since requirement IDs change between versions, this can be ambiguous. Always prefer the full format.

## Version changes: v4.x → v5.0.0

ASVS v5.0.0 **restructured chapters significantly**. Chapter numbers and requirement IDs from v4.x do NOT map 1:1 to v5.0.0.

### Chapter mapping (v4 → v5)

| v4 Chapter | v4 Topic | v5 Equivalent |
|------------|----------|---------------|
| V1 | Architecture, Design and Threat Modeling | Restructured across multiple chapters |
| V2 | Authentication | Chapter 3 |
| V3 | Session Management | Chapter 4 |
| V4 | Access Control | Chapter 5 |
| V5 | Validation, Sanitization and Encoding | Chapter 1 (Encoding & Sanitization) + Chapter 2 (Validation & Business Logic) |
| V6 | Stored Cryptography | Chapter 6 |
| V7 | Error Handling and Logging | Chapter 7 |
| V8 | Data Protection | Chapter 8 |
| V9 | Communication | Chapter 9 |
| V10 | Malicious Code | Removed / redistributed |
| V11 | Business Logic | Chapter 2 (Validation and Business Logic) |
| V12 | Files and Resources | Chapter 13 |
| V13 | API and Web Services | Chapter 12 |
| V14 | Configuration | Chapter 10 |

### Migrating references

If you have existing v4.x references:

1. Download both the v4.0.3 and v5.0.0 CSV files
2. Match requirements by their text content, not by ID
3. Update all references to the new `v5.0.0-X.Y.Z` format
4. Audit any requirements that were removed or merged

## Using ASVS in reports

### Security assessment report format

```markdown
## Finding: SQL Injection in User Search

**ASVS Reference:** v5.0.0-1.2.1
**Severity:** High
**CWE:** CWE-89

**Requirement:** Verify that all SQL queries, HQL, OSQL, NOSQL and stored
procedures use parameterized queries or are otherwise protected from injection.

**Status:** FAIL — The /api/users/search endpoint concatenates user input
directly into SQL queries.
```

### Compliance matrix format

```markdown
| ASVS Ref | Requirement Summary | Status | Evidence |
|----------|-------------------|--------|----------|
| v5.0.0-1.2.1 | Parameterized SQL queries | PASS | Code review confirmed |
| v5.0.0-1.2.5 | OS command injection protection | FAIL | See finding #3 |
| v5.0.0-3.1.1 | Authentication mechanism | PASS | OIDC integration verified |
```

## Integration with security tools

### Mapping scanner findings to ASVS

Most security scanners report CWE IDs. ASVS requirements include CWE mappings in the CSV:

```python
import csv

# Build a CWE → ASVS mapping
cwe_to_asvs = {}
with open("OWASP_ASVS_5.0.0_en.csv") as f:
    for row in csv.DictReader(f):
        for cwe in row.get("cwe", "").split(","):
            cwe = cwe.strip()
            if cwe:
                cwe_to_asvs.setdefault(cwe, []).append(row["req_id"])

# Map a scanner finding
scanner_cwe = "CWE-79"  # XSS
asvs_reqs = cwe_to_asvs.get(scanner_cwe, [])
print(f"CWE-79 maps to ASVS: {asvs_reqs}")
```

### Building automated checklists

```python
import csv

def get_requirements_for_level(csv_path, level="L2"):
    """Get all ASVS requirements for a given verification level."""
    # The CSV flags levels in the level1/level2/level3 columns,
    # so translate the friendly "L1"/"L2"/"L3" name first.
    level_column = {"L1": "level1", "L2": "level2", "L3": "level3"}[level]
    reqs = []
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            if row.get(level_column, "").strip():
                reqs.append({
                    "id": f"v5.0.0-{row['req_id']}",
                    "description": row["req_description"],
                    "chapter": row.get("chapter_name", ""),
                    "cwe": row.get("cwe", ""),
                })
    return reqs
```

## CWE crosswalk

ASVS requirements map to CWE (Common Weakness Enumeration) IDs. This creates a chain:

```
ASVS Requirement → CWE ID → CVE records (via NVD)
```

This allows you to:
1. Start with an ASVS requirement (what to verify)
2. Find the corresponding CWE (the weakness class)
3. 
Search NVD for CVEs with that CWE (real-world examples) + +```bash +# Find CVEs for CWE-79 (XSS) — which maps to ASVS Chapter 1 +curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?cweId=CWE-79&resultsPerPage=5" \ + -H "apiKey:YOUR_KEY" | jq '.vulnerabilities[].cve.id' +``` + +## Official sources + +- ASVS v5.0.0: https://github.com/OWASP/ASVS/tree/v5.0.0 +- ASVS v4.0.3 (for migration reference): https://github.com/OWASP/ASVS/tree/v4.0.3 +- ASVS project page: https://owasp.org/www-project-application-security-verification-standard/ diff --git a/content/owasp/docs/samm/v2/DOC.md b/content/owasp/docs/samm/v2/DOC.md new file mode 100644 index 00000000..61b57b08 --- /dev/null +++ b/content/owasp/docs/samm/v2/DOC.md @@ -0,0 +1,171 @@ +--- +name: samm +description: "OWASP SAMM v2.0 — Software Assurance Maturity Model with 5 business functions, 15 security practices, assessment workflow, and roadmap planning." +metadata: + languages: "http" + versions: "2.0" + revision: 1 + updated-on: "2026-03-22" + source: community + tags: "owasp,samm,security,maturity,sdlc,appsec,governance" +--- + +# OWASP SAMM v2 for agents + +OWASP SAMM (Software Assurance Maturity Model) is a framework for assessing, formulating, and implementing a software security strategy. It works with any SDLC approach (waterfall, agile, DevOps) and for organizations that develop, outsource, or acquire software. + +## Model overview + +SAMM v2.0 has **5 business functions**, each containing **3 security practices**, for a total of **15 practices**. Each practice has **2 streams** with **3 maturity levels** (1–3). 
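That 5 × 3 structure is small enough to write down as plain data, which is handy for assessment tooling; a sketch using the function and practice names from this document:

```python
# Sketch: the SAMM v2 hierarchy as data: 5 business functions, 3 practices each.
SAMM_PRACTICES = {
    "Governance": ["Strategy and Metrics", "Policy and Compliance", "Education and Guidance"],
    "Design": ["Threat Assessment", "Security Requirements", "Secure Architecture"],
    "Implementation": ["Secure Build", "Secure Deployment", "Defect Management"],
    "Verification": ["Architecture Assessment", "Requirements-driven Testing", "Security Testing"],
    "Operations": ["Incident Management", "Environment Management", "Operational Management"],
}

# 5 functions x 3 practices = 15 practices in total.
assert len(SAMM_PRACTICES) == 5
assert sum(len(p) for p in SAMM_PRACTICES.values()) == 15
```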
+ +``` +Business Function → Security Practice → Stream → Maturity Level (1-3) +``` + +### The 5 business functions and 15 practices + +| Business Function | Practice 1 | Practice 2 | Practice 3 | +|-------------------|-----------|-----------|-----------| +| **Governance** | Strategy and Metrics | Policy and Compliance | Education and Guidance | +| **Design** | Threat Assessment | Security Requirements | Secure Architecture | +| **Implementation** | Secure Build | Secure Deployment | Defect Management | +| **Verification** | Architecture Assessment | Requirements-driven Testing | Security Testing | +| **Operations** | Incident Management | Environment Management | Operational Management | + +### Business function descriptions + +**Governance** — Cross-functional processes: security strategy, policies, compliance, and security education across the organization. + +**Design** — Activities during project inception: threat modeling, security requirements gathering, and secure architecture decisions. + +**Implementation** — Building and deploying software: secure build practices, secure deployment, and tracking/managing defects. + +**Verification** — Testing and reviewing: architecture assessment, requirements-driven testing, and security testing (SAST/DAST/etc). + +**Operations** — Post-deployment: incident management, environment hardening, and operational management throughout the application lifecycle. + +## Maturity levels + +Each practice has 3 maturity levels: + +| Level | Characteristics | +|-------|----------------| +| **1** | Ad-hoc, initial understanding. Basic activities that are easy to implement. Low formalization. | +| **2** | Defined process, increased efficiency. Activities are repeatable and consistent. Moderate formalization. | +| **3** | Comprehensive mastery, continuous improvement. Advanced activities with full organizational integration. High formalization. | + +**Important:** SAMM does NOT insist all organizations reach level 3 everywhere. 
You determine the target maturity level for each practice based on your risk profile and needs. + +## Assessment workflow + +The typical SAMM cycle: **Prepare → Assess → Set Target → Define Plan → Implement → Roll Out** + +This cycle is executed continuously, typically in 3-12 month periods. + +### 1. Prepare + +- Identify the scope (whole organization or specific business line) +- Get executive sponsorship +- Assemble the assessment team (security lead, architects, developers, operations) +- Gather existing documentation (policies, procedures, tool inventory) + +### 2. Assess + +For each of the 15 practices, evaluate current maturity: + +- Interview stakeholders (architects, developers, QA, ops) +- Review evidence (documentation, tool configurations, training records) +- Score each practice stream on the 0-3 scale +- Calculate the overall maturity score per practice + +**Scoring guide:** + +| Score | Meaning | +|-------|---------| +| 0 | Practice is not performed | +| 0.25 | Some ad-hoc activity exists | +| 0.5 | Activity exists but is inconsistent | +| 0.75 | Activity is mostly consistent | +| 1.0 | Activity is fully established for this level | + +Multiply the raw score (0-1) by the level being assessed to get the maturity score. + +### 3. Set the target + +- Define target maturity levels for each practice (not all need to be 3) +- Base targets on risk tolerance, business criticality, and regulatory requirements +- Consider a 1-2 year horizon for ambitious targets + +### 4. Define the plan + +- Identify the gap between current and target maturity for each practice +- Prioritize practices with the largest gaps or highest business impact +- Create a phased roadmap (quick wins first, then deeper improvements) +- Assign ownership and resources + +### 5. Implement + +- Execute the roadmap activities +- Start with practices that have the broadest impact +- Document processes and controls as you implement them +- Train staff on new practices + +### 6. 
Roll out + +- Deploy improved practices across the organization +- Monitor adoption and compliance +- Collect metrics to validate improvements +- Feed lessons learned into the next assessment cycle + +## Setting targets and roadmap + +### Risk-based target setting + +| Application criticality | Governance | Design | Implementation | Verification | Operations | +|------------------------|:----------:|:------:|:--------------:|:------------:|:----------:| +| **Critical** (financial, health) | 3 | 3 | 3 | 3 | 3 | +| **High** (customer-facing, PII) | 2 | 2-3 | 2-3 | 2-3 | 2 | +| **Medium** (internal tools) | 1-2 | 2 | 2 | 2 | 1-2 | +| **Low** (static sites, demos) | 1 | 1 | 1-2 | 1 | 1 | + +### Roadmap template + +``` +Phase 1 (Months 1-3): Foundation + - Governance: Establish security policy (Policy & Compliance L1) + - Design: Begin threat modeling (Threat Assessment L1) + - Implementation: Integrate SAST into CI (Secure Build L1) + +Phase 2 (Months 4-6): Process + - Governance: Security training program (Education & Guidance L2) + - Verification: Automated security testing (Security Testing L2) + - Operations: Incident response plan (Incident Management L1) + +Phase 3 (Months 7-12): Maturity + - Design: Security requirements in all projects (Security Requirements L2) + - Implementation: Secure deployment automation (Secure Deployment L2) + - Verification: Architecture review process (Architecture Assessment L2) +``` + +## Common pitfalls + +1. **Trying to reach L3 in everything.** Maturity L3 requires significant investment. Target L3 only for practices critical to your risk profile. +2. **Assessment without buy-in.** Executive sponsorship is essential. Without it, recommendations won't be funded or enforced. +3. **Skipping the assessment.** Don't jump to implementation without understanding your current posture. The assessment reveals where effort will have the most impact. +4. **One-time use.** SAMM is designed for continuous improvement. 
Run the cycle every 3-12 months to track progress and adjust targets. +5. **Ignoring organizational context.** A startup and a bank need different maturity targets. Tailor to your organization's size, industry, and risk tolerance. +6. **Confusing SAMM with ASVS.** SAMM measures organizational maturity (process-oriented). ASVS measures application security (requirement-oriented). They complement each other. + +## Official sources + +- SAMM model overview: https://owaspsamm.org/model/ +- Quick start guide: https://owaspsamm.org/guidance/quick-start-guide/ +- Assessment guide: https://owaspsamm.org/assessment-guide/ +- SAMM project page: https://owasp.org/www-project-samm/ +- PDF version: https://drive.google.com/file/d/1cI3Qzfrly_X89z7StLWI5p_Jfqs0-OZv/view +- GitHub: https://github.com/owaspsamm + +## Reference files + +- `references/model-overview.md` — Detailed breakdown of all 5 business functions, 15 practices, and their streams +- `references/quick-start-and-assessment.md` — Step-by-step assessment guide with interview questions, evidence collection, and scoring diff --git a/content/owasp/docs/samm/v2/references/model-overview.md b/content/owasp/docs/samm/v2/references/model-overview.md new file mode 100644 index 00000000..8eecaa92 --- /dev/null +++ b/content/owasp/docs/samm/v2/references/model-overview.md @@ -0,0 +1,224 @@ +# SAMM model overview — all practices and streams + +Detailed breakdown of all 5 business functions, 15 security practices, their 2 streams each, and what each maturity level looks like. + +## Governance + +Governance focuses on cross-functional processes and activities that manage overall software development at the organizational level. 
+ +### Strategy and Metrics + +| Stream | Focus | +|--------|-------| +| **A: Create and Promote** | Define and communicate security strategy | +| **B: Measure and Improve** | Collect and use metrics to drive improvement | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Identify organization's risk tolerance; create basic security roadmap | Define basic security metrics; ad-hoc KPI tracking | +| 2 | Align security strategy with business objectives; publish strategy | Establish regular metric collection; dashboard visibility | +| 3 | Continuously adjust strategy based on metrics and threat landscape | Metrics drive resource allocation; automated collection | + +### Policy and Compliance + +| Stream | Focus | +|--------|-------| +| **A: Policy Management** | Create, maintain, and enforce security policies | +| **B: Compliance Management** | Map to regulatory and standards compliance | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Basic security policies exist | Identify applicable regulations/standards | +| 2 | Policies are maintained, communicated, and have exceptions process | Compliance requirements mapped to controls | +| 3 | Automatic policy enforcement and regular review | Continuous compliance monitoring and audit readiness | + +### Education and Guidance + +| Stream | Focus | +|--------|-------| +| **A: Training and Awareness** | Security training for different roles | +| **B: Organization and Culture** | Champions network and security culture | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Basic security awareness training | Security is part of general awareness | +| 2 | Role-based training (developers, architects, testers) | Security champions in development teams | +| 3 | Continuous learning, advanced training, certifications | Security culture embedded, champions program mature | + +## Design + +Design concerns requirements gathering, high-level architecture, and detailed 
design decisions. + +### Threat Assessment + +| Stream | Focus | +|--------|-------| +| **A: Application Risk Profile** | Classify applications by risk | +| **B: Threat Modeling** | Identify and prioritize threats | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Basic application inventory with risk classification | Ad-hoc threat identification for high-risk apps | +| 2 | Standardized risk classification methodology | Systematic threat modeling for all significant apps | +| 3 | Continuous risk assessment integrated into development | Threat modeling automated and maintained through lifecycle | + +### Security Requirements + +| Stream | Focus | +|--------|-------| +| **A: Software Requirements** | Define security requirements for development | +| **B: Supplier Security** | Security requirements for third-party components | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Basic security requirements for high-risk features | Ad-hoc review of third-party components | +| 2 | Standardized security requirements framework | Formal vendor/component security assessment | +| 3 | Automated requirements verification; requirements reuse library | Continuous third-party monitoring; SCA integrated | + +### Secure Architecture + +| Stream | Focus | +|--------|-------| +| **A: Architecture Design** | Secure architecture patterns and reference architectures | +| **B: Technology Management** | Technology stack governance and security | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Basic secure design principles applied | Ad-hoc technology selection with some security input | +| 2 | Reference architectures and design patterns documented | Technology standards with security requirements | +| 3 | Architecture review integrated into development; auto-validation | Technology management integrated with vulnerability intelligence | + +## Implementation + +Implementation covers building, deploying, and managing 
defects in software. + +### Secure Build + +| Stream | Focus | +|--------|-------| +| **A: Build Process** | Secure CI/CD pipeline and build integrity | +| **B: Software Dependencies** | Third-party component management | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Basic build automation; SAST in some projects | Known vulnerability scanning for dependencies | +| 2 | Standardized secure build pipeline; SAST across all projects | SCA integrated into build; automated vulnerability alerts | +| 3 | Build integrity verification; reproducible builds | Continuous dependency monitoring; automated remediation | + +### Secure Deployment + +| Stream | Focus | +|--------|-------| +| **A: Deployment Process** | Secure deployment pipeline and verification | +| **B: Secret Management** | Secrets handling and rotation | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Deployment automation exists; basic deployment verification | Centralized secret storage | +| 2 | Deployment pipeline includes security checks | Automated secret rotation; access controls | +| 3 | Immutable deployments; deployment integrity verification | Advanced secret management; runtime secret injection | + +### Defect Management + +| Stream | Focus | +|--------|-------| +| **A: Defect Tracking** | Track and manage security defects | +| **B: Metrics and Feedback** | Defect metrics feed back into development | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Security defects tracked in issue tracker | Basic defect counts and aging | +| 2 | SLA-driven defect management; severity-based prioritization | Defect trend analysis; root cause categories | +| 3 | Defect management integrated across all tools; auto-triage | Predictive analysis; metrics drive process improvement | + +## Verification + +Verification covers testing, assessment, and quality assurance for security. 
+ +### Architecture Assessment + +| Stream | Focus | +|--------|-------| +| **A: Architecture Validation** | Verify architecture meets security requirements | +| **B: Architecture Mitigation** | Identify and address architectural weaknesses | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Ad-hoc security review of architecture for high-risk apps | Known architectural weaknesses documented | +| 2 | Standardized architecture review process | Systematic mitigation of architectural risks | +| 3 | Automated architecture compliance checking | Architecture risk treatment integrated into design | + +### Requirements-driven Testing + +| Stream | Focus | +|--------|-------| +| **A: Control Verification** | Verify security controls work as designed | +| **B: Misuse/Abuse Testing** | Test for abuse cases and negative scenarios | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Basic functional security testing | Ad-hoc abuse case testing | +| 2 | Systematic verification of security requirements | Standardized abuse case testing from threat models | +| 3 | Automated requirements verification in CI/CD | Comprehensive abuse testing including business logic | + +### Security Testing + +| Stream | Focus | +|--------|-------| +| **A: Scalable Baseline** | Automated security testing (SAST, DAST) | +| **B: Deep Understanding** | Manual security testing and pen testing | + +| Level | Stream A | Stream B | +|-------|----------|----------| +| 1 | Basic automated scanning (SAST/DAST) | Ad-hoc penetration testing | +| 2 | Customized scanning rules; integrated into CI/CD | Regular pen testing with defined methodology | +| 3 | Advanced automated testing; custom rules; correlation | Expert-driven testing; red teaming; continuous assessment | + +## Operations + +Operations covers activities ensuring CIA throughout the operational lifetime. 

### Incident Management

| Stream | Focus |
|--------|-------|
| **A: Incident Detection** | Detect security incidents |
| **B: Incident Response** | Respond to and recover from incidents |

| Level | Stream A | Stream B |
|-------|----------|----------|
| 1 | Basic logging and monitoring; ad-hoc incident detection | Basic incident response process exists |
| 2 | Centralized log management; correlation and alerting | Defined incident response playbooks; regular drills |
| 3 | Advanced detection (behavioral, anomaly); automated response | Mature IR capability; lessons learned feed back into development |

### Environment Management

| Stream | Focus |
|--------|-------|
| **A: Configuration Hardening** | Harden runtime environments |
| **B: Patching and Updating** | Keep systems patched and current |

| Level | Stream A | Stream B |
|-------|----------|----------|
| 1 | Basic hardening standards exist | Ad-hoc patching process |
| 2 | Automated configuration compliance checking | Risk-based patch management with SLAs |
| 3 | Continuous configuration monitoring and auto-remediation | Automated patching; zero-day response process |

### Operational Management

| Stream | Focus |
|--------|-------|
| **A: Data Protection** | Protect data in operational environments |
| **B: System Decommissioning** | Secure end-of-life processes |

| Level | Stream A | Stream B |
|-------|----------|----------|
| 1 | Data classification exists; basic data handling procedures | Ad-hoc system retirement |
| 2 | Data protection controls aligned with classification | Defined decommissioning process with security steps |
| 3 | Automated data lifecycle management | Comprehensive decommissioning; data sanitization verified |

## Official sources

- Business functions: https://owaspsamm.org/model/governance/, /design/, /implementation/, /verification/, /operations/
- SAMM model overview: https://owaspsamm.org/model/
- PDF version: 
https://drive.google.com/file/d/1cI3Qzfrly_X89z7StLWI5p_Jfqs0-OZv/view diff --git a/content/owasp/docs/samm/v2/references/quick-start-and-assessment.md b/content/owasp/docs/samm/v2/references/quick-start-and-assessment.md new file mode 100644 index 00000000..e61a5ffe --- /dev/null +++ b/content/owasp/docs/samm/v2/references/quick-start-and-assessment.md @@ -0,0 +1,246 @@ +# SAMM quick start and assessment guide + +Step-by-step guidance for running a SAMM assessment, including preparation, interview questions, evidence collection, and scoring. + +## Quick start overview + +A single person can execute the first four phases (Prepare → Assess → Set Target → Define Plan) in 1-2 days. Implementation and rollout require more time and organizational support. + +## Phase 1: Prepare + +### Checklist + +- [ ] Define scope: whole organization, business unit, or product line +- [ ] Secure executive sponsorship +- [ ] Assemble the assessment team (2-4 people recommended): + - Security lead + - Development lead / architect + - Operations / DevOps representative + - Business stakeholder (optional but valuable) +- [ ] Gather existing documentation: + - Security policies and standards + - SDLC documentation + - Tool inventory (SAST, DAST, SCA, WAF, SIEM) + - Training records + - Incident response plans + - Previous audit reports +- [ ] Schedule interview sessions (30-45 min per practice area) + +### Scoping decisions + +| Scope level | When to use | +|-------------|-------------| +| **Organization-wide** | Small-medium org with unified SDLC | +| **Business unit** | Large org with different dev practices per unit | +| **Application portfolio** | Assess a specific set of critical applications | +| **Single application** | Deep assessment for a specific high-risk app | + +## Phase 2: Assess + +### Assessment approach + +For each of the 15 practices, evaluate both streams at each maturity level. + +**Method:** Structured interviews + evidence review + +1. 
For each practice, ask questions about activities, processes, and tools +2. Review documentary evidence where available +3. Score each stream on a 0-1 scale per level +4. Calculate the practice maturity score + +### Scoring rubric + +| Score | Interpretation | +|-------|----------------| +| 0 | No coverage — activity is not performed | +| 0.25 | Some ad-hoc activity, not consistent | +| 0.5 | Activity exists in some projects or teams | +| 0.75 | Activity is performed consistently with few exceptions | +| 1.0 | Activity is fully established and verified | + +### Calculating maturity + +For a given practice/stream: +- Score Level 1 activities (0-1) +- If Level 1 score >= 0.75, also score Level 2 activities +- If Level 2 score >= 0.75, also score Level 3 activities +- The maturity level is the highest level where score >= 0.75 + +### Interview questions by practice + +#### Governance: Strategy and Metrics +- Do you have a documented software security strategy? +- How is the security strategy communicated to development teams? +- What security metrics do you collect? How often? +- Who reviews security metrics and what decisions do they drive? + +#### Governance: Policy and Compliance +- Do you have documented security policies for development? +- How are security policies communicated and enforced? +- What compliance requirements apply (SOC2, PCI, HIPAA, GDPR)? +- How do you verify compliance with security policies? + +#### Governance: Education and Guidance +- What security training do developers receive? How often? +- Is training tailored by role (developer, architect, tester)? +- Do you have a security champions program? +- How do you share security knowledge (wikis, guidelines, office hours)? + +#### Design: Threat Assessment +- Do you maintain an application inventory with risk classifications? +- Do you perform threat modeling? For which applications? +- What methodology (STRIDE, PASTA, attack trees)? +- How are threat model results tracked and used? 
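Interview answers feed the scoring rubric, and the maturity rule described under "Calculating maturity" is mechanical enough to automate. A minimal sketch, assuming per-level stream scores on the 0-1 rubric (the function name and threshold constant are illustrative, not part of the SAMM toolbox):

```python
def maturity_level(level_scores):
    """Return the maturity level (0-3) for one practice stream.

    level_scores: rubric scores (0-1) for Level 1, 2, 3 activities, in order.
    A level counts once its score reaches 0.75, and a higher level is only
    scored after the level below it has been reached.
    """
    THRESHOLD = 0.75
    maturity = 0
    for level, score in enumerate(level_scores, start=1):
        if score >= THRESHOLD:
            maturity = level
        else:
            break  # do not evaluate higher levels until this one is reached
    return maturity

# Level 1 fully established, Level 2 mostly consistent, Level 3 still ad-hoc
print(maturity_level([1.0, 0.75, 0.25]))  # -> 2
```

Note that a stream with strong Level 2 activity but weak Level 1 coverage still scores 0: the rubric intentionally rewards foundations before refinements.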
+ +#### Design: Security Requirements +- How are security requirements defined for new features? +- Do you use a standard set of security requirements (e.g., ASVS)? +- How do you assess third-party component security? +- Do you track security requirements through implementation? + +#### Design: Secure Architecture +- Do you have secure architecture guidelines or reference architectures? +- Is there an architecture review process for security? +- How are technology choices vetted for security? +- Do you maintain a list of approved/banned technologies? + +#### Implementation: Secure Build +- Describe your CI/CD pipeline. What security tools are integrated? +- Do you run SAST? On all projects? Who triages results? +- How do you manage third-party dependencies? +- Do you use SCA tools? Which ones? + +#### Implementation: Secure Deployment +- How are deployments performed? Manual? Automated? +- What security checks happen during deployment? +- How are secrets (API keys, passwords) managed? +- How quickly can you roll back a deployment? + +#### Implementation: Defect Management +- How are security defects tracked? +- Do you have SLAs for security defect remediation? +- How do you categorize and prioritize security defects? +- What metrics do you track on security defects? + +#### Verification: Architecture Assessment +- Do you perform security architecture reviews? +- How often? Triggered by what? +- Who performs them (internal, external)? +- How are findings tracked? + +#### Verification: Requirements-driven Testing +- Do you verify that security requirements are met in testing? +- Do you test abuse cases / negative scenarios? +- Are security test cases derived from threat models? +- How are test results reported? + +#### Verification: Security Testing +- What automated security testing tools do you use (SAST, DAST, IAST)? +- How are automated scan results triaged and handled? +- Do you perform manual penetration testing? How often? +- Do you do red team exercises? 
+ +#### Operations: Incident Management +- How do you detect security incidents? +- Do you have an incident response plan? +- How recently was it tested (tabletop, simulation)? +- How are incident lessons learned fed back into development? + +#### Operations: Environment Management +- Do you have hardening standards for your environments? +- How do you verify configuration compliance? +- What is your patching strategy and cadence? +- How quickly can you patch a critical vulnerability? + +#### Operations: Operational Management +- Do you have a data classification scheme? +- How is data protected based on classification? +- What is your process for decommissioning systems? +- How is data sanitized when systems are retired? + +## Phase 3: Set targets + +### Input +- Current maturity scores from assessment +- Business risk profile +- Regulatory requirements +- Available resources and budget + +### Process +1. For each practice, determine the target maturity level (1, 2, or 3) +2. Consider dependencies — some Level 2 activities require Level 1 in other practices +3. Be realistic — a jump of 2+ levels in one cycle is rare +4. 
Prioritize practices with the highest gap × highest business impact + +### Template + +| Business Function | Practice | Current | Target | Gap | Priority | +|-------------------|----------|:-------:|:------:|:---:|:--------:| +| Governance | Strategy and Metrics | 0.5 | 2 | 1.5 | Medium | +| Governance | Policy and Compliance | 1.0 | 2 | 1.0 | High | +| Governance | Education and Guidance | 0.25 | 1 | 0.75 | High | +| Design | Threat Assessment | 0 | 1 | 1.0 | High | +| Design | Security Requirements | 0.5 | 2 | 1.5 | High | +| Design | Secure Architecture | 0.5 | 1 | 0.5 | Medium | +| Implementation | Secure Build | 1.0 | 2 | 1.0 | High | +| Implementation | Secure Deployment | 0.5 | 1 | 0.5 | Medium | +| Implementation | Defect Management | 0.25 | 1 | 0.75 | Medium | +| Verification | Architecture Assessment | 0 | 1 | 1.0 | Medium | +| Verification | Requirements Testing | 0.25 | 1 | 0.75 | Medium | +| Verification | Security Testing | 0.5 | 2 | 1.5 | High | +| Operations | Incident Management | 0.25 | 1 | 0.75 | High | +| Operations | Environment Management | 0.5 | 1 | 0.5 | Medium | +| Operations | Operational Management | 0.25 | 1 | 0.75 | Low | + +## Phase 4: Define the plan + +### Prioritization framework + +1. **Quick wins** — Activities that are easy to implement and have high impact +2. **Foundation** — Level 1 activities that enable future improvement +3. **Process maturity** — Level 2 activities for consistency and repeatability +4. 
**Optimization** — Level 3 activities for comprehensive coverage + +### Example plan + +``` +Quarter 1: Quick wins and foundations + ✓ Integrate SAST into CI pipeline (Secure Build L1) + ✓ Deploy SCA for dependency scanning (Secure Build L1) + ✓ Create application risk inventory (Threat Assessment L1) + ✓ Basic security awareness training (Education & Guidance L1) + +Quarter 2: Process establishment + ✓ Security requirements framework from ASVS (Security Requirements L1→L2) + ✓ Incident response plan and first tabletop (Incident Management L1) + ✓ Security champions nomination (Education & Guidance L1→L2) + +Quarter 3: Verification and operations + ✓ Automated DAST in staging (Security Testing L2) + ✓ Hardening standards for environments (Environment Management L1) + ✓ Defect SLAs and tracking (Defect Management L1) + +Quarter 4: Review and iterate + ✓ Re-assess all 15 practices + ✓ Update targets for next cycle + ✓ Report progress to leadership +``` + +## Evidence collection checklist + +For each practice, collect: + +| Evidence type | Examples | +|--------------|---------| +| **Documents** | Policies, standards, guidelines, runbooks | +| **Tool configs** | SAST rules, DAST configs, SCA policies | +| **Metrics** | Defect counts, training completion rates, scan coverage | +| **Records** | Meeting minutes, review notes, incident reports | +| **Screenshots** | Dashboard views, pipeline configs, tool outputs | + +## Official sources + +- SAMM quick start guide: https://owaspsamm.org/guidance/quick-start-guide/ +- SAMM assessment guide: https://owaspsamm.org/assessment-guide/ +- SAMM toolbox (Excel-based assessment): https://owaspsamm.org/assessment/ +- SAMM model: https://owaspsamm.org/model/
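As a closing illustration, the Phase 3 "gap × business impact" ranking behind the target-setting template can also be computed mechanically. A sketch under stated assumptions: the record layout mirrors the template columns, and the numeric impact weights are an assumption of this example, not defined by SAMM:

```python
# Assumed weights for the template's Priority column; SAMM does not fix these.
IMPACT = {"High": 3, "Medium": 2, "Low": 1}

def prioritize(practices):
    """Rank practices by gap (target - current) weighted by business impact.

    practices: iterable of (name, current_score, target_level, impact) tuples,
    matching the Phase 3 template columns. Returns (name, gap, weighted_gap)
    tuples, highest weighted gap first.
    """
    ranked = [
        (name, target - current, (target - current) * IMPACT[impact])
        for name, current, target, impact in practices
    ]
    ranked.sort(key=lambda row: row[2], reverse=True)
    return ranked

# Three rows lifted from the example template above
for name, gap, weighted in prioritize([
    ("Security Testing", 0.5, 2, "High"),
    ("Threat Assessment", 0.0, 1, "High"),
    ("Secure Architecture", 0.5, 1, "Medium"),
]):
    print(f"{name}: gap={gap}, weighted={weighted}")
```

The ranking is only a starting point; dependencies between practices (a Level 2 target that presumes Level 1 elsewhere) still need manual review, as noted in the Phase 3 process.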