Skip to content

Commit

Permalink
Merge pull request #535 from bugcrowd/Layout-Changes
Browse files Browse the repository at this point in the history
Layout Changes plus Find Replace
  • Loading branch information
RRudder authored Oct 9, 2024
2 parents a9f9c9c + e872fbd commit 431f8aa
Show file tree
Hide file tree
Showing 450 changed files with 1,404 additions and 3,176 deletions.
3 changes: 1 addition & 2 deletions .markdownlint.json
Original file line number Diff line number Diff line change
Expand Up @@ -6,6 +6,5 @@
"line_length": false,
"fenced-code-language": false,
"no-emphasis-as-heading": false,
"MD041": false,
"blanks-around-headings": false
"first-line-heading": false
}
10 changes: 5 additions & 5 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -149,20 +149,20 @@ Incorrect:

Incorrect:

> Throughout the course of the engagement, a critical severity SQL injection was discovered in the web application (www.example.com) which could be used by an attacker to exfiltrate personally identifiable information from the backend database.
> Throughout the course of the engagement, a critical severity SQL injection was discovered in the web application (<www.example.com>) which could be used by an attacker to exfiltrate personally identifiable information from the backend database.
Correct:

> An SQL injection was discovered in www.example.com allowing a malicious attacker to exfiltrate personally identifiable information.
> An SQL injection was discovered in <www.example.com> allowing a malicious attacker to exfiltrate personally identifiable information.
### Split Up Long Sentences

Incorrect:

> An SQL injection was discovered in www.example.com allowing a malicious attacker to exfiltrate personally identifiable information including email addresses which would be considered a GDPR violation and poses a considerable business risk.
> An SQL injection was discovered in <www.example.com> allowing a malicious attacker to exfiltrate personally identifiable information including email addresses which would be considered a GDPR violation and poses a considerable business risk.
Correct:
> An SQL injection was discovered in www.example.com allowing a malicious attacker to exfiltrate personally identifiable information. The retrievable data includes passwords, email addresses and full names. This poses a GDPR violation and considerable business risk.
> An SQL injection was discovered in <www.example.com> allowing a malicious attacker to exfiltrate personally identifiable information. The retrievable data includes passwords, email addresses and full names. This poses a GDPR violation and considerable business risk.
## Acronyms

Expand All @@ -184,7 +184,7 @@ Incorrect: pen test, PenTest, Pen Test

## A vs. An

"An" should be used when the next word starts with a consonant _sound_. Otherwise, "A" should be used.
"An" should be used when the next word starts with a consonant *sound*. Otherwise, "A" should be used.

Correct:

Expand Down
3 changes: 3 additions & 0 deletions methodology/notes/website_testing/information.md
Original file line number Diff line number Diff line change
@@ -1,16 +1,19 @@
# Information gathering and Reconnaisance

## Tools used

<!--
Provide a 1-2 sentence overview of the tools you used to do information gathering and recon, and how you used those tools
-->

## Attack Surface Summary

<!--
Provide a 1-2 sentence overview of the attack surface you discovered
-->

## What is done well

<!--
Provide a 1-2 sentence overview of what the application does well, where it seems most robust and well-designed
-->
12 changes: 6 additions & 6 deletions spec/bugcrowd_templates_spec.rb
Original file line number Diff line number Diff line change
Expand Up @@ -70,15 +70,15 @@
let!(:file_name) { 'template' }

it 'returns the bugcrowd template value as string' do
is_expected.to include('# Outdated Software Version')
is_expected.to include('Outdated Software Version')
end

context 'when file_name with multiple options' do
context 'file_name as template' do
let!(:file_name) { 'template' }

it 'returns the bugcrowd template value as string' do
is_expected.to include('# Outdated Software Version')
is_expected.to include('Outdated Software Version')
end
end

Expand Down Expand Up @@ -113,7 +113,7 @@
let!(:file_name) { 'template' }

it 'returns the bugcrowd template value as string' do
is_expected.to include('# Outdated Software Version')
is_expected.to include('Outdated Software Version')
end
end

Expand Down Expand Up @@ -159,7 +159,7 @@
let!(:file_name) { 'template' }

it 'returns the template defined in the subcategory folder' do
is_expected.to include('# Clickjacking')
is_expected.to include('Clickjacking')
end
end

Expand All @@ -170,7 +170,7 @@
let!(:file_name) { 'template' }

it 'returns the template defined in the subcategory folder' do
is_expected.to include('# Clickjacking')
is_expected.to include('Clickjacking')
end
end

Expand All @@ -181,7 +181,7 @@
let!(:file_name) { 'template' }

it 'returns the template defined in the subcategory folder' do
is_expected.to include('# Outdated Software Version')
is_expected.to include('Outdated Software Version')
end
end

Expand Down
Original file line number Diff line number Diff line change
@@ -1,14 +1,10 @@
# Excessive Agency or Permission Manipulation

## Overview of the Vulnerability

Excessive agency or permission manipulation occurs when an attacker is able to manipulate the Large Language Model (LLM) outputs to perform actions that may be damaging or otherwise harmful. An attacker can abuse excessive agency or permission manipulation within the LLM to gain access to, modify, or delete data, without any confirmation from a user.

## Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM decision making or accesses unauthorized data. These cirvumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.

## Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Enter the following prompt into the LLM:
Expand All @@ -19,7 +15,7 @@ This vulnerability can lead to reputational and financial damage if an attacker

1. Observe that the output from the LLM returns sensitive data

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,14 +1,10 @@
# Large Language Model (LLM) Output Handling

## Overview of the Vulnerability

Insecure output handling within Large Language Models (LLMs) occurs when the output generated by the LLM is not sanitized or validated before being passed downstream to other systems. This can allow an attacker to indirectly gain access to systems, elevate their privileges, or gain arbitrary code execution by using crafted prompts.

## Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.

## Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Inject the following prompt into the LLM:
Expand All @@ -19,7 +15,7 @@ This vulnerability can lead to reputational and financial damage of the company

1. Observe that the LLM returns sensitive data

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,14 +1,10 @@
# Prompt Injection

## Overview of the Vulnerability

Prompt injection occurs when an attacker crafts a malicious prompt that manipulates a Large Language Model (LLM) into executing unintended actions. The LLM's inability to distinguish user input from its dataset influences the output it generates. This flaw allows attackers to exploit the system by injecting malicious prompts, thereby bypassing safeguards.

## Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.

## Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{URL}}
1. Inject the following prompt into the LLM:
Expand All @@ -19,7 +15,7 @@ This vulnerability can lead to reputational and financial damage of the company

1. Observe that the LLM returns sensitive data

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,14 +1,10 @@
# Large Language Model (LLM) Security Misconfiguration

## Overview of the Vulnerability

Misconfigurations can occur across Large Language Model (LLM) within the setup, deployment, or usage of the LLM, leading to security weaknesses or vulnerabilities. These misconfigurations can allow an attacker to compromise confidentiality, integrity, or availability of data and services. Misconfigurations may stem from inadequate access controls, insecure default settings, or improper configuration of fine-tuning parameters.

## Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.

## Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Inject the following prompt into the LLM:
Expand All @@ -19,7 +15,7 @@ This vulnerability can lead to reputational and financial damage of the company

1. Observe that the LLM returns sensitive data

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,14 +1,10 @@
# Training Data Poisoning

## Overview of the Vulnerability

Training data poisoning occurs when an attacker manipulates the training data to intentionally compromise the output of the Large Language Model (LLM). This can be achieved by manipulating the pre-training data, fine-tuning data process, or the embedding process. An attacker can undermine the integrity of the LLM by poisoning the training data, resulting in outputs that are unreliable, biased, or unethical. This breach of integrity significantly impacts the model's trustworthiness and accuracy, posing a serious threat to the overall effectiveness and security of the LLM.

## Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM decision making or accesses unauthorized data. These cirvumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.

## Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Enter the following prompt into the LLM:
Expand All @@ -19,7 +15,7 @@ This vulnerability can lead to reputational and financial damage if an attacker

1. Observe that the output from the LLM returns a compromised result

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
10 changes: 3 additions & 7 deletions submissions/description/ai_application_security/template.md
Original file line number Diff line number Diff line change
@@ -1,14 +1,10 @@
# AI Application Security Misconfiguration

## Overview of the Vulnerability

Misconfigurations can occur in Artificial Intelligence (AI) applications, including but not limited to machine learning models, algorithms, and inference systems. These misconfigurations can allow an attacker to compromise confidentiality, integrity, or availability of data and services.

## Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.

## Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Inject the following prompt into the LLM:
Expand All @@ -19,7 +15,7 @@ This vulnerability can lead to reputational and financial damage of the company

1. Observe that the LLM returns sensitive data

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,22 +1,18 @@
# Aggregation Bias

## Overview of the Vulnerability

Aggregation bias occurs in an AI model when systematic favoritism is displayed when processing data from different demographic groups. This bias originates from training data that is skewed, or that has an under representation of certain groups. Outputs from AI models that have an aggregation bias can result in unequal treatment of users based on demographic characteristics, which can lead to unfair and discriminatory outcomes.

## Business Impact
**Business Impact**

Aggregation bias in this AI model can result in reputational damage and indirect financial loss due to the loss of customer trust in the output of the model.

## Steps to Reproduce
**Steps to Reproduce**

1. Obtain a diverse dataset containing demographic information
1. Feed the dataset into the AI model
1. Record the model's predictions and decisions
1. Compare outcomes across different demographic groups
1. Observe the systematic favoritism displayed by the model toward one or more specific groups

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:

Expand Down
Original file line number Diff line number Diff line change
@@ -1,20 +1,16 @@
# Processing Bias

## Overview of the Vulnerability

Processing bias occurs when AI algorithms make biased decisions, or predictions, due to the way that they process data. This can be a result of the algorithm's design or the training data it has been trained on. Outputs from AI models that have a processing bias can result in discrimination, reinforcement of stereotypes, and unintended consequences such as amplification or polarization of viewpoints that disadvantage certain groups.

## Business Impact
**Business Impact**

Processing bias in this AI model can result in reputational damage and indirect monetary loss due to the loss of customer trust in the output of the model.

## Steps to Reproduce
**Steps to Reproduce**

1. Input the following benchmark dataset into the AI model: {{Benchmark data set}}
1. Split the dataset into two sets. One is to act as the training dataset and the other as the testing dataset.
1. Examine the model's predictions and note the following disparity exists: {{Disparity between Group A and Group B}}

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:

Expand Down
10 changes: 3 additions & 7 deletions submissions/description/algorithmic_biases/template.md
Original file line number Diff line number Diff line change
@@ -1,21 +1,17 @@
# Algorithmic bias

## Overview of the Vulnerability

Algorithmic bias occurs in an AI model when the algorithms used to develop the model produce biased outcomes as a result of inherent flaws or limitations in their design. This bias originates from assumptions made during algorithm development, selection of inappropriate models, or the way data is processed and weighted. This results in AI models that make unfair, skewed, or discriminatory decisions.

## Business Impact
**Business Impact**

Aggregation bias in this AI model can result in reputational damage and indirect financial loss due to the loss of customer trust in the output of the model.

## Steps to Reproduce
**Steps to Reproduce**

1. Select an AI algorithm known to have potential biases
1. Train the algorithm on a dataset that may amplify these biases
1. Test the algorithm's decisions or predictions on a diverse dataset
1. Identify and document instances where the algorithm's output is biased

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:

Expand Down
Original file line number Diff line number Diff line change
@@ -1,18 +1,14 @@
# Application-Level Denial of Service Causes Application to Crash via Malformed Android Intents

## Overview of the Vulnerability

Application-level denial of service (DoS) attacks are designed to deny service to users of an application by flooding it with many HTTP requests. This makes it impossible for the server to respond to legitimate requests in any practical time frame.
Application-level Denial of Service (DoS) attacks are designed to deny service to users of an application by flooding it with many HTTP requests. This makes it impossible for the server to respond to legitimate requests in any practical time frame.

There is a local application-level DoS vulnerability within this Android application that causes it to crash. An attacker can use this vulnerability to provide empty, malformed, or irregular data via the Intent binding mechanism, crashing the application and making it unavailable for its designed purpose to legitimate users.

## Business Impact
**Business Impact**

Application-level DoS can result in indirect financial loss for the business through the attacker’s ability to DoS the application. These malicious actions could also result in reputational damage for the business through the impact to customers’ trust.

## Steps to Reproduce
**Steps to Reproduce**

1. Navigate to {{url}}
1. Navigate to the following URL: {{URL}}
1. Use the following payload:

{{payload}}
Expand All @@ -21,10 +17,10 @@ Application-level DoS can result in indirect financial loss for the business thr

{{parameter}}

1. Observe that the payload causes a denial of service
1. Observe that the payload causes a Denial of Service

## Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot below demonstrates the denial of service:
The screenshot below demonstrates the Denial of Service:

{{screenshot}}
Loading

0 comments on commit 431f8aa

Please sign in to comment.