Skip to content

Commit

Permalink
Changed headings from H4 to bold
Browse files Browse the repository at this point in the history
  • Loading branch information
RRudder committed Sep 29, 2024
1 parent 4596c68 commit e872fbd
Show file tree
Hide file tree
Showing 444 changed files with 1,326 additions and 1,326 deletions.
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
Excessive agency or permission manipulation occurs when an attacker is able to manipulate the Large Language Model (LLM) outputs to perform actions that may be damaging or otherwise harmful. An attacker can abuse excessive agency or permission manipulation within the LLM to gain access to, modify, or delete data, without any confirmation from a user.

#### Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM decision making or accesses unauthorized data. These cirvumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Enter the following prompt into the LLM:
Expand All @@ -15,7 +15,7 @@ This vulnerability can lead to reputational and financial damage if an attacker

1. Observe that the output from the LLM returns sensitive data

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
Insecure output handling within Large Language Models (LLMs) occurs when the output generated by the LLM is not sanitized or validated before being passed downstream to other systems. This can allow an attacker to indirectly gain access to systems, elevate their privileges, or gain arbitrary code execution by using crafted prompts.

#### Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Inject the following prompt into the LLM:
Expand All @@ -15,7 +15,7 @@ This vulnerability can lead to reputational and financial damage of the company

1. Observe that the LLM returns sensitive data

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
Prompt injection occurs when an attacker crafts a malicious prompt that manipulates a Large Language Model (LLM) into executing unintended actions. The LLM's inability to distinguish user input from its dataset influences the output it generates. This flaw allows attackers to exploit the system by injecting malicious prompts, thereby bypassing safeguards.

#### Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{URL}}
1. Inject the following prompt into the LLM:
Expand All @@ -15,7 +15,7 @@ This vulnerability can lead to reputational and financial damage of the company

1. Observe that the LLM returns sensitive data

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
Misconfigurations can occur across Large Language Model (LLM) within the setup, deployment, or usage of the LLM, leading to security weaknesses or vulnerabilities. These misconfigurations can allow an attacker to compromise confidentiality, integrity, or availability of data and services. Misconfigurations may stem from inadequate access controls, insecure default settings, or improper configuration of fine-tuning parameters.

#### Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Inject the following prompt into the LLM:
Expand All @@ -15,7 +15,7 @@ This vulnerability can lead to reputational and financial damage of the company

1. Observe that the LLM returns sensitive data

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
Training data poisoning occurs when an attacker manipulates the training data to intentionally compromise the output of the Large Language Model (LLM). This can be achieved by manipulating the pre-training data, fine-tuning data process, or the embedding process. An attacker can undermine the integrity of the LLM by poisoning the training data, resulting in outputs that are unreliable, biased, or unethical. This breach of integrity significantly impacts the model's trustworthiness and accuracy, posing a serious threat to the overall effectiveness and security of the LLM.

#### Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM decision making or accesses unauthorized data. These cirvumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Enter the following prompt into the LLM:
Expand All @@ -15,7 +15,7 @@ This vulnerability can lead to reputational and financial damage if an attacker

1. Observe that the output from the LLM returns a compromised result

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
6 changes: 3 additions & 3 deletions submissions/description/ai_application_security/template.md
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
Misconfigurations can occur in Artificial Intelligence (AI) applications, including but not limited to machine learning models, algorithms, and inference systems. These misconfigurations can allow an attacker to compromise confidentiality, integrity, or availability of data and services.

#### Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL:
1. Inject the following prompt into the LLM:
Expand All @@ -15,7 +15,7 @@ This vulnerability can lead to reputational and financial damage of the company

1. Observe that the LLM returns sensitive data

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
@@ -1,18 +1,18 @@
Aggregation bias occurs in an AI model when systematic favoritism is displayed when processing data from different demographic groups. This bias originates from training data that is skewed, or that has an under representation of certain groups. Outputs from AI models that have an aggregation bias can result in unequal treatment of users based on demographic characteristics, which can lead to unfair and discriminatory outcomes.

#### Business Impact
**Business Impact**

Aggregation bias in this AI model can result in reputational damage and indirect financial loss due to the loss of customer trust in the output of the model.

#### Steps to Reproduce
**Steps to Reproduce**

1. Obtain a diverse dataset containing demographic information
1. Feed the dataset into the AI model
1. Record the model's predictions and decisions
1. Compare outcomes across different demographic groups
1. Observe the systematic favoritism displayed by the model toward one or more specific groups

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:

Expand Down
Original file line number Diff line number Diff line change
@@ -1,16 +1,16 @@
Processing bias occurs when AI algorithms make biased decisions, or predictions, due to the way that they process data. This can be a result of the algorithm's design or the training data it has been trained on. Outputs from AI models that have a processing bias can result in discrimination, reinforcement of stereotypes, and unintended consequences such as amplification or polarization of viewpoints that disadvantage certain groups.

#### Business Impact
**Business Impact**

Processing bias in this AI model can result in reputational damage and indirect monetary loss due to the loss of customer trust in the output of the model.

#### Steps to Reproduce
**Steps to Reproduce**

1. Input the following benchmark dataset into the AI model: {{Benchmark data set}}
1. Split the dataset into two sets. One is to act as the training dataset and the other as the testing dataset.
1. Examine the model's predictions and note the following disparity exists: {{Disparity between Group A and Group B}}

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:

Expand Down
6 changes: 3 additions & 3 deletions submissions/description/algorithmic_biases/template.md
Original file line number Diff line number Diff line change
@@ -1,17 +1,17 @@
Algorithmic bias occurs in an AI model when the algorithms used to develop the model produce biased outcomes as a result of inherent flaws or limitations in their design. This bias originates from assumptions made during algorithm development, selection of inappropriate models, or the way data is processed and weighted. This results in AI models that make unfair, skewed, or discriminatory decisions.

#### Business Impact
**Business Impact**

Aggregation bias in this AI model can result in reputational damage and indirect financial loss due to the loss of customer trust in the output of the model.

#### Steps to Reproduce
**Steps to Reproduce**

1. Select an AI algorithm known to have potential biases
1. Train the algorithm on a dataset that may amplify these biases
1. Test the algorithm's decisions or predictions on a diverse dataset
1. Identify and document instances where the algorithm's output is biased

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@ Application-level Denial of Service (DoS) attacks are designed to deny service t

There is a local application-level DoS vulnerability within this Android application that causes it to crash. An attacker can use this vulnerability to provide empty, malformed, or irregular data via the Intent binding mechanism, crashing the application and making it unavailable for its designed purpose to legitimate users.

#### Business Impact
**Business Impact**

Application-level DoS can result in indirect financial loss for the business through the attacker’s ability to DoS the application. These malicious actions could also result in reputational damage for the business through the impact to customers’ trust.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{URL}}
1. Use the following payload:
Expand All @@ -19,7 +19,7 @@ Application-level DoS can result in indirect financial loss for the business thr

1. Observe that the payload causes a Denial of Service

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot below demonstrates the Denial of Service:

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@ Application-level Denial of Service (DoS) attacks are designed to deny service t

There is a local application-level DoS vulnerability within this iOS application that causes it to crash. An attacker can use this vulnerability to provide empty, malformed, or irregular data via a URL scheme, crashing the application and making it unavailable for its designed purpose to legitimate users.

#### Business Impact
**Business Impact**

Application-level DoS can result in indirect financial loss for the business through the attacker’s ability to DoS the application. These malicious actions could also result in reputational damage for the business through the impact to customers’ trust.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{URL}}
1. Use the following payload:
Expand All @@ -19,7 +19,7 @@ Application-level DoS can result in indirect financial loss for the business thr

1. Observe that the payload causes a Denial of Service

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot below demonstrates the Denial of Service:

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@ Application-level Denial of Service (DoS) attacks are designed to deny service t

There is an application-level DoS vulnerability within this iOS or Android application that causes it to crash. An attacker can use this vulnerability to exhaust resources, making the application unavailable for its designed purpose to legitimate users.

#### Business Impact
**Business Impact**

Application-level DoS can result in indirect financial loss for the business through the attacker’s ability to DoS the application. These malicious actions could also result in reputational damage for the business through the impact to customers’ trust.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{URL}}
1. Use the following payload:
Expand All @@ -19,7 +19,7 @@ Application-level DoS can result in indirect financial loss for the business thr

1. Observe that the payload causes a Denial of Service that has high impact or medium difficulty to be performed

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot below demonstrates the Denial of Service:

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@ Application-level Denial of Service (DoS) attacks are designed to deny service t

There is an application-level DoS vulnerability within this application that has critical impact or is easily performed. An attacker can use this vulnerability to exhaust resources, making the application unavailable for its designed purpose to legitimate users.

#### Business Impact
**Business Impact**

Application-level DoS can result in indirect financial loss for the business through the attacker’s ability to DoS the application. These malicious actions could also result in reputational damage for the business through the impact to customers’ trust.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{url}}
1. Use the following payload:
Expand All @@ -19,7 +19,7 @@ Application-level DoS can result in indirect financial loss for the business thr

1. Observe that the payload causes a Denial of Service that has critical impact or is easy to perform

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) proof of the vulnerability:

Expand Down
Original file line number Diff line number Diff line change
@@ -1,10 +1,10 @@
Injection occurs when an attacker provides inputs to a Large Language Model (LLM) which causes a large amount of resources to be consumed. This can result in a Denial of Service (DoS) to users, incur large amounts of computational resource costs, or slow response times of the LLM.

#### Business Impact
**Business Impact**

This vulnerability can lead to reputational and financial damage of the company due an attacker incurring computational resource costs or denying service to other users, which would also impact customers' trust.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{URL}}
1. Inject the following prompt into the LLM:
Expand All @@ -15,7 +15,7 @@ This vulnerability can lead to reputational and financial damage of the company

1. Observe that the LLM is slow to return a response

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot(s) below demonstrate(s) the vulnerability:
>
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@ Application-level Denial of Service (DoS) attacks are designed to deny service t

There is an application-level DoS vulnerability within this application that an attacker can use to exhaust resources, making the application unavailable for its designed purpose to legitimate users.

#### Business Impact
**Business Impact**

Application-level DoS can result in indirect financial loss for the business through the attacker’s ability to DoS the application. These malicious actions could also result in reputational damage for the business through the impact to customers’ trust.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{URL}}
1. Use the following payload:
Expand All @@ -19,7 +19,7 @@ Application-level DoS can result in indirect financial loss for the business thr

1. Observe that the payload causes a DoS condition

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot below demonstrates the vulnerability:

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@ Application-level Denial of Service (DoS) attacks are designed to deny service t

There is an application-level DoS vulnerability within this application that has high impact or medium difficulty to be performed. An attacker can use this vulnerability to exhaust resources, making the application unavailable for its designed purpose to legitimate users, but not take down the application for all users.

#### Business Impact
**Business Impact**

Application-level DoS can result in indirect financial loss for the business through the attacker’s ability to DoS the application. These malicious actions could also result in reputational damage for the business through the impact to customers’ trust.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{url}}
1. Use the following payload:
Expand All @@ -19,7 +19,7 @@ Application-level DoS can result in indirect financial loss for the business thr

1. Observe that the payload causes a Denial of Service that has high impact or medium difficulty to be performed

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot below demonstrates proof of the vulnerability:

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@ Application-level Denial of Service (DoS) attacks are designed to deny service t

There is an application-level DoS vulnerability within this application that an attacker can use to exhaust resources, making the application unavailable for its designed purpose to legitimate users.

#### Business Impact
**Business Impact**

Application-level DoS can result in indirect financial loss for the business through the attacker’s ability to DoS the application. These malicious actions could also result in reputational damage for the business through the impact to customers’ trust.

#### Steps to Reproduce
**Steps to Reproduce**

1. Navigate to the following URL: {{URL}}
1. Use the following payload:
Expand All @@ -19,7 +19,7 @@ Application-level DoS can result in indirect financial loss for the business thr

1. Observe that the payload causes a Denial of Service

#### Proof of Concept (PoC)
**Proof of Concept (PoC)**

The screenshot below demonstrates the Denial of Service:

Expand Down
Loading

0 comments on commit e872fbd

Please sign in to comment.