---
title: Confidential AI
description: Confidential Containers for AI Use Cases
categories:
- use cases
tags:
weight: 60
---

## Federated Learning

## Attestations
- Federated learning requires multi-SDK attestation
- The FL server needs to verify every client's trustworthiness
- Attestation happens at different points, with self- and cross-verification via the attestation service
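
The self- and cross-verification flow above can be sketched as follows. This is a minimal simulation, not the real CoCo attestation-service API: the evidence fields, policy shape, and function names are all illustrative assumptions.

```python
# Hedged sketch: every FL participant checks every peer's attestation
# evidence against the cross-verification CC policy. Evidence fields,
# policy keys, and TEE names are examples, not a real CoCo interface.

def verify_participant(evidence: dict, policy: dict) -> bool:
    """Return True if a peer's evidence satisfies the cross-verification policy."""
    if evidence["tee_type"] not in policy["allowed_tees"]:
        return False
    if evidence["measurement"] not in policy["trusted_measurements"]:
        return False
    return True

def cross_verify(participants: dict, policy: dict) -> dict:
    """Each party verifies all other parties (self excluded)."""
    results = {}
    for name in participants:
        peers = {p: e for p, e in participants.items() if p != name}
        results[name] = all(verify_participant(e, policy) for e in peers.values())
    return results

# Example run with one participant carrying an untrusted measurement.
policy = {
    "allowed_tees": {"snp", "tdx"},
    "trusted_measurements": {"abc123", "def456"},
}
participants = {
    "fl-server":  {"tee_type": "snp", "measurement": "abc123"},
    "fl-client1": {"tee_type": "tdx", "measurement": "def456"},
    "fl-client2": {"tee_type": "tdx", "measurement": "zzz999"},  # untrusted
}

print(cross_verify(participants, policy))
```

Any party whose peer set contains the untrusted client fails its cross-verification, which is why the policy has to be evaluated from every participant's point of view, not just the server's.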

## CC Policies
- Bootup policy – provided by the hardware vendor
- Self-verification CC policy – user defined
- Cross-verification CC policy – user defined

## Protected Assets
- **Code**: training code (client), aggregation code (server)
- **Data**: input data
- **Model**: initial model, intermediate models, output model
- **Workspace**: checkpoints, logs, temp data, output directory

## Key Broker Service
- Key management depends on the use case, global model ownership, and the key release management process
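
The gating role of the key broker can be sketched as below. This is an in-memory stand-in under stated assumptions: the class, the token-endorsement step, and the key names are hypothetical, not the real Trustee/KBS protocol.

```python
# Hedged sketch: a key broker releases a key only to a workload whose
# attestation token has been endorsed by the attestation service.
# The API surface here is illustrative, not the actual KBS interface.

class KeyBroker:
    def __init__(self):
        self._keys = {}             # key_id -> secret bytes
        self._valid_tokens = set()  # tokens endorsed after passing the CC policy

    def store_key(self, key_id: str, secret: bytes) -> None:
        self._keys[key_id] = secret

    def endorse(self, token: str) -> None:
        """Called once a party's attestation evidence passes the policy."""
        self._valid_tokens.add(token)

    def release_key(self, key_id: str, token: str) -> bytes:
        """Release the key only to attested requesters."""
        if token not in self._valid_tokens:
            raise PermissionError("attestation token not endorsed")
        return self._keys[key_id]

kbs = KeyBroker()
kbs.store_key("global-model-key", b"\x01\x02")
kbs.endorse("token-client1")          # client1 attested successfully
key = kbs.release_key("global-model-key", "token-client1")
```

In a multi-party setting each party would run its own broker holding only its own keys, which is why the MPC requirements below call for connecting to multiple KBSes.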

## Bootstrap
- A process is needed to generate the keys and policies and load them into the key vault, to avoid tampering

### Concerns when using federated learning
- Trustworthiness of the participants
- Execution code tampering
- Model tampering
- Model inversion attacks
- Model theft during training
- Private data leaks

### CoCo will not protect if
- code is already flawed at rest
- data is already poisoned at rest
- model is already poisoned at rest
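
CoCo cannot judge whether these inputs were already bad before deployment, but pinning digests of the assets at build time at least makes later at-rest modification detectable. A minimal sketch, assuming SHA-256 digests recorded by the asset owner (the file names and contents are examples only):

```python
# Hedged sketch: record expected digests of code/data/model artifacts at
# build/release time, then re-check them before launching the workload.
# This detects tampering after the fact; it cannot detect assets that
# were flawed or poisoned to begin with.

import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Recorded by the owner at build time (example assets).
expected = {
    "train.py":  digest(b"def train(): ..."),
    "model.bin": digest(b"\x00\x01initial-weights"),
}

def verify_assets(assets: dict, expected: dict) -> list:
    """Return the names of assets whose content no longer matches."""
    return [name for name, data in assets.items()
            if digest(data) != expected.get(name)]

# Later, before launch: the model file has been modified at rest.
assets = {
    "train.py":  b"def train(): ...",
    "model.bin": b"\x00\x01tampered-weights",
}
print(verify_assets(assets, expected))  # ['model.bin']
```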

## Multi-Party Computing

Source: [OC3 – Keith Moyer – MPC with CC (PDF)](https://uploads-ssl.webflow.com/63c54a346e01f30e726f97cf/6418fb932b58e284018fdac0_OC3%20-%20Keith%20Moyer%20-%20MPC%20with%20CC.pdf)

### Requirements
- Attestation cross-verification
  - In MPC or FL cases, we need to explicitly verify that all participants are trustworthy according to the cross-verification CC policy.
- Periodic self-check and cross-verification
  - A long-running workload can ensure that the TEE is still valid.
- Ad hoc attestation verification for a given participant node
  - From the CLI or API, an application wants to know the current status (trustworthiness) of a specified node.
- Attestation audit report
  - Each party would like details on when attestation was performed by the TEE, along with other relevant details.
- Connecting to multiple KBSes
  - The workload needs to connect to several KBSes, each belonging to a specific party, to get access to that party's keys.
- Support for multiple devices
  - One participant may have only one type of device (CPU), but needs to verify other participants' devices, including different CPUs and GPUs.
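
The periodic self-check requirement can be sketched as a freshness check over each node's last successful attestation. The time window and node names are illustrative assumptions, not values from any CoCo component:

```python
# Hedged sketch: in a long-running MPC/FL workload, a node whose last
# successful attestation is older than the allowed window is treated as
# stale and must re-attest before it may keep participating.
# The 300-second window is an example value, not a CoCo default.

MAX_AGE = 300.0  # seconds allowed between successful attestations

def stale_nodes(last_attested: dict, now: float, max_age: float = MAX_AGE) -> list:
    """Return nodes whose last attestation is older than max_age seconds."""
    return [node for node, t in last_attested.items() if now - t > max_age]

# Example: party-c last attested 600 seconds ago and is now stale.
last_attested = {"party-a": 1000.0, "party-b": 1200.0, "party-c": 700.0}
print(stale_nodes(last_attested, now=1300.0))  # ['party-c']
```

An ad hoc query for a single node (the CLI/API requirement above) is then just this check applied to one entry instead of the whole map.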

## Retrieval Augmented Generation Large Language Models (RAG LLM)

Further RAG use case information:
- [NVIDIA Generative AI Examples](https://github.com/NVIDIA/GenerativeAIExamples)

### Threat Model
Potential threats: [OWASP Top 10 for LLMs](https://owasp.org/www-project-top-10-for-large-language-model-applications/)