| Reference | URL |
| --- | --- |
| US White House Executive Order on Safe, Secure, and Trustworthy AI (2023) | https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ |
| US White House Blueprint for an AI Bill of Rights (2022) | https://www.whitehouse.gov/ostp/ai-bill-of-rights/ |
| EU AI Act | https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206 |
| Canada Artificial Intelligence and Data Act | https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act |
| UK AI Bill | https://bills.parliament.uk/bills/3519 |
| MITRE ATLAS | https://atlas.mitre.org/matrices/ATLAS |
| EU Digital Services Act (DSA) | https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020PC0825 |
| NIST AI RMF Playbook | https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook |
| OWASP Top 10 for LLM Applications | https://genai.owasp.org/llm-top-10/ |
| OWASP Machine Learning Security Top 10 | https://mltop10.info/ |
| Thales Group Secure Machine Learning | https://github.com/ThalesGroup/secure-ml |
| UK National Cyber Security Centre (GCHQ) Machine Learning Principles | https://www.ncsc.gov.uk/files/NCSC-Machine-learning-principles.pdf |
| Gartner AI TRiSM | https://www.gartner.com/en/articles/what-it-takes-to-make-ai-safe-and-effective |
| Databricks AI Security Framework (DASF) | https://www.databricks.com/resources/whitepaper/databricks-ai-security-framework-dasf |
| IBM Framework for Securing Generative AI | https://www.ibm.com/blog/announcement/ibm-framework-for-securing-generative-ai/ |
| Google Secure AI Framework (SAIF) | https://safety.google/intl/en_us/cybersecurity-advancements/saif/ |
| ENISA: AI cybersecurity challenges; securing machine learning algorithms | https://www.enisa.europa.eu/publications/artificial-intelligence-cybersecurity-challenges and https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms |
| ETSI GR SAI 004 (Securing AI): describes the problem of securing AI-based systems and solutions, with a focus on machine learning, and the challenges to confidentiality, integrity, and availability at each stage of the machine learning lifecycle | https://www.etsi.org/deliver/etsi_gr/SAI/001_099/004/01.01.01_60/gr_SAI004v010101p.pdf |
| NIST AI Risk Management Framework (NIST AI 100-1): a process for identifying, assessing, and managing AI risks | https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf and https://www.nist.gov/itl/ai-risk-management-framework |
| NIST Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2 E2023; earlier draft released as NIST IR 8269) | https://csrc.nist.gov/pubs/ai/100/2/e2023/final and https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.pdf |
| IEEE P3119: Standard for the Procurement of Artificial Intelligence and Automated Decision Systems | https://standards.ieee.org/ieee/3119/10729/ |
| PDPC Singapore (AI Verify; background on SG AI governance work) | https://file.go.gov.sg/aiverify.pdf and https://www.imda.gov.sg/-/media/Imda/Files/News-and-Events/Media-Room/Media-Releases/2022/05/Annex-B---Background-on-SG-AI-Governance-Work.pdf |
| ISO/IEC 27090: a framework for managing AI and ML security risks | https://www.iso27001security.com/html/27090.html |
| ISO/IEC 27091: a framework for managing AI and ML privacy risks | https://www.iso27001security.com/html/27091.html |
| ISO/IEC 23894: guidance on managing AI-specific risk for organizations that develop, produce, deploy, or use products, systems, and services that utilize AI | https://cdn.standards.iteh.ai/samples/77304/cb803ee4e9624430a5db177459158b24/ISO-IEC-23894-2023.pdf |
| ISO/IEC 42001: requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations | https://cdn.standards.iteh.ai/samples/81230/2890a4958ba9484ba795f02dbe7f5407/ISO-IEC-FDIS-42001.pdf |
| Microsoft and Harvard University: Failure Modes in Machine Learning | https://docs.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning |
| The Linux Foundation LF AI and Open-Source Security (AIOSS) Project: developing a framework for securing AI and ML systems built on open-source software | https://lfaidata.foundation/projects/ |
| NCC Group: Practical Attacks on Machine Learning Systems (whitepaper) | https://research.nccgroup.com/2022/07/06/whitepaper-practical-attacks-on-machine-learning-systems/ |