JOINT HIGH-LEVEL RISK ANALYSIS ON AI
TYPES OF AI ATTACKS
EXTRACTION
EVASION
POISONING
RISK SCENARIOS INVOLVING AN AI SYSTEM
LATERALIZATION VIA INTERCONNECTIONS BETWEEN AI AND OTHER SYSTEMS
HUMAN AND ORGANIZATIONAL FEATURES
SUPPLY CHAIN ATTACK
MALFUNCTION IN AI RESPONSES
COMPROMISING AI HOSTING AND MANAGEMENT INFRASTRUCTURE
GUIDELINES FOR AI USERS, OPERATORS AND DEVELOPERS
Implementing a process to anticipate major technological and regulatory changes and identify potential new threats
Mapping of the AI supply chain, including both AI components and other hardware and software components, as well as datasets
Continuously monitoring and maintaining AI systems, to ensure that they work as intended, without bias or vulnerability
Keeping track of the interconnections between AI systems and the rest of the information system
Training and raising awareness internally on the challenges and risks of AI, including among executives, to ensure that high-level decision-making is well informed
Adjusting the autonomy level of the AI system to the risk analysis, the business needs, and the criticality of the actions undertaken
GUIDELINES FOR POLICY-MAKERS
Continue promoting best cybersecurity practices to ensure the secure deployment and hosting of AI systems
Foster dialogue between cyber and AI actors
Support the development of security evaluation and certification capacity based on shared standards
Continue dialogue beyond the AI Summit
Support research relevant to these risks
RECOMMENDATIONS FOR THE SECURE IMPLEMENTATION OF AN AI SYSTEM
Recommended Self-Assessment
Explicit Purpose
Have I defined and documented the explicit and legitimate purposes of the AI system from the design phase?
Regulatory Compliance
Have I integrated regulatory aspects into the design process and ensured compliance with applicable laws?
Access Control
Who has access to the AI system at different stages of its lifecycle?
Least Privilege
Is the principle of least privilege applied to ensure security and integrity?
Dependency Chain
What is the AI system's dependency chain?
Supplier Reputation
What is the reputation and financial health of my suppliers?
Vendor Cybersecurity Standards
Do my vendors meet necessary cybersecurity standards?
Cloud Solution Necessity
Is a cloud solution necessary, and have I conducted a global risk assessment?
Reversibility Clause
Do I have a reversibility clause in my service agreement with providers that handle my data, and is it feasible?
AI Impact on Business
Can AI malfunctions endanger my organization?
Security at Every Stage
Is there a security foundation at each stage of the AI system life cycle?
Model Confidentiality
Should the confidentiality of my AI models be protected, given their value to my organization?
Data Protection
Have I integrated privacy-by-design measures to protect personal data and metadata, including AI models?
Checklist of Recommended Actions
General Recommendations
Limit AI automation for critical actions on other systems.
Ensure thoughtful AI integration into critical processes with safeguards.
Perform dedicated risk analysis across the entire organizational context.
Study the security of every phase of the AI lifecycle.
Conduct a data protection impact assessment (if required).
Identify, track, and protect AI-related assets.
Infrastructure and Architecture Recommendations
Define the modalities of AI system usage and integration into decision-making processes.
Apply cloud-specific measures and regulations.
Implement outsourcing recommendations if applicable.
Use secure administration practices for AI systems.
Leverage controlled access for critical AI components.
Have a deployment plan.
Design AI architecture to scale without compromising security.
Apply DevSecOps principles throughout the project.
Design AI with privacy by design, ensuring data protection throughout its lifecycle.
Ensure the pseudonymization or anonymization of data where necessary.
Take the need-to-know principle into account when designing the AI system.
Take data confidentiality issues into account.
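The access-control items above (controlled access to critical AI components, need-to-know) can be sketched as a deny-by-default permission check. This is a minimal illustration; the roles and actions shown are assumptions, not part of the guidelines.

```python
# Minimal deny-by-default permission check for AI system components.
# Roles and actions are illustrative placeholders.
PERMISSIONS = {
    "data_scientist": {"training_data:read", "model:train"},
    "ml_ops": {"model:read", "model:deploy"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant only actions explicitly listed for the role; deny everything else."""
    return action in PERMISSIONS.get(role, set())
```

Denying by default, rather than maintaining a block-list, is what makes the least-privilege and need-to-know principles enforceable in practice.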
Resource Vigilance
Use secure formats for obtaining, storing, and distributing AI models.
Implement integrity checks for model files before loading.
Assess trustworthiness of libraries and plugins used in the system.
Ensure external data quality and confidence in its sources.
Ensure traceability of actions taken on the AI system.
Collect data ethically for system development and operation.
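The integrity-check item above can be sketched as follows: verify a model file's SHA-256 digest against a trusted manifest before loading it. The file name and digest below are illustrative; a real manifest would be distributed and protected out-of-band.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of trusted model digests, obtained out-of-band.
TRUSTED_DIGESTS = {
    "classifier.safetensors":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_model_file(path: Path) -> bool:
    """Return True only if the file's SHA-256 digest matches the manifest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = TRUSTED_DIGESTS.get(path.name)
    return expected is not None and digest == expected
```

A model file that fails this check should be rejected before any deserialization takes place, since loading untrusted model formats can itself execute code.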
Reliable Application
Implement multi-factor authentication for AI administration tasks.
Ensure the confidentiality and integrity of inputs and outputs.
Use security filters to detect malicious instructions.
Maintain up-to-date data, metadata, and annotations.
Continuously evaluate model accuracy and performance.
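The "security filters" item above can be illustrated with a simple pattern-based input filter. The deny-list below is a sketch under the assumption of a text-prompted system; a real deployment would combine such patterns with model-based classifiers and output-side checks.

```python
import re

# Illustrative deny-list of prompt-injection phrasings; not exhaustive.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
]

def flag_input(text: str) -> bool:
    """Return True if the input matches a known malicious-instruction pattern."""
    return any(p.search(text) for p in SUSPICIOUS_PATTERNS)
```

Flagged inputs can be blocked, logged, or routed for review, which also supports the traceability recommendation earlier in this checklist.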
Organizational Strategy
Document design choices.
Supervise AI system operations.
Oversee subcontractor use.
Implement a risk management strategy.
Provide for degraded operations without AI.
Establish policies for generative AI usage.
Monitor AI system-specific vulnerabilities and technical developments.
Implement a data management system and secure deletion methods.
Preventive Measures
Regularly train staff on AI-related security risks.
Conduct regular security audits of the AI system.
Anticipate potential issues related to intellectual property and data protection in training data or models.
Secure and Harden the Learning Process
Implement strict access policies for sensitive data.
Secure access and storage of training data.
Assess security of learning and re-learning methods.
Ensure data, metadata, and model integrity, including pseudonymization where necessary.
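The pseudonymization item above can be sketched with keyed hashing: a direct identifier is replaced by an HMAC-SHA256 pseudonym, so records remain linkable for training while the identity is recoverable only by whoever holds the key. The key name and identifier below are illustrative.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The key must be stored separately from the training data; without it,
    the pseudonym cannot be linked back to the original identifier.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()
```

Using a keyed hash rather than a plain hash matters here: with SHA-256 alone, common identifiers such as email addresses could be recovered by dictionary attack.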