OECD framework
1. Framework overview
Purpose of the framework
To characterise the application of an AI system deployed in a specific project and context. The framework classifies AI systems and applications along the dimensions listed below.
Dimensions of the framework
People & Planet
Economic context
Data & Input
AI model
Task & Output
Uses for the framework
Understanding AI Better
Promote Understanding: The framework helps various people involved with AI—from policymakers to business leaders—understand what AI does and its key features. This shared understanding is essential so they can make better rules and decisions specific to different AI technologies.
Describing AI Systems
Registry of AI Systems: The framework can be used to describe AI systems in detail in registries or lists that track automated decision-making systems. This is especially useful in places like government or large organizations that need to keep track of different AI technologies they use or regulate.
Specialized Applications
Detailed Guidelines for Specific Sectors: The framework can be adapted to create more specific guidelines for AI in particular areas like healthcare or finance. For instance, in the UK, healthcare regulators use it to decide how new AI technology should be evaluated before it's used in hospitals or clinics.
Risk Assessment Tool
Identifying and Minimizing Risks: It helps in building tools that assess the risks involved with AI systems. This means identifying potential problems an AI might cause and figuring out ways to reduce these risks before they become actual issues.
Guidance for AI Management
Mitigation and Compliance: The framework guides how to manage AI systems throughout their life—from creation to retirement—ensuring they are used safely and ethically. This includes making sure AI behaves as it should and adhering to laws and regulations.
Key Elements
People & Planet
users -> What is the level of competency of users who interact with the system?
stakeholders -> Who is impacted by the system (e.g. consumers, workers, government agencies)?
optionality -> Can users opt out, e.g. switch systems? Can users challenge or correct the output?
human rights -> Can the system’s outputs impact fundamental human rights (e.g. human dignity, privacy, freedom of expression, non-discrimination, fair trial, remedy, safety)?
well-being, society & environment -> Can the system’s outputs impact areas of life related to well-being (e.g. job quality, the environment, health, social interactions, civic engagement, education)?
{displacement} -> Could the system automate tasks that are or were being executed by humans?
Economic Context
business function & model
business model -> Is the system a for-profit use, non-profit use or public service system?
business function -> What business function(s) is the system employed in (e.g. sales, customer service)?
criticality -> Would a disruption of the system’s function / activity affect essential services?
{scale & maturity}
breadth of development -> Is the AI system deployment a pilot, narrow, broad or widespread?
technical maturity -> How technically mature is the system (Technology Readiness Level, TRL)?
industrial sector -> Which industrial sector is the system deployed in (e.g. finance, agriculture)?
Data & Input
collection
detection & collection -> Are the data and input collected by humans, automated sensors or both?
provenance of data & input -> Are the data and input from experts, provided, observed, synthetic or derived?
dynamic nature -> Are the data dynamic, static, dynamic updated from time to time or real-time?
rights & identifiability
rights -> Are the data proprietary, public or personal data (related to identifiable individual)?
identifiability of personal data -> If personal data, are they anonymised or pseudonymised?
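To make the anonymised / pseudonymised distinction concrete, here is a minimal Python sketch (hypothetical record and field names, not part of the framework): pseudonymisation replaces the direct identifier with a keyed token that the key holder can re-derive, whereas anonymisation would also require removing or generalising indirect identifiers such as age and postcode.

```python
# Minimal sketch (hypothetical record/field names): pseudonymisation of a personal-data record.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # assumption: the key is stored separately from the data

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash; other attributes are left untouched."""
    token = hmac.new(SECRET_KEY, record["email"].encode(), hashlib.sha256).hexdigest()
    out = dict(record)
    out["email"] = token  # the key holder can re-derive this token, so the data are pseudonymised, not anonymous
    return out

record = {"email": "jane@example.org", "age": 41, "postcode": "EC1A"}
print(pseudonymise(record))  # indirect identifiers (age, postcode) remain in the record
```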
structure & format
structure of data & input -> Are the data structured, semi-structured, complex structured or unstructured?
format of data & metadata -> Is the format of the data and metadata standardised or non-standardised?
{scale} -> What is the dataset’s scale?
{data quality & appropriateness} -> Is the dataset fit for purpose? Is the sample size adequate? Is it representative and complete enough? How noisy are the data?
AI model
model characteristics
model info availability -> Is any information available about the system’s model?
AI model type -> Is the model symbolic (human-generated rules), statistical (uses data) or hybrid? (a sketch of this distinction follows this list)
rights associated -> Is the model open-source or proprietary, self or third-party managed?
discriminative or generative -> Is the model generative, discriminative or both?
single or multiple models -> Is the system composed of one model or several interlinked models?
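To illustrate the AI model type question above, here is a minimal sketch (hypothetical spam-filter example, not taken from the framework) contrasting a symbolic model built from human-written rules with a statistical model estimated from labelled data; a hybrid system would combine both, for example using rules to catch obvious cases before a learned classifier.

```python
# Minimal sketch (hypothetical spam-filter example) of the symbolic vs statistical model types.
from collections import Counter

def symbolic_spam_filter(message: str) -> bool:
    """Symbolic: behaviour comes entirely from human-written rules."""
    banned_phrases = {"free money", "act now", "winner"}
    return any(phrase in message.lower() for phrase in banned_phrases)

def train_statistical_spam_filter(messages, labels):
    """Statistical: behaviour is estimated from labelled data (naive per-word counts)."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in zip(messages, labels):
        (spam_words if is_spam else ham_words).update(text.lower().split())

    def classify(message: str) -> bool:
        words = message.lower().split()
        return sum(spam_words[w] for w in words) > sum(ham_words[w] for w in words)

    return classify

classify = train_statistical_spam_filter(
    ["free money now", "meeting at noon", "you are a winner"],
    [True, False, True],
)
print(symbolic_spam_filter("Act now to claim"), classify("claim free money"))
```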
model building
model-building from machine or human knowledge -> Does the system learn based on human-written rules, from data, through supervised learning or through reinforcement learning?
model evolution in the field -> Does the model evolve and / or acquire abilities from interacting with data in the field?
central or federated learning -> Is the model trained centrally or in a number of local servers or “edge” devices?
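To make the central vs federated distinction concrete, here is a toy sketch (NumPy, made-up data) of federated averaging for a linear model: each client computes an update on its own local data and only the model weights are averaged centrally, whereas centralised training would pool all the data on one server.

```python
# Toy sketch of federated averaging (made-up data, linear model).
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a single client's local data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]  # made-up local datasets

weights = np.zeros(3)
for _ in range(10):  # communication rounds
    local_weights = [local_step(weights, X, y) for X, y in clients]  # raw data stay on each client
    weights = np.mean(local_weights, axis=0)  # the server only averages the weights (federated averaging)

print(weights)  # centralised training would instead pool all X, y on one server and fit once
```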
model inference
model development/maintenance -> Is the model universal, customisable or tailored to the AI actor’s data?
e.g. Universal models are broadly applicable, like the spell-check in word processors; customisable models allow user-specific adjustments, such as voice recognition software adapting to accents; and tailored models are specifically designed for particular applications, such as fraud detection systems customised for individual banks.
deterministic & probabilistic -> Is the model used in a deterministic or probabilistic manner?
e.g. Distinguishes whether the model operates in a deterministic (always produces the same output from the same input) or probabilistic (outcomes have probabilities associated with them) manner; a sketch of this distinction follows this list.
transparency & explainability -> Is information available to users to allow them to understand model outputs?
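As a toy sketch of the deterministic vs probabilistic distinction mentioned above (hypothetical class probabilities, not from the framework): deterministic use always returns the most likely class, while probabilistic use samples an outcome according to the predicted probabilities.

```python
# Toy sketch of deterministic vs probabilistic use of the same model output.
import random

# Hypothetical class probabilities produced by a model for one input.
class_probs = {"approve": 0.7, "review": 0.2, "reject": 0.1}

# Deterministic use: the same input always yields the same output (the most likely class).
deterministic_output = max(class_probs, key=class_probs.get)

# Probabilistic use: the output is sampled according to the predicted probabilities,
# so repeated runs on the same input can differ.
labels, weights = zip(*class_probs.items())
probabilistic_output = random.choices(labels, weights=weights, k=1)[0]

print(deterministic_output, probabilistic_output)
```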
Task & Output
tasks
tasks of the system -> What tasks does the system perform (e.g. recognition, event detection, forecasting)?
{combining tasks & action into composite systems} -> Does the system combine several tasks and actions (e.g. content generation systems, autonomous systems, control systems)?
action autonomy -> How autonomous are the system’s actions and what role do humans play?
application area(s) -> Does the system belong to a core application area such as human language technologies, computer vision, automation and / or optimisation or robotics?
{evaluation methods} -> Are standards or methods available for evaluating system output?
2. Dimensions in detail
AI system lifecycle
Planning and Design: This initial stage involves defining the system's purpose, designing its architecture, and specifying the tasks it will perform.
Data Collection and Processing: Data essential for training and operating the AI system is gathered and prepared. This includes data selection, cleaning, labeling, and ensuring it meets the necessary quality standards.
Building the Model: During this phase, the AI model is developed, trained, tested, and validated. Depending on the model, this might involve machine learning, rule-based algorithms, or a hybrid approach.
Deployment: The AI system is implemented in a real-world environment where it begins to perform its intended tasks.
Operation and Monitoring: This ongoing phase involves the regular supervision of the AI system to ensure it operates as intended, and making adjustments as needed based on performance data and evolving conditions.
Maintenance and Updating: The AI system may need updates or retraining to adapt to new data or changes in its operating environment. This phase ensures the system remains effective and secure over time.
Interaction with the Classification Framework Dimensions
People & Planet: Throughout the lifecycle, considerations related to human and environmental impacts are assessed. For example, during the planning phase, the potential effects of the AI system on various stakeholders and the environment are considered to guide design choices that promote ethical AI usage.
Economic Context: The economic implications of the AI system are considered during the planning and deployment phases. This includes evaluating the sectors and markets the AI will impact and understanding the economic benefits and disruptions it may cause.
Data & Input: This dimension is crucial during the data collection and processing phase. It involves ensuring that data used is representative, ethically sourced, and respects privacy standards. This phase also assesses how data inputs will affect the behavior and outputs of the AI system.
AI Model: This dimension is integral during the model building phase. It includes decisions on whether the model should be discriminative or generative, the transparency of the model, and how the model’s evolution is managed once deployed.
Task & Output: This dimension focuses on the deployment and operation phases, where the tasks performed by the AI system and their outputs are monitored for accuracy, fairness, and effectiveness. Adjustments are made based on this monitoring to ensure the system meets its intended goals without causing unintended harm.
This section provides tables/checklists for working out which boxes the system we are building ticks, and so gives us the tools to define our system. For instance, for People & Planet, the human rights field has a table/checklist listing individual rights, with one column to mark that the AI system impacts that right and another to mark that it has no impact on it. A minimal sketch of such a checklist follows.
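A minimal sketch of how such a rights checklist could be represented in code (the rights listed follow the examples under People & Planet; the row and column names are assumptions, not taken from the framework):

```python
# Minimal sketch (assumed column names) of the human-rights checklist described above:
# one row per right, with columns marking whether the system impacts it or not.
from dataclasses import dataclass

@dataclass
class RightsChecklistRow:
    right: str
    impacted: bool   # mark if the system's outputs can impact this right
    no_impact: bool  # mark if the system has no impact on this right

    def __post_init__(self):
        assert self.impacted != self.no_impact, "mark exactly one column per right"

checklist = [
    RightsChecklistRow("human dignity", impacted=False, no_impact=True),
    RightsChecklistRow("privacy", impacted=True, no_impact=False),
    RightsChecklistRow("non-discrimination", impacted=True, no_impact=False),
]

for row in checklist:
    print(f"{row.right:20s} {'impacted' if row.impacted else 'no impact'}")
```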
Answering the questions for each dimension shows how the system can be described -> some categories are explained below
Business model (Economic Context)
Business models:
For-profit use – subscription fee model:
Explanation: In this model, users pay a recurring fee, typically monthly or annually, to access the AI system.
Example: Software as a Service (SaaS) platforms that charge businesses for access to their AI tools, such as customer relationship management (CRM) systems or data analytics services.
For-profit use – advertising model:
Explanation: The AI system is offered to users for free or at a reduced cost, and revenue is generated through advertisements shown to the users.
Example: Social media platforms and search engines that use AI to target ads based on user data and behaviors.
For-profit use – other model:
Explanation: Any other profit-driven business model not covered by subscription or advertising. This could include pay-per-use, licensing, or freemium models where basic services are free but advanced features are paid.
Example: A cloud computing service charging per computation or storage usage, or a mobile app offering basic features for free and charging for premium features.
Non-profit use (outside public sector) – voluntary donations and community models:
Explanation: AI systems operated by non-profit organizations rely on voluntary donations from individuals, grants, or community funding to support their operations.
Example: An open-source AI project funded by community donations or a non-profit using AI to provide social services, supported by grants and donations.
Public service:
Explanation: AI systems used by government or public sector entities to provide services to the public without direct charges. These services are usually funded by taxpayer money.
Example: AI used in public health for disease outbreak prediction, in law enforcement for crime prevention, or in public administration for improving citizen services.
Other:
Explanation: Any other business model that does not fit into the above categories. This could be hybrid models or innovative approaches tailored to specific needs and contexts.
Example: AI systems used in a cooperative business model where users have ownership and control over the AI system and its outputs.
Structure of data (Data & Input)
Unstructured Data
Definition: Data that do not adhere to a specific data model or are not organized in a predefined way.
Characteristics: This includes data such as text, images, audio, and video. Unstructured data can also include sensor data or data from social media platforms that don't have a regular or easily predictable format.
Challenges: They are often full of irregularities and ambiguities, making them difficult to process using traditional software that relies on structured input.
Examples: Video files, audio recordings, free-form text documents, social media posts.
Semi-structured Data
Definition: A mix of structured and unstructured data elements.
Characteristics: Semi-structured data does not fit into a rigid structure but contains tags or other markers to separate semantic elements and enforce hierarchies of records and fields.
Examples:
Emails (which contain both free-form text and structured elements like headers)
JSON files (which represent objects in a structured yet flexible format)
HTML documents (which have structured tags but contain lots of unstructured text)
Structured Data
Definition: Data that are organized in a defined manner and usually stored in databases.
Characteristics: Structured data is highly organized and formatted in a way that is easy to search and manipulate. It typically includes relational databases where each column of a table represents a category of data and each row contains a data value for the corresponding category.
Examples:
Database tables
Spreadsheets
Any data that can be easily entered, queried, and analyzed in relational databases
Complex Structured Data
Definition: Structured data that includes models or schemas that represent more complex relationships between data elements.
Characteristics: These data are often outputs from one AI system and inputs into another, featuring rich interconnections or dependencies.
Examples:
Ontologies that provide a structured representation of knowledge within a particular domain
Knowledge graphs that map relationships among entities
Complex algorithms like those used in adversarial learning or reinforcement learning
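To make the four categories above concrete, here is a small Python sketch with made-up records showing the same kind of customer information as structured, semi-structured, unstructured and complex structured data.

```python
# Made-up records illustrating the four structure categories described above.

# Structured: fixed columns, as in a database table or spreadsheet.
structured = "customer_id,name,country\n1,Ana,PT\n2,Ben,UK"

# Semi-structured: hierarchical and tagged but flexible, e.g. a JSON-like document.
semi_structured = {
    "customer_id": 1,
    "name": "Ana",
    "orders": [{"sku": "A-100", "note": "please gift wrap"}],
}

# Unstructured: free-form text (or images, audio, video) with no predefined data model.
unstructured = "Ana emailed to say the parcel arrived late but the product itself is great."

# Complex structured: explicit relationships between entities, e.g. a small knowledge graph
# expressed as (subject, relation, object) triples.
complex_structured = [
    ("Ana", "placed", "Order#17"),
    ("Order#17", "contains", "A-100"),
    ("A-100", "is_a", "Product"),
]

print(structured, semi_structured, unstructured, complex_structured, sep="\n")
```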
Data quality & appropriateness (Data & Input)
Identifiability of data (Data & Input)
Rights associated with data & input
3. Applying the framework