Responsibility by design applied to AI
Artificial Intelligence has the potential to transform the way we interact with technology and to redefine the boundaries of what is possible. AI has demonstrated its ability to generate cross-cutting benefits in many areas, from improving human wellbeing and health to driving environmental sustainability.
Its application in the operations of organisations, companies and industrial sectors enables new business models. It also changes the way we research and innovate, improves the efficiency and sustainability of production processes and supply chains, and redefines our capabilities and ways of working.
However, along with these opportunities, AI also brings ethical challenges and responsibilities that must be addressed proactively in order to harness its potential and benefits responsibly.
The European Parliament says Artificial Intelligence can boost European productivity by 11-37% by 2035
In this sense, the responsibility by design approach applied to Artificial Intelligence is a strategy that ensures AI systems and models are developed and used in an ethical, transparent and accountable way from their very conception.
What is the 'responsibility by design' approach to AI?
Responsibility by design is a methodology applied across many sectors and industries, from scientific research to urban planning to the design of everyday products and services.
In the field of Artificial Intelligence, responsibility by design refers to the integration of ethical and responsible considerations from the early stages of design to anticipate and address potential ethical, legal and social issues that may arise with the use of AI.
This implies that AI designers and developers must take into account aspects such as transparency, fairness, privacy, security and social impact, both of the models and algorithms and of the data used, in order to build trustworthy Artificial Intelligence.
Responsibility by design is applied not only at the start of the process but throughout the entire development lifecycle, in order to assess possible impacts and consequences as the system evolves and is used in different scenarios, cultures and contexts.
AI responsible by design
A responsibility by design approach, built on existing methodologies such as privacy by design and security by design, raises considerations to be taken into account from the initial conceptualisation and development phases, and throughout the entire model lifecycle, to ensure the ethical and responsible development of AI projects. These include:
- Build multidisciplinary and diverse teams (including experts in ethics, legislators, industry, civil society...) to incorporate different perspectives and knowledge and to avoid bias and discrimination in the results.
- Self-assess, throughout development and operation, the possible undesirable effects the system may have on users, society and the environment (a minimal sketch of one such check follows this list).
- Define security measures for the handling of information and personal data to ensure user privacy and regulatory compliance.
- Understand, at all times and throughout the development, training and deployment process, how the system works and how it reaches its decisions, so that those decisions can be explained and biases can be detected.
- Monitor constantly, throughout the lifecycle, how the system behaves, to identify potential impacts as it evolves and is applied in different domains and contexts.
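To make the self-assessment and monitoring items above a little more concrete, here is a minimal, illustrative sketch (not Telefónica's actual tooling): it compares a model's positive-outcome rates across demographic groups from logged decisions, one fairness signal a responsible-by-design review might track over the lifecycle. The group labels, sample data and 0.2 review threshold are hypothetical.

```python
# Illustrative self-assessment check: compare positive-outcome rates
# across groups and flag large gaps for human review.
from collections import defaultdict

def positive_rate_by_group(records):
    """Share of positive model outcomes per group.

    `records` is a list of (group, outcome) pairs, where outcome is 1
    for a positive decision (e.g. an approval) and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in positive rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    # Hypothetical decisions logged while the system is in operation.
    decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = positive_rate_by_group(decisions)
    gap = demographic_parity_gap(rates)
    print({g: round(r, 2) for g, r in rates.items()})  # {'A': 0.67, 'B': 0.33}
    print(f"gap={gap:.2f}")
    # The acceptable threshold would be set by the governance process.
    if gap > 0.2:
        print("Flag for review: outcome rates diverge across groups.")
```

In a real project a check like this would typically rely on established fairness toolkits and be wired into the model's regular monitoring pipeline rather than run as a standalone script.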
Our Artificial Intelligence principles
In this regard, at Telefónica Tech we adopt the ethical AI principles that Telefónica Group defined in 2018 and updated in 2024. They form part of a broader methodology focused on responsibility by design and commit us to developing and using Artificial Intelligence that is:
- Fair, so that the models do not generate results with discriminatory or unfair biases or impacts.
- Transparent and explainable, disclosing the data we use to train the models and their purpose, ensuring that their decisions are understood.
- People-centred, respectful of Human Rights and aligned with the UN Sustainable Development Goals.
- Respectful by design of people's privacy, their data and the security of information.
- Truthful in its logic and in the data used, including by our suppliers.
We further require our suppliers to have similar AI principles or to adopt our own, with the aim of building, together, AI models that harness the potential and benefits offered by AI while protecting human rights, democracies and the rule of law.
Featured image: This is Engineering RAEng / Unsplash.