Trending Techies: four presentations by experts in data and responsible AI
A new Trending Techies event was held yesterday: a face-to-face meeting organised by Telefónica Tech to build community and conversation between professionals, students, companies and the public interested in next-generation digital technologies.
The event, hosted by Manu de Luna, data scientist at Telefónica Tech, featured four professionals and experts who spoke and debated with attendees about the responsible use of data in Artificial Intelligence models.
AI systems transparency
The first presentation was given by Cristina Contero, from Aphaia, with her talk "Mirror, mirror, what's behind this AI".
Cristina spoke about the different categories of Artificial Intelligence and how all of them are already subject to regulation, even before any AI-specific rules come into force, through laws such as the General Data Protection Regulation (GDPR).
In this sense, Artificial Intelligence (or 'automated decision-making processes', for the purposes of the GDPR) is intrinsically related to data, whether because the data used to train AI models includes personal information or because automating decisions affects the fundamental rights of individuals.
"If our AI uses personal data today, it already has to meet a number of transparency requirements," she explained.
AI cannot succeed without widespread adoption, which is why developers themselves are working to build trust in these systems.
AI-guided digital workers
In his presentation "Digital Workers guided by AI: a challenge with many benefits", Marty Mallavibarrena, Senior Solutions Consultant at SS&C Blue Prism, explained how RPA (Robotic Process Automation) platforms work.
These platforms allow software (non-physical) robots to be created and put to work on repetitive tasks, such as reading ID card data when someone registers for a service or files a claim with an insurance company.
These RPA platforms are increasingly applying Artificial Intelligence techniques to make these "digital coworkers" more effective and efficient and to meet market demands.
In his presentation, Marty walked through several use cases and thought-provoking examples, and discussed the ethical and technical challenges of applying AI responsibly in this field, among them how to control biases in data and models at the source, and how to ensure data compliance.
Keys to harnessing AI tools in journalism
Teresa Mondría, product manager at Igeneris, gave a talk on how to use new technologies to innovate in journalism. The use of AI as a journalistic tool affects both those who produce information and those who consume it.

Teresa illustrated with specific examples the real applications of AI in the news production process, such as translation, verification (debunking hoaxes, checking data...) and the research and analysis of large volumes of documentation. "Tasks where AI brings differential value and can be very useful."
It is also applied to news tagging, social listening, or content recommendation. Teresa shared the keys for journalists to incorporate Artificial Intelligence in the production of information:
- Respond to a clear objective
- Treat the AI as a source (and sources are verified)
- Understand how and by whom the data is processed
- Give assurances to journalists and readers
- Be transparent
Unbiased data: preventing bias
'The challenge of impartiality in AI in investigative journalism and data' was the fourth and final presentation of the meeting, delivered by Antonio Delgado, co-founder of Datadista. He highlighted how AI helps to process and analyse data, especially when dealing with large volumes of information.
Clustering processes that used to be slow are now tasks completed in seconds thanks to AI.
However, he stressed the importance of data fairness when training and using AI models: "When it comes to data journalism, tools like ChatGPT bring significant ethical and practical challenges around fairness."
Language models such as GPT have been trained by extracting data from the internet and can therefore learn and reproduce biases in that data.
In contrast, "AI models trained with our own data and information sources allow us to perform tasks such as searching and summarising information in large volumes of documents efficiently, analysing large datasets to discover patterns and trends, and prototyping content quickly."
Even so, in this case "we must be aware and critical of the potential biases that we ourselves can introduce into our training data, to ensure the impartiality and accuracy of our work."
Antonio shared three keys to avoid introducing biases into our models:
- Algorithms must be transparent and understandable
- Apply the ethical codes of journalism
- Verify the facts and data obtained (treat AI as a source)
If you would like to take part in or attend the next Telefónica Tech Trending Techies, join the Cyber Security Techies or Data Techies meetup communities.