Nacho Palou

Higher Technician in Electronics, but a wordsmith at heart. Marketing & Comms at Telefónica Tech. I'm interested in both disruptive and obsolete technologies. When I'm not writing, I'm taking photos.

Connectivity & IoT
Ambient IoT and AI: the fusion enabling intelligent environments
Content

Ambient IoT + AI = intelligent and autonomous environments: Battery-free sensors powered by ambient energy enable real-time, human-free decision-making.

Real-world applications in industry and cities: From supply chain traceability to traffic management and air quality monitoring in smart cities.

Key challenges and standards for adoption: Interoperability, Cyber Security, and energy efficiency shape the path toward scalable and sustainable ecosystems.

What is Ambient IoT?

Ambient Internet of Things (Ambient IoT) extends traditional IoT by using interconnected IoT devices that operate autonomously, without conventional batteries, powered instead by energy harvesting. In this way, Ambient IoT integrates seamlessly into the environment, minimizing human intervention and enabling decentralized decision-making. It allows for real-time data processing and analysis without centralized servers, enhancing security, reducing latency, and improving both efficiency and sustainability.

This results in greater scalability and sustainability, driving adoption across sectors such as logistics, urban management, and industrial automation. It also applies to agriculture, where self-powered sensors can monitor soil moisture and weather conditions with minimal maintenance needs.

■ The concept of Ambient IoT, coined by 3GPP in 2023, has gained momentum with the creation of the Ambient IoT Alliance, which promotes its global adoption and standardization.

Invisible Ambient Intelligence (IAI)

Invisible Ambient Intelligence (IAI) emerges from the integration of AI into these smart devices and connected sensors designed to operate discreetly and without human intervention. IAI blends into the environment without altering the user experience, working transparently and without requiring explicit interactions. The goal of this convergence is to create interconnected ecosystems where data is collected, processed, and applied autonomously.
Based on principles of ubiquitous computing, machine learning, and ambient energy, IAI delivers intelligent solutions for urban, industrial, and domestic environments. IAI is enabled by combining low-power connectivity, optimized IoT devices, and energy harvesting techniques that eliminate the need for batteries.

In this context, Edge AI and Edge Computing systems process data collected or generated by IoT devices to deliver real-time, automatic, and adaptive responses. This facilitates use cases from smart city traffic optimization to predicting failures in industrial systems, enhancing efficiency and reducing costs.

Ambient IoT standards and protocols

To ensure Ambient IoT works efficiently and at scale, it is essential to adopt low-power communication protocols such as NB-IoT, which enable data transmission with minimal energy consumption. The evolution of this technology is grounded in solutions like passive RFID, widely used for battery-free identification. This is commonly applied in ID cards, stock management, or public transport access, enabling efficient interaction without internal power sources.

Leading tech organizations are aligning around this connectivity technology, including:

3GPP optimizes mobile networks for low-power IoT devices, including:

NB-IoT (Narrowband IoT) is a narrowband connectivity solution for IoT devices requiring low energy consumption, wide coverage (even indoors), and low-cost connectivity.

LTE-M (LTE for Machines) offers greater bandwidth than NB-IoT and supports voice, mobility, and lower latency. It suits Ambient IoT scenarios requiring fast or mobile data transfers.

IEEE develops protocols to optimize data transmission in IoT networks, including:

UWB (Ultra-Wideband) is a short-range radio technology offering high spatial precision (centimeter-level) and accurate data communication, ideal for Ambient IoT applications needing exact positioning.
Bluetooth Low Energy (BLE) is well suited for Ambient IoT due to its low power usage and compatibility with mobile devices.

RFID (Radio Frequency Identification) uses electromagnetic fields to automatically identify and track passive, battery-free tags attached to objects.

MQTT and CoAP protocols enable efficient data transmission on resource-constrained devices, ensuring that information flows without excessive power consumption.

■ NB-IoT is key to Ambient IoT as it supports long-range connectivity with ultra-low power usage, enabling devices to operate without batteries or with multi-year autonomy.

Applications transforming industries and cities

Establishing technology standards is essential to enable applications and use cases capable of transforming entire sectors, from supply chain optimization to the development of smart cities.

Use case: Ambient IoT in the supply chain

Ambient IoT has impacted supply chain management through the integration of self-powered sensors into products and packaging to provide constant monitoring of key conditions such as:

Temperature and humidity: Sensors in the pharmaceutical and food industries ensure that goods are transported under appropriate conditions.

Location and traceability: IoT devices enable real-time shipment tracking, enhancing logistics and reducing losses.

Fraud prevention: Detection of tampering in cargo helps fight product counterfeiting.

■ IAI enables Ambient IoT solutions to be predictive rather than merely reactive, anticipating and responding to events. This is crucial in urban management or industrial automation, where failure prevention can reduce downtime, cut costs, and boost efficiency.
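The battery-free operation that energy harvesting and low-power radios make possible ultimately comes down to an energy budget: harvested power must cover the sleep current plus the cost of each transmission. A minimal back-of-the-envelope sketch, with all figures invented for illustration (real harvesters and radios vary by orders of magnitude):

```python
# Back-of-the-envelope energy budget for a battery-free Ambient IoT sensor.
# All figures below are illustrative assumptions, not vendor specifications.

HARVESTED_POWER_UW = 100.0  # ambient energy harvested (microwatts, e.g. indoor light/RF)
SLEEP_POWER_UW = 1.0        # deep-sleep draw of the microcontroller
TX_ENERGY_UJ = 5_000.0      # assumed energy cost of one low-power transmission (microjoules)

def transmissions_per_day(harvested_uw: float, sleep_uw: float, tx_uj: float) -> int:
    """How many transmissions per day the surplus harvested energy can fund."""
    surplus_uw = harvested_uw - sleep_uw
    if surplus_uw <= 0:
        return 0
    surplus_uj_per_day = surplus_uw * 86_400  # 1 uW sustained for one second = 1 uJ
    return int(surplus_uj_per_day // tx_uj)

print(transmissions_per_day(HARVESTED_POWER_UW, SLEEP_POWER_UW, TX_ENERGY_UJ))  # 1710
```

With these assumed numbers the harvesting surplus funds roughly 1,700 transmissions a day; halving the harvested power or doubling the transmission cost scales the result accordingly, which is why protocol efficiency matters so much in this class of device.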
Ambient IoT's impact on smart cities

Ambient IoT is transforming urban management through self-powered sensors that monitor the environment in real time, integrating technology invisibly to enable smart automation of urban processes without human intervention, including:

Air quality monitoring with sensors detecting pollutants and providing accurate data to improve environmental regulations.

Traffic optimization using sensors in traffic lights and roads to gather real-time data and adjust traffic flow, reducing congestion.

Energy efficiency through distributed sensors that optimize power usage in public lighting systems and smart buildings.

Challenges in implementing Ambient IoT

While Ambient IoT offers great benefits for smart cities, its large-scale adoption faces technical, operational, and regulatory challenges. A functional ecosystem requires addressing hurdles such as energy harvesting, interoperability, and Cyber Security. The Ambient IoT Alliance (AIoTA) supports standardization efforts to ensure secure adoption aligned with existing standards.

AIoTA aims to create an open, standards-based ecosystem for the efficient interconnection of battery-free Ambient IoT devices using technologies such as NB-IoT, Bluetooth, or 5G, considering:

Energy harvesting: Efficiently capturing and storing ambient energy is critical for the continuous operation of devices.

Interoperability: The variety of manufacturers and technologies requires clearly defined standards to ensure compatibility.

Security and privacy: Data generated by Ambient IoT devices raises concerns around Cyber Security and information protection.

■ Low-power devices may have limited capacity to implement robust security protocols, making them vulnerable to attacks. Additionally, the distributed nature of these systems increases the potential attack surface.
Conclusion

Ambient IoT is transforming the interaction between the physical and digital worlds by enabling self-powered devices to collect and transmit data in real time. Combined with AI, it gives rise to Invisible Ambient Intelligence (IAI), which opens up new possibilities for creating smart environments where process automation optimizes resource use and boosts efficiency.

Use cases in the supply chain and smart city management showcase the impact of Ambient IoT. Self-powered sensors ensure product traceability, monitor air quality, and regulate traffic without human intervention.

The future of Ambient IoT relies on overcoming key challenges: ensuring interoperability between devices from different vendors, optimizing energy harvesting, and safeguarding data security and privacy. The goal is to build a digital ecosystem where devices are autonomous, communicate efficiently, and protect user privacy.

Despite these challenges, initiatives like the Ambient IoT Alliance are promoting the standardization and adoption of solutions that merge low-power wireless connectivity, optimized electronics, and energy harvesting techniques. This convergence enables the development of more resilient and autonomous digital ecosystems, paving the way for a future where technology integrates invisibly into our surroundings, promoting sustainable development and a higher quality of life.
August 14, 2025
Cyber Security
Cyber Security for SMEs: Strategies, best practices, and solutions against cyberattacks
Small and medium-sized enterprises (SMEs) face an increasing threat from cyberattacks. According to ENISA, over 80% of surveyed SMEs stated that cybersecurity issues would have a serious negative impact on their business, with 57% believing they would likely go bankrupt or out of business. Many of these businesses lack the resources and budgets for robust cybersecurity measures, making them prime targets for cybercriminals. So, how can SMEs bolster their digital security and protect themselves from these attacks?

Recognizing the threat

SMEs often underestimate the threat of cyberattacks, relying too heavily on their perceived digital security. This underestimation exacerbates the issue, as a significant proportion of attacks specifically target SMEs, many of which end up out of business within months of a security incident. To avoid becoming another statistic, SMEs must take proactive measures to protect themselves. Cyber Security is not just a concern for large corporations; it is essential for businesses of every size and sector.

Key cyberthreats facing SMEs

The most common threats SMEs face include:

Ransomware: Malware that encrypts company data, demanding a ransom for its release. This can paralyze operations by locking access to critical information until the ransom is paid or alternative solutions are found.

Phishing: Attempts to obtain confidential information by posing as trusted entities, such as in CEO fraud schemes. Cybercriminals use fraudulent emails or messages to trick employees into revealing credentials or sensitive data.

Malware: Malicious software designed to infiltrate or damage IT systems. It can enter through insecure downloads, compromised websites, or suspicious email attachments. Once inside, malware can steal data, spy on activities, or corrupt files.

Social Engineering: Techniques to manipulate individuals into divulging confidential information.
Attackers exploit trust and a lack of cybersecurity awareness to access protected systems or sensitive data.

Supply Chain Attacks: Targeting suppliers to gain access to company systems. These attacks leverage trust between companies and their vendors, compromising security via third parties.

DDoS Attacks: Distributed Denial of Service (DDoS) attacks overwhelm a company's servers with massive traffic, causing systems to crash or operate extremely slowly. This can completely disrupt online operations, harm service availability, and damage reputations.

Security strategies for protecting SMEs

To effectively guard against these threats, SMEs should adopt a series of strategies and best practices.

1. Invest in Cyber Security

Although it might seem costly, cybersecurity investment is necessary. This includes acquiring up-to-date security software, hiring specialized personnel, and providing continuous training for employees.

Challenge: Limited budgets and resources are among the main reasons SMEs are vulnerable to cyberattacks.

2. Training and awareness

Employee cybersecurity training is critical. Staff should understand potential threats and learn to recognize and avoid them. Regular training programs can include phishing simulations and workshops on best practices for digital security.

Impact: Proper training significantly reduces the risk of errors, as human factors are often the weakest link in the security chain.

3. Implement basic security measures

SMEs, including micro-businesses and self-employed professionals, can enhance their digital security through simple yet effective measures:

Use strong passwords and update them regularly.

Enable two-factor authentication (2FA) for all accounts.

Keep all systems and software updated to protect against known vulnerabilities.

Perform regular backups of critical data and store them securely.

Develop and maintain an incident response plan to manage and mitigate damage in case of an attack.
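One of the measures above, two-factor authentication, is worth seeing in concrete terms. The one-time codes produced by most authenticator apps follow the TOTP standard (RFC 6238), which can be sketched using only the Python standard library. This is an illustration of how the codes are derived, not production code; real systems should use a maintained 2FA library and constant-time comparison:

```python
# Minimal RFC 6238 TOTP (the codes behind most 2FA authenticator apps),
# built only on the standard library, for illustration.
import base64
import hmac
import struct

def totp(secret_b32: str, unix_time: int, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", unix_time // step)  # 8-byte big-endian time counter
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, 59, digits=8))  # 94287082, matching the RFC's expected value
```

Because the code depends on a shared secret and the current time window, a stolen password alone is not enough to log in, which is exactly why 2FA blunts phishing and credential theft.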
Benefit: Even a few basic security measures can significantly reduce the risk of a cyberattack.

Cyber Security services and solutions for SMEs

For SMEs seeking comprehensive and cost-effective protection, managed security solutions offer robust defenses against a wide range of cyberthreats without significant investment in infrastructure or specialized personnel. Some tailored solutions include:

Secure business platform: A centralized platform enabling companies to manage their Cyber Security needs, offering protection against malware, data breaches, and continuous expert support from dedicated Security Operations Centers (SOC).

Email protection services: These services, such as our Clean Email Business solution, filter malicious and unwanted emails, reducing the risk of phishing and email-based attacks while enhancing the security of business communications.

Advanced connection security: Tools that safeguard employees from viruses, malware, phishing, and other threats, as well as provide web content filtering to control access to harmful sites.

Cyberattacks are increasingly numerous, complex, sophisticated, and fast-moving. The high prevalence of cyberattacks targeting Spanish SMEs underscores the urgent need to enhance cybersecurity protection and training. By investing in security measures, educating employees, and adopting best practices, SMEs can strengthen their defenses against cyberattacks and ensure operational continuity in an environment where threats continue to evolve rapidly.

■ We at Telefónica Tech offer the best professionals, capabilities, and comprehensive specialized support to SMEs, so that any company, regardless of size and sector, can effectively protect itself against cyber threats.
August 7, 2025
Telefónica Tech
AI & Data
The human factor in the age of AI: digital humanities and computational linguists
AI is commonly associated with computer science and engineering. However, the humanities also play a crucial role in shaping its development and ethical application. This humanistic perspective is essential to ensure that AI integration into daily life is carried out responsibly and in a way that benefits individuals and society as a whole.

Disciplines such as philosophy, linguistics, sociology, and communication help us understand human behavior, ethical values, and social structures. They contextualize AI's impact and guide how it should be managed. The humanities offer tools for critical judgment and AI system enhancement. Their insights are therefore vital to fostering a reflective and holistic approach to AI, whether it's assessing algorithmic bias, understanding how machines interpret and generate human language, or bringing these technologies closer to society.

We've compiled a selection of humanities-related professions that have particularly strong connections to AI, and explored how their knowledge is helping to shape the convergence between technology and humanity.

Philosophy

Philosophy tackles fundamental questions about consciousness, ethics, morality, epistemology (how we know), ontology (what exists), and values through critical reasoning.

Relationship with AI

AI Ethics: Philosophers can help define ethical boundaries in AI development and use, addressing dilemmas such as algorithmic bias, privacy, machine autonomy, and the social impact of labor displacement.

Consciousness and Intelligence: They explore the nature of consciousness and intelligence, key to understanding whether AI might one day be truly sentient or whether it should always be regarded as a sophisticated tool.

Epistemology and Machine Learning: Philosophy contributes to evaluating how machine learning models "know" the world and to what extent we can trust their outputs.
—Example: Explainable AI (XAI) reveals how AI models work and makes their decisions more transparent and accessible for users, fostering trust and supporting the development of ethical and governance frameworks for AI.

Linguistics

The scientific study of human language, covering its structure, use, acquisition, and evolution at all levels (phonetics, phonology, morphology, syntax, semantics, and pragmatics), is key to advancing natural language models.

Relationship with AI

Natural Language Processing (NLP): Linguistic expertise is essential for improving the accuracy and capability of NLP algorithms, which enable AI to interpret, generate, and interact with human language across applications like machine translation, sentiment analysis, chatbots, and content generation.

Contextual Understanding: Linguists help AI systems grasp how words and phrases change with context, a core requirement for natural human-machine interaction.

User Interface Design: Their insights contribute to designing language-based user interfaces that feel intuitive and accessible.

—Example: Enhancing NLP systems and developing algorithms capable of comprehending human language in all its complexity leads to more natural interactions between people and machines.

Sociology

The scientific study of society, including its structures, institutions, relationships, and transformations, helps us understand social behavior, collective dynamics, and how technologies like AI shape human life.

Relationship with AI

Social Impact of AI: Sociology examines how AI affects employment, inequality, privacy, democracy, and other societal factors.

Algorithmic Bias: It uncovers how social biases, including gender bias, are embedded in training data, and offers solutions for mitigating such distortions.

Public Understanding: Studying public attitudes and beliefs about AI is crucial for fostering social acceptance and responsible adoption.
—Example: Investigating the social impact of AI and its effects across different sectors or demographics generates actionable insights to reduce inequality and ensure a fair distribution of AI's benefits.

History

The study of the human past, including events, ideas, cultures, and social transformations, offers a temporal lens for examining the evolution of technologies, systems, and societies.

Relationship with AI

Lessons from the Past: History reveals how previous technological shifts were handled and the societal consequences they brought, offering valuable guidance for today's AI landscape.

Cultural Context: Understanding the cultural context in which AI is developed helps avoid imposing dominant values on other communities and encourages adaptation to local needs.

—Example: Studying historical technological transitions, such as electrification and the inequality it exposed, can yield analogies that help us anticipate AI's broader effects in the present.

Psychology

The scientific study of the mind, behavior, and mental processes provides crucial insights into cognition, emotion, and human interaction. Psychology also explores how we perceive and adapt to disruptive technologies like AI.

Relationship with AI

Human-AI Interaction: Psychology informs the design of AI systems that are accessible, intuitive, and responsive to individual needs and preferences.

Cognitive and Emotional Impact: It investigates how AI influences users' mental well-being, behavior, and cognitive abilities, and how negative effects can be mitigated.

Trust and Adoption: Behavioral psychology examines how to build trust in AI systems, a cornerstone of widespread adoption.

—Example: Responsible, user-centered design enables the creation of emotionally intelligent chatbots and virtual assistants that account for cultural diversity, foster empathy, reduce the Eliza effect, and lower user frustration.
Communication

In the AI context, communication studies how messages are created, transmitted, and interpreted, and how media shapes public understanding, making it vital to educate, inform, and promote awareness around emerging technologies.

Relationship with AI

Communicating with AI: Understanding how to effectively communicate with AI, and how it communicates with us, is crucial for improving the user experience and breaking down technological barriers.

Misinformation: Communication research examines AI's role in spreading false information and devises strategies to combat it.

AI Narratives: Communication experts help craft narratives that support ethical and equitable uses of AI. They also contribute to developing AI models capable of creating useful and creative content.

—Example: Educating the public about how AI works, and what it means, requires science communicators and media professionals who can explain complex information clearly and accessibly, and lead initiatives that raise awareness of AI's uses and challenges.

Bridging the humanities and technology gap

There are numerous resources and strategies to help humanities professionals transition into tech and AI. One high-profile option is 42 Fundación Telefónica, a free programming school network recognized as one of the world's ten most innovative universities by the WURI ranking. At campuses in Madrid, Urduliz Bizkaia, Barcelona, and Málaga, students gain highly sought-after digital and cross-disciplinary skills through a gamified, collaborative, 100% hands-on learning model, completely free of charge. No prior technical knowledge is required, making it an ideal option for humanities graduates.

It's also useful to develop foundational skills in technologies such as Python, R, SQL, or data analysis methodologies through online courses and tutorials. When paired with the critical and ethical mindset of the humanities, even basic technical literacy can make a significant difference.
Generative AI opens up new opportunities for continuous, self-directed learning by offering flexible environments, on-demand content, detailed explanations, and tailored feedback. Joining interdisciplinary groups or communities focused on AI and its social implications is another effective way to learn, collaborate with researchers or companies, and contribute valuable perspectives.

■ Technical skills such as Python and R programming and the design of natural language processing algorithms are increasingly valued. Industry reports also emphasize the importance of soft skills like teamwork and organizational ability, as well as the increasing collaboration between academia and industry to align training with labor market needs.

■ Complementing technical skills, the World Economic Forum's Future of Jobs Report 2025 underscores the growing global relevance of creative thinking, resilience, flexibility, agility, curiosity, and continuous learning.

Conclusion

AI is not the sole domain of the technical sciences. The humanities offer a vital lens for examining AI's ethical, social, cultural, and human dimensions. Their role is to ensure that AI is used in beneficial and responsible ways, and that the technological infrastructure we build reflects our shared values, promoting justice, inclusion, and societal well-being.

To make this possible, we need updated and forward-thinking education policies that encourage multidisciplinary training, integrating science, technology, and the humanities into academic programs. These efforts should include collaborative projects and problem-based learning to prepare future generations for AI's complex dilemmas. Governments, companies, and academic institutions have a responsibility to drive this transformation by creating learning environments that connect humanistic critical thinking with technological solutions.
Multidisciplinary approaches are essential to building AI systems that address urgent societal challenges, from inequality to sustainable development.

The demand for humanities professionals in AI continues to rise as businesses, organizations, and policymakers increasingly recognize the need for ethical, social, cultural, and human-centered frameworks for technological progress. These professionals bring the insight required to confront today's challenges, such as inclusion and bias mitigation, and play a key role in crafting the governance structures needed to guide AI toward an equitable and sustainable future grounded in democratic values and fundamental human rights.

Achieving this will require a collective effort to redefine how we conceive and develop technology, ensuring it answers global needs with responsibility and fairness.
April 17, 2025
AI & Data
A salesman, a Chinese postman, and AI to optimize delivery routes
Route optimization in logistics has historically been a complex challenge due to the vast number of variables involved in the process. Factors such as traffic, delivery time windows, the distance between drop-off points, and vehicle capacity have all made the search for efficient solutions considerably difficult. Traditionally, these challenges were tackled first through heuristic approaches and business rules built from experience, and later with specialized software grounded in mathematical optimization algorithms.

The complexity of route planning has been mathematically modeled through classic problems such as the Travelling Salesman Problem (TSP) and the Chinese Postman Problem (CPP).

Mathematical models in logistics

The Travelling Salesman Problem and the Chinese Postman Problem emerged from graph theory and combinatorial optimization in the mid-20th century. Both have since served as foundational models for the development of advanced algorithms applied to logistics and distribution. These problems describe common scenarios in transportation, mobility, and logistics planning.

The Travelling Salesman Problem seeks the most efficient route to visit a set of points and return to the starting location, ensuring the path is optimal in terms of time, distance, and cost.

—It's used in planning and optimizing deliveries across dispersed locations, including courier services, transport routes, or even manufacturing for production planning.

The Chinese Postman Problem focuses on efficiently covering every street or segment of a network while minimizing total travel distance to save time and operational costs.

—It's applied in sectors where uniform coverage is essential, such as postal delivery, waste collection, infrastructure maintenance, urban cleaning fleet management, mail distribution, or surveillance and patrolling.
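To make the Travelling Salesman Problem concrete, here is a tiny instance in Python comparing the exact brute-force answer with a greedy nearest-neighbour heuristic. The distance matrix is invented purely for illustration:

```python
# Tiny Travelling Salesman instance: exact answer by brute force versus a
# nearest-neighbour heuristic. The distances are illustrative, not real routes.
from itertools import permutations

# symmetric distance matrix between 5 drop-off points (0 is the depot)
D = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Total length of a closed tour starting and ending at the depot."""
    return sum(D[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force(n):
    """Optimal tour by checking every permutation: O(n!), viable only for tiny n."""
    best = min(permutations(range(1, n)), key=lambda p: tour_length([0, *p]))
    return [0, *best]

def nearest_neighbour(n):
    """Greedy heuristic: always drive to the closest unvisited point."""
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        nxt = min(unvisited, key=lambda c: D[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(tour_length(brute_force(5)), tour_length(nearest_neighbour(5)))  # 26 28
```

On this instance the greedy tour (length 28) is close to, but not equal to, the optimum (26). That gap previews the trade-off at the heart of the article: exact search explodes factorially with the number of stops, while approximate methods stay fast at the cost of strict optimality.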
With the rise of e-commerce and the need for faster, more efficient, and more sustainable logistics systems, AI has become a key tool for tackling both mathematical problems.

Advanced AI solutions for logistics optimization

While traditional algorithms can produce valid solutions to both TSP and CPP, their application at scale often becomes inefficient. The complexity of these problems tends to grow rapidly, often exponentially, as the number of variables increases. This is where AI introduces advanced approaches that yield far more practical approximations in a fraction of the time:

Deep learning: By leveraging neural networks trained on large volumes of historical data, AI identifies patterns and improves decision-making, effectively learning and accumulating the practical knowledge of experienced couriers.

Reinforcement learning: This allows AI systems to explore multiple combinations and fine-tune their strategies through trial and error, improving route efficiency with each iteration.

By using these advanced techniques, AI can identify optimal routes quickly and with greater flexibility, adapting in real time to dynamic variables like traffic conditions, weather changes, or fluctuating demand. Integrating AI into logistics enhances operational efficiency and redefines distribution planning and execution.

AI's impact on logistics beyond deliveries

Beyond route optimization and better delivery planning, AI transforms logistics across multiple layers. Its influence extends to demand forecasting, predictive maintenance, and process automation, enabling a logistics sector that is more agile, accurate, and sustainable:

Demand forecasting: AI analyzes sales data, events, weather, and mobility trends to anticipate demand and allocate storage and delivery resources accordingly. It synthesizes data from a wide range of sources, including real-time population movement, tourism, and events, offering more precise and responsive forecasts.
Warehouse optimization: Algorithms organize product layouts more effectively, reducing processing times and boosting operational throughput.

Predictive maintenance: IoT sensors paired with AI detect early signs of equipment failure or wear in vehicles and machines, minimizing costs, unplanned downtime, and disruptions.

Automation and computer vision: Technologies like smart glasses and camera systems streamline goods handling and improve warehouse safety protocols.

Sustainability: Reducing unnecessary routes and optimizing energy usage lowers the logistics sector's environmental impact. Given that logistics accounts for 30% of the EU's final energy consumption, AI plays a critical role in minimizing travel and maximizing resource efficiency.

Success Story: Smart Steps in Infra Brasil

In logistics, understanding mobility is crucial to streamlining distribution and improving operational performance. Analyzing the movement of people, vehicles, and goods enables companies to quickly adapt to changing conditions and anticipate market demands.

At Telefónica Tech, we worked with Brazilian logistics firm Infra SA to optimize their transport network using our Smart Steps platform. By analyzing mobility data, Infra SA reduced operational costs and CO₂ emissions, significantly improving logistics efficiency and sustainability. This case illustrates how integrating AI and Big Data solutions can meaningfully transform logistics management, unlocking economic, operational, and environmental benefits.

Conclusion

AI has become a powerful tool for solving complex logistical problems. While it may not always find mathematically perfect solutions to the Travelling Salesman or Chinese Postman problems, it delivers efficient and adaptable routing strategies. Its influence reaches far beyond route planning, including demand prediction, warehouse automation, and the sector's overall sustainability.
The future of logistics will be shaped by AI's continued evolution, with increasingly sophisticated models capable of integrating real-time data and optimizing ever-more complex transport networks. In a world where productivity, efficiency, and sustainability are no longer optional but imperative, AI is already transforming how goods move across the supply chain—from manufacturing and inter-process transitions to last-mile delivery.
April 10, 2025
Cloud
AI & Data
Edge Computing and the future of Distributed AI
Our experts in infrastructure and digital technologies recently shared a comprehensive vision of Edge Computing and its role in transforming AI during the presentation 'Beyond the cloud: edge computing and the future of distributed AI'. Their knowledge and practical experience deploying these technologies enable us to explore this paradigm shift in computing.

The speakers highlighted that Edge Computing and distributed AI are redefining not only how businesses and end users access advanced AI capabilities but also how this technology affects privacy, equity, and access to innovation. This shift promises increased efficiency and data control, yet also raises technical, ethical, and social considerations to address. The evolution of AI has moved data processing from the cloud toward decentralized infrastructures.

■ Edge Computing enables data processing closer to the point of generation instead of relying on a centralized data center, using distributed computing devices and nodes at the network's edge, known as 'edge nodes', which may include routers, gateways, local servers, or IoT devices. This reduces latency and enhances system efficiency.

Edge Computing infrastructure as a fundamental pillar

Deploying distributed nodes within a telecommunications infrastructure brings computational capability closer to users. In Spain, the development of 5G and fiber optic networks, with coverage exceeding 92% of the population, has been essential in ensuring the bandwidth and latency required for these solutions. Initiatives like the European project IPCEI-CIS facilitate the deployment of these distributed nodes and their interoperability at the European level.

In addition to connectivity, the development of decentralized data centers provides the necessary resources for local storage, computation, and execution of AI models. This reduces dependency on large centralized data centers and improves businesses' operational autonomy.
Artificial Intelligence at the Edge (Edge AI): real-time inference

AI has evolved through various stages, from traditional machine learning to deep learning and today's Generative AI. At this juncture, it is essential to distinguish between training workloads, which require specialized infrastructure in large data centers, and inference tasks, which can be executed efficiently on Edge AI nodes.

During the presentation 'Beyond the Cloud: Edge Computing and the Future of Distributed AI' at MWC 2025.

With Edge Computing, executing AI models on distributed nodes allows critical applications to process data in real time without sending it to the cloud. A key aspect of this approach is efficient resource management through the Operator Platform, an orchestration system enabling three core functionalities:

Smart Federation enables networks from different operators, or segments within a single operator, to federate and interconnect. Nodes can thus be shared and used by different owners or operators, expanding the Edge network's reach and capabilities beyond traditional boundaries.

Smart Allocation facilitates deploying applications based on specific criteria such as latency, computational resources, or data sovereignty requirements. Applications can be distributed dynamically according to customer demand or deployed statically depending on use-case needs, optimizing the use of available resources.

Smart Discovery allows customers to request and connect to the most suitable Edge node for their specific requirements. For example, a mobile client can automatically connect to the nearest Edge node that meets its service requirements, ensuring the best experience.

These capabilities, centrally managed through the Operator Platform, transform a network of independent nodes into an intelligent, cohesive system that adapts dynamically to user and application demands.
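As a rough illustration of the Smart Discovery idea, node selection might look like the following sketch. The node attributes and selection criteria are illustrative assumptions, not the actual Operator Platform API:

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    latency_ms: float   # measured latency from the client
    free_vcpus: int     # available compute on the node
    region: str         # for data-sovereignty constraints

def discover_node(nodes, max_latency_ms, min_vcpus, required_region=None):
    """Return the lowest-latency node meeting the client's requirements,
    or None if no node qualifies."""
    candidates = [
        n for n in nodes
        if n.latency_ms <= max_latency_ms
        and n.free_vcpus >= min_vcpus
        and (required_region is None or n.region == required_region)
    ]
    return min(candidates, key=lambda n: n.latency_ms) if candidates else None

# Hypothetical node inventory
nodes = [
    EdgeNode("edge-mad-1", 8.0, 4, "ES"),
    EdgeNode("edge-bcn-1", 5.0, 1, "ES"),
    EdgeNode("edge-par-1", 22.0, 16, "FR"),
]
best = discover_node(nodes, max_latency_ms=20, min_vcpus=2, required_region="ES")
```

In this example the lowest-latency Spanish node with enough free compute is chosen; a real platform would add authentication, federation across operators, and live telemetry.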
■ The concept of the Cloud Continuum complements these functionalities, seamlessly integrating cloud and Edge nodes and enabling developers to deploy applications without worrying about the physical location of resources.

Data sovereignty and security in distributed AI

A driving factor behind the rise of decentralized AI is the need to guarantee data sovereignty and protect user privacy. Companies and governments seek solutions that ensure sensitive information isn't processed outside their jurisdiction, mitigating legal and regulatory risks. The concept of Sovereign AI is particularly relevant in Europe, where data storage location and operating jurisdiction matter significantly. Deploying AI at the edge enables organizations to control data flow, avoiding exposure to external infrastructures. This ensures data remains within the desired jurisdiction and complies with regulations like GDPR.

Applications of Edge AI

The impact of distributed AI is materializing in three major application areas:

Computer Vision: In industrial environments, computer vision is used for quality control and process optimization in factories, leveraging Edge Computing capabilities to process images in real time. Digital twins and industrial process monitoring are prominent use cases requiring local processing and low latency.

Local computer vision for inventory monitoring in retail, protecting customer privacy, is an Edge AI use case. Photo: Sony.

Conversational Agents: Chatbots and virtual assistants based on Retrieval-Augmented Generation (RAG) provide internal technical support within companies by leveraging existing corporate information. These agents access local databases and internal documentation to deliver accurate, contextual responses while preserving the privacy of sensitive information.

Big Data and Deep Learning: Advanced applications such as autonomous driving and drone control require real-time data processing and instantaneous decision-making.
Edge Computing processes large data volumes near the source of the data, reducing latency and enhancing operational safety. Moreover, solutions are being developed that enable companies to deploy generative AI agents with one-click integration, facilitating connections to corporate data sources like Salesforce or SharePoint while maintaining data privacy and sovereignty.

Technical challenges of Edge AI

While Edge Computing offers significant advantages for distributed AI, implementing it presents technical challenges that require effective solutions.

Resource limitations are common in Edge devices, which have less computing power, memory, and energy than cloud servers. Developing more efficient and lightweight AI models is crucial for Edge deployment without compromising accuracy and performance. Techniques like quantization, pruning, and knowledge distillation can reduce AI model size and complexity for Edge deployment.

Model consistency is essential to maintain coherence across Edge devices, involving version management, update synchronization, and conflict resolution when models diverge. Techniques such as federated learning and transfer learning can maintain model consistency and ensure uniform performance across the Edge network.

Security and privacy are critical, as data processing and storage occur on distributed devices vulnerable to attacks. Robust security measures, including data encryption, device authentication, and network segmentation, must protect data confidentiality and integrity. Privacy-preserving techniques, such as federated learning and differential privacy, are also essential to protect user privacy and comply with data protection regulations.

Managing and orchestrating AI applications in distributed Edge environments can be complex. Centralized tools and platforms like the Operator Platform are needed to monitor Edge devices, deploy and update AI models, and optimize performance.
Workload orchestration for AI on the Edge must consider resources, latency, and security requirements. Interoperability among Edge platforms and devices is essential to ensure AI application portability and flexibility. Adopting open standards and common communication protocols facilitates component integration and collaboration among providers and developers.

Future perspectives and social responsibility

The future of Edge Computing and distributed AI depends on technological advancements and on addressing social and ethical challenges. AI-driven automation at the Edge can impact employment, necessitating reskilling and training programs for workers. We must ensure this technology is accessible to everyone, regardless of geographical location or socioeconomic level, bridging the digital divide. Transparency and accountability in automated decision-making will be crucial. Auditing and tracking mechanisms for decisions made by AI models at the Edge must evolve with the technology, establishing clear responsibilities and ensuring progress benefits society as a whole.

Conclusion

Edge Computing and AI are reshaping how businesses implement and utilize technology. Supported by robust telecommunications infrastructures and intelligent management of distributed resources, this evolution promises increased efficiency and technical capabilities while ensuring data sovereignty and user privacy. The success of this transition depends on overcoming technical challenges and addressing ethical and social implications. Collaboration among developers, businesses, and regulators is essential to ensure technology evolves responsibly, protecting privacy, promoting equity, and facilitating responsible innovation.
Edge Computing and distributed AI aren't merely technological trends; they signify a fundamental shift in our relationship with digital technology. By bringing AI closer to the points where data is generated and decisions are made, they promise to transform industries, services, and communities with accessible, secure, and privacy-respecting AI capabilities. ■ You can access the presentation Beyond the cloud: edge computing and the future of distributed AI here →
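As a technical footnote to the model-compression techniques mentioned among the challenges above, here is a minimal sketch of the quantization idea: mapping float weights to small integers so a model occupies less memory on an Edge device. The weight values are arbitrary examples and the scheme is a simple symmetric 8-bit scaling, not any particular framework's implementation:

```python
def quantize(weights, bits=8):
    """Map float weights to signed integers: w ≈ q * scale."""
    qmax = 2 ** (bits - 1) - 1              # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the integers."""
    return [q * scale for q in q_weights]

weights = [0.82, -0.41, 0.05, -1.27]        # illustrative float weights
q, scale = quantize(weights)
restored = dequantize(q, scale)
# each restored weight lies within half a quantization step of the original
```

Storing `q` as 8-bit integers instead of 32-bit floats cuts memory roughly fourfold; production toolchains add calibration, per-channel scales, and quantization-aware training to preserve accuracy.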
March 24, 2025
Cyber Security
Cyber Security automation with AI to anticipate and neutralize threats
Cyberattacks are a matter of when, not if. The rise of cyber threats has turned cybersecurity into a critical challenge for businesses and organizations, driven by digitalization, the expansion of the attack surface, the increasing sophistication of attackers, and the proliferation of hacktivism and cybercrime-as-a-service. Every day, millions of intrusion attempts, phishing campaigns, and targeted attacks threaten the integrity of corporate and governmental data and operations. In this context, conventional defense strategies are insufficient, and the cybersecurity talent shortage limits response capabilities. This makes it essential to automate cybersecurity with advanced technologies to protect digital assets and assist in threat detection, analysis, and mitigation.

Automation involves deploying intelligent systems that identify suspicious patterns in real time, block attacks before or as they happen, and efficiently manage incidents using Artificial Intelligence (AI) and machine learning. Through automation, we can reduce response times, minimize human error, and increase efficiency in protecting digital assets.

Benefits of Cyber Security automation

Cybersecurity evolves alongside emerging threats. Automated cybersecurity arises from the need to respond swiftly to vulnerabilities and sophisticated attacks. Automation enhances security and allows human teams to focus on strategic tasks. Its key benefits include:

Rapid threat detection and response, reducing the impact of attacks such as ransomware and phishing.

Minimization of human errors attackers could exploit, by eliminating repetitive and complex manual tasks.

Efficient management of large volumes of security data and alerts, enabling security teams to focus on critical threats rather than routine activities.

Cost savings in monitoring and incident response, reducing the need for constant oversight by trained personnel.
Implementation of proactive AI-driven mechanisms to simulate complex cyberattacks and enhance defensive strategies based on predictive scenarios.

Next-Generation SOCs and their role in automation

Security Operations Centers (SOC) have long been the first line of defense for many organizations. However, as cyber threats grow in sophistication, next-generation SOCs must evolve towards more automated models. Unlike traditional SOCs, which rely heavily on manual analysis, modern SOCs leverage automation, AI, and machine learning to detect and respond to threats proactively. In these advanced SOCs, incident responses can be executed in real time with less human intervention, improving detection accuracy and reducing response times.

Next-generation SOCs integrate prediction, automation, and orchestration capabilities, enabling seamless interaction between diverse security tools. As the number of cybersecurity solutions increases, so does the management burden. Without advanced orchestration and automation mechanisms, security operations can become costly and inefficient, making real-time incident response increasingly difficult.

Despite automation, cybersecurity professionals remain crucial in overseeing and refining these systems, ensuring strategic implementation. This allows experts to focus on developing advanced security strategies and mitigating upcoming threats. As the complexity of security solutions grows, orchestration and automation mechanisms ensure efficiency and operational effectiveness.

Key tools for cybersecurity automation

Some of the essential tools for cybersecurity automation include:

SIEM (Security Information and Event Management): Collects and analyzes security data in real time, providing visibility and alerts on potential threats.

SOAR (Security Orchestration, Automation, and Response): Automates incident response by coordinating actions across security systems.
XDR (Extended Detection and Response): Extends SIEM and SOAR capabilities, integrating multiple security layers (network, endpoint, cloud, applications) for more effective detection and response.

AI and Machine Learning: Improve threat detection and enable predictive automated responses. AI identifies behavioral anomalies and detects malicious patterns in network traffic while automating alert classification and prioritization.

Deception technology: Creates decoy environments (honeypots) to attract and detect attackers, automating pattern recognition and malicious behavior identification. AI can analyze these honeypots to uncover attackers' tactics, techniques, and procedures (TTPs).

■ Honeypots are essential for early threat detection. They divert attackers' attention and allow security teams to study their tactics before critical systems are compromised.

Integrating AI capabilities into modern SOCs

In next-generation SOCs, AI is transforming cybersecurity by enhancing efficiency, reducing workload, and lowering entry barriers for analysts, accelerating threat response. AI enables:

Automation of repetitive and critical processes, reducing manual workload and freeing analysts' time to focus on complex security challenges.

Breaking down silos between security tools, integrating signals from multiple sources (endpoints, network, cloud, identity systems) into a unified data model.

Real-time analysis of vast datasets, identifying patterns and anomalies that human analysts might overlook. This accelerates detection and response, minimizing dwell time: the period an attacker remains undetected within a network.

Correlation of isolated security signals, helping analysts determine whether seemingly unrelated events form part of a coordinated attack. AI prioritizes relevant threats and reduces false positives.
AI-assisted security queries via natural language processing (NLP), allowing analysts to interact with security models without needing expertise in query languages. This reduces the learning curve and improves decision-making efficiency.

Lowering the entry barrier for new security analysts, providing AI-driven recommendations on incident response and real-time threat context.

Automated detection of suspicious access attempts, compromised credentials, and identity misuse.

■ AI in next-generation SOCs relies on large-scale data ingestion and processing. To reduce bias and enhance accuracy, AI models need up-to-date and diverse data to identify new threats.

Strategies for implementing Cyber Security automation

To maximize the effectiveness of automated solutions, they must be seamlessly integrated into existing security processes and teams. Some key steps include:

Assessing and prioritizing critical areas for automation, such as early threat detection and vulnerability management.

Developing modular automation scripts to create scalable processes that adapt to evolving threats.

Ensuring interoperability between automated tools and legacy systems to avoid compatibility issues.

Continuous monitoring and fine-tuning with threat intelligence to improve detection accuracy and minimize irrelevant alerts while ensuring critical threats are not overlooked.

Adopting advanced models like Zero Trust, combining behavioral analysis and automation.

Training security teams to interpret alerts generated by automated systems, preventing alert fatigue and reducing false positives.

"Automation and AI are powerful tools against cyber threats, but they should never replace human oversight and judgment." — Bruce Schneier

Managed Security Services (MSSP)

Given the growing complexity of cybersecurity, many organizations lack the in-house resources to manage security effectively.
As a result, Managed Security Services Providers (MSSP) have become a key option for enhancing protection without large infrastructure investments. MSSPs offer solutions such as:

SIEM, SOAR, and XDR management

Automated incident response

Security audits and vulnerability assessments

24/7 cybersecurity monitoring by experts

These services help businesses reduce risks, improve operational efficiency, and ensure compliance without compromising incident response capabilities.

Conclusion

Cybersecurity automation is essential for safeguarding digital assets against increasingly persistent and sophisticated threats. Automated tools and strategies enhance incident response speed and strengthen proactive defense measures. By integrating automation, AI, and predictive analytics, organizations can build advanced security models that anticipate and neutralize cyber threats before they become real dangers. However, this also introduces ethical and operational challenges that must be managed through a balance of technology and human oversight.

In this context, next-generation SOCs are crucial for integrating advanced automation and AI technologies, enabling more effective security monitoring and response. The combination of these innovations with human expertise ensures a more resilient, adaptive cybersecurity model.
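As a technical footnote, the SOAR-style orchestration described earlier can be sketched as a playbook dispatcher that maps alert types to automated actions and escalates anything uncertain to a human analyst. The alert types, confidence threshold, and response actions are illustrative assumptions, not a real SOAR product's API:

```python
# Map alert types to ordered automated response actions, as a SOAR
# playbook might; real playbooks would call SIEM/EDR/firewall APIs.
PLAYBOOKS = {
    "phishing":    ["quarantine_email", "notify_user"],
    "ransomware":  ["isolate_host", "snapshot_disk", "page_oncall"],
    "brute_force": ["block_ip", "force_password_reset"],
}

def triage(alert):
    """Return the response actions for an alert; unknown or
    low-confidence alerts are escalated to a human analyst,
    keeping people in the loop as the article recommends."""
    if alert.get("confidence", 0.0) < 0.8 or alert["type"] not in PLAYBOOKS:
        return ["escalate_to_analyst"]
    return PLAYBOOKS[alert["type"]]

actions = triage({"type": "ransomware", "confidence": 0.95})
```

The key design choice mirrors the article's point: automation handles the high-confidence, well-understood cases, while ambiguous signals go to an analyst rather than triggering a blind automated response.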
March 17, 2025
AI & Data
AI Agents and their impact on business automation
What is an AI Agent?

An AI Agent is an application designed to achieve specific objectives by observing its environment and proactively interacting with it, using specialized tools such as API interfaces, language models, or information processing modules. These agents can receive general instructions and autonomously determine the most effective strategies to fulfill their purpose, combining learning, planning, and execution in a continuous improvement cycle. Their ability to operate autonomously, without constant human intervention, enables them to make decisions even without explicit instructions. Additionally, they can plan and execute tasks independently, leveraging real-time data and continuously optimizing processes.

—For example, in the banking sector, an AI agent can integrate with a risk management platform to analyze a client's financial transactions in real time, detect potential fraud, and take or recommend preventive measures. This enhances security, reduces financial fraud, and streamlines critical decision-making.

■ AI Agents vs. Agentic AI. While AI Agents execute autonomous tasks through language models and specialized tools, Agentic AI is a broader approach aimed at developing AI systems capable of reasoning, planning, and acting autonomously across diverse contexts. This approach encompasses both individual agents and collaborative, multi-agent architectures.

AI Agents represent a significant advancement in how businesses approach automation and information management.

Components of an AI Agent

An AI agent is built on three essential pillars that form its cognitive architecture:

Language Model (LLM): Functions as the agent's "brain," responsible for processing information and making decisions. It can be a single model or a combination of specialized models using reasoning frameworks such as:

ReAct (Reasoning + Acting): Combines reasoning and action to interact iteratively with the environment.
Chain-of-Thought: Allows the model to break down complex problems into a logical sequence of steps.

Tree-of-Thoughts: Explores multiple reasoning paths in parallel, evaluating different solutions before making a decision.

Interaction Tools: Enable the agent to connect with the external world, whether through web APIs, databases, or enterprise systems. These tools significantly expand the base model's capabilities, allowing integration with extensions, user-defined functions, and data stores that facilitate access to up-to-date and relevant information.

Orchestration Layer: Manages the flow of information between the model and the tools, ensuring a continuous cycle of reasoning, planning, and action. This layer also handles memory management and learning from experience, improving complex decision-making by aggregating results from multiple sources.

Frameworks: Various open-source and standardized frameworks, such as LangChain, LlamaIndex, and CrewAI, provide tools for workflow management, language model interaction, and tool integration.

Evaluation: The interaction between a planning agent and an evaluation agent (which can be the same) is essential for refining response quality. An evaluator-optimizer workflow enables one LLM to generate a response, another to evaluate it, and the feedback to drive improvement. Continuous evaluation is critical to agent performance.

Fallback: Agents must have robust mechanisms to handle queries beyond their scope or for which tools are unavailable. Ensuring that agents "know when they don't know" is essential to prevent false or hallucinated responses and maintain reliability.

An AI Agent combines acquired knowledge, logical reasoning, and access to external information and tools to autonomously execute tasks.

Types of AI Agents

AI-based systems fall into two main categories:

Workflows: Structured systems with predefined code paths, ideal for repetitive and well-defined tasks requiring high consistency.
These are efficient in environments where operational variables are stable and predictable.

Autonomous Agents: Dynamic systems where language models manage their own processes, adapting in real time to changing conditions. These are ideal for complex environments requiring flexibility and responsiveness in dynamic and unpredictable data-driven contexts.

■ Thanks to LLMs, with a well-crafted prompt you can instruct an agent in natural language, simply describing what you expect from it. This eliminates the need to program an extensive decision tree.

Use cases

AI Agents are transforming various business sectors through practical applications such as:

Customer Service: Integration with intelligent assistants or chatbots that access customer histories, handle complex requests, and automate processes like refunds or ticket updates, improving operational efficiency and customer satisfaction.

Software Development: Agents capable of writing, testing, and debugging code autonomously, managing multiple repositories, and optimizing development cycles through continuous error analysis and automated solutions.

Information retrieval and analysis: Agents that gather data from multiple sources, analyze market trends, and generate personalized executive reports, facilitating data-driven decision-making.

Financial task automation: Processing transactions, automatic audits, and financial report generation, reducing human errors and enhancing efficiency in financial data management.

Recommendation systems: Personalization of commercial offers based on customer behavior analysis and purchase history, using advanced algorithms to predict preferences and optimize marketing strategies.

■ AI Agents can integrate with Business Applications to optimize finance, HR, supply chain, and customer experience management. These applications provide a structured environment where agents can automate tasks and enhance decision-making with real-time data.
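The ReAct pattern described in the components section can be sketched as a minimal loop in which the model alternates reasoning with tool calls until it can answer. The model is stubbed here for illustration, and the tool registry and decision logic are assumptions for the example, not any framework's API:

```python
# Registry of tools the agent may call; here a single safe calculator.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def stub_model(question, observations):
    """Stand-in for an LLM: decides the next step from what it has seen.
    A real model would produce these decisions from the prompt."""
    if not observations:
        # Thought: the question needs arithmetic -> Action: call a tool
        return ("act", "calculator", "19 * 3")
    # Observation received -> produce the final answer
    return ("finish", f"The answer is {observations[-1]}.")

def react(question, model, max_steps=5):
    """Reason -> Act -> Observe loop, bounded to avoid infinite cycles."""
    observations = []
    for _ in range(max_steps):
        decision = model(question, observations)
        if decision[0] == "finish":
            return decision[1]
        _, tool, tool_input = decision
        observations.append(TOOLS[tool](tool_input))   # Observation step
    return "Gave up."

answer = react("What is 19 times 3?", stub_model)
```

The orchestration layer the article describes plays the role of the `react` function here: it routes the model's decisions to tools and feeds observations back until the task completes.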
The ability of AI Agents to learn, adapt, and act autonomously enhances operational efficiency and unlocks new opportunities for business innovation.

AI Agent tools

Tools are the mechanisms that enable agents to transcend their inherent knowledge limitations and interact with the external world, such as:

API Extensions: Facilitate connections with external services, from e-commerce platforms to ERP and Business Applications, allowing agents to interact efficiently with third-party applications.

Custom functions: Code modules designed for specific tasks, such as data formatting or information validation, executed on the client side for increased control over data flow.

Data Stores: Enable access to and manipulation of large volumes of structured and unstructured data, facilitating retrieval-augmented generation (RAG) architectures to improve response accuracy.

Security in AI Agents

AI Agents introduce new risk layers that require careful attention. To ensure security, agents must operate under the principle of least privilege, receiving only the access necessary for their tasks. Additionally, the risks of indirect access to sensitive information through AI Agents must be anticipated, and appropriate security measures implemented.

—For example, an employee without access to payroll documents stored in the company's ERP could indirectly obtain this information if an internal AI Agent has access to it.

Thus, it is essential to implement strict authentication and authorization methods, client-side functions to better control sensitive data, and a systematic review of indirect access. Additionally, detailed audits should track the Agents' actions. The introduction of AI Agents adds security risks that must be assessed, requiring well-designed tools and clear documentation. It is also crucial to maintain simplicity in agent design and prioritize transparency, explicitly displaying the AI Agent's planning steps to identify and correct security issues.
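A minimal sketch of the least-privilege principle and audit trail described above: every tool call is checked against the scopes granted to the calling agent and recorded for later review. The agent names, scopes, and tool are hypothetical:

```python
AUDIT_LOG = []

# Each agent is granted only the scopes its task requires (least privilege),
# so a support agent cannot indirectly expose payroll data.
AGENT_SCOPES = {
    "support-bot": {"tickets:read", "tickets:write"},
    "payroll-bot": {"payroll:read"},
}

def read_ticket(ticket_id):
    """Hypothetical tool backed by a ticketing system."""
    return f"ticket {ticket_id}: VPN issue"

def call_tool(agent, required_scope, tool, *args):
    """Execute a tool only if the agent holds the required scope,
    recording every call for later audit."""
    if required_scope not in AGENT_SCOPES.get(agent, set()):
        raise PermissionError(f"{agent} lacks scope {required_scope}")
    AUDIT_LOG.append((agent, required_scope, tool.__name__))
    return tool(*args)

result = call_tool("support-bot", "tickets:read", read_ticket, 42)
```

An attempt by `support-bot` to invoke a payroll-scoped tool would raise `PermissionError` instead of silently leaking data, and the audit log gives reviewers the trace of agent actions the article calls for.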
■ Access the whitepaper Agents (PDF). — CONTRIBUTED BY Javier Coronado
February 27, 2025
Cyber Security
Cyber Security professionals in demand in 2025
The shortage of skilled talent is part of the 'perfect storm' forming within the Cyber Security sector: a global deficit of specialized professionals amounting to approximately 40%. Meanwhile, cyber threats become increasingly complex and perilous. State actors and cybercriminals employ advanced tactics to conduct espionage, influence operations, and disrupt critical infrastructure. At the same time, cyberattacks continue to rise, including ransomware attacks, supply chain breaches, and advanced persistent threats. This combination endangers the daily operations of businesses, threatens resilience and continuity, and jeopardizes critical infrastructure.

Last year, cyberattacks targeting critical infrastructures such as energy and transportation increased by 35%.

Against this backdrop, companies face a dual challenge: the urgent need to protect themselves against advanced threats and the difficulty of recruiting and retaining qualified talent.

The Cyber Security talent shortage problem

Factors contributing to this talent shortage include the accelerating number of cyber threats and the constant evolution of attack techniques employed by adversaries. Additionally, the pace of technological advancement surpasses the capacity of educational programs to prepare adequately trained and certified cybersecurity professionals. Demand for experts has intensified, increasing labor market competitiveness as companies strive to attract and retain top talent.

Consequences of the talent shortage

The lack of specialized cybersecurity personnel exacerbates the risks posed by cyber threats, compelling companies to adopt more innovative approaches to recruitment and retention strategies. Additionally, implementing advanced technological solutions rooted in automation and artificial intelligence (AI) is essential to enhance cybersecurity teams' efficiency and effectiveness.
An insufficient cybersecurity workforce can have severe consequences for businesses, including:

Prevalence of specific problems caused by personnel shortage.

Inadequate risk management: Without sufficient risk analysts, companies may fail to identify or mitigate risks promptly.

Poorly configured systems: A lack of skilled personnel can lead to system misconfigurations, leaving vulnerabilities that attackers can exploit.

Delayed patching: Inadequate patch application leaves businesses exposed for prolonged periods.

Delays in deployment: Without sufficient resources, companies may struggle to deploy protective solutions quickly, compromising security from the start.

Human error in procedures: A lack of skilled personnel can result in procedural errors in implementing security policies and practices.

Inability to stay informed on active threats: The vast array of cyber threats and their rapid evolution can cause companies to miss critical threats, increasing the risk of security breaches.

Cyberresilience is crucial for businesses facing complex cybersecurity challenges. It combines talent, holistic strategies, and advanced technologies.

Specialized Cyber Security profiles

Recognizing the growing complexity of cyber threats, we have identified the most in-demand specialized profiles in Cyber Security to address and respond effectively to security incidents. Among them are:

Incident Response Expert: Manages and coordinates the response to security incidents, ensuring threats are contained, investigated, and mitigated efficiently.

Threat Hunter: Actively searches for cyber threats within an organization's infrastructure, using advanced techniques to detect and neutralize malicious actors before they cause significant damage.

Threat Intelligence Analyst: Collects, analyzes, and reports information on cyber threats to anticipate and prevent potential attacks, providing strategic insights to enhance the organization's security posture.
Digital Forensics Analyst: Conducts thorough analysis of digital evidence to investigate incidents, identifying the root cause of attacks and aiding in incident recovery and fortification efforts.

Malware Expert: Studies and analyzes malicious programs to understand how they work, developing countermeasures and protecting the organization's systems from infections and cyberattacks.

Conclusion

Cybersecurity is a strategic and business priority requiring collaboration among multiple specialized experts. Cyberresilience not only safeguards an organization's assets but also ensures customer confidence and business sustainability in increasingly complex environments. Implementing robust policies, continuously training the workforce, and adapting to new technologies and regulations are essential for maintaining an effective and proactive security posture. While budget constraints may limit a company's ability to manage cybersecurity measures, efficient implementation of protective strategies is crucial to preserve defense quality, maintain business continuity, and ensure cyberresilience.
February 26, 2025
Telefónica Tech
AI & Data
Turn notes and reports into quick study guides with NotebookLM's AI
It is often said that many of the jobs of 2030 have not been created yet. In an era where information grows exponentially, continuous learning is key to adapting and advancing, both personally and professionally. However, learning is not always easy. The overwhelming amount of information available can be daunting, and finding the time and the right methods to organize and absorb new concepts is a challenge for many. Continuous learning ensures mastery of new skills while fostering the right mindset to seize emerging opportunities. This is where AI reveals its potential, offering more effective ways to acquire knowledge and optimize learning processes.

What is Google’s NotebookLM?

In this context, Google’s NotebookLM is an AI-powered learning tool that transforms extensive documents, such as academic papers, technical manuals, or class notes, into more dynamic and accessible study materials. NotebookLM uses AI to analyze and restructure document content based on individual needs. This facilitates learning by removing language and resource barriers while adapting information depth to each user’s knowledge and experience. Thanks to its analytical and synthesis capabilities, this free tool helps extract key concepts, suggest summaries, and generate interactive formats that enable a deeper and more engaging comprehension of information.

AI has become a powerful tool for learning.

How to use NotebookLM for learning

To make the most of NotebookLM, the first step is to upload one or more documents (PDFs, Word files, Google Docs, spreadsheets, videos, podcasts, web pages…) related to the subject you want to process or learn about. This could be a specialized whitepaper on cybersecurity or a scientific article on microbiology. Once integrated into NotebookLM, the tool becomes an ‘expert’ on that topic through the user’s Notebooks. Each Notebook can hold different documents, including handwritten notes that can be converted into text.
The AI analyzes all the information and extracts its main sections, automatically transforming it into study material in the preferred language. For example, you can generate a study guide with the most relevant points or a briefing that summarizes the core content. You can also request an FAQ with the most frequently asked questions derived from the text. This makes it easier to retain information and clarify doubts, or to create a glossary of key terms for quick reference. ■ The interactive mode in the audio summary feature is also useful for learning, as it allows voice interaction with content—similar to participating in a class. Key educational features of NotebookLM Briefing Doc generates clear and precise summaries, allowing you to quickly identify key points without reading the entire content—though you should check that nothing essential has been left out. —When working with the whitepaper Generative AI Agents: Architecture, Tools, and Applications, NotebookLM can synthesize key concepts, summarizing the architecture of AI agents, the tools used, and key use cases, offering a quick global overview. Study Guide creates personalized learning paths, breaking down complex topics into manageable sections. You can request specific explanations or guidance on how to delve deeper into a subject. —Following the same example, you can ask NotebookLM to design a learning guide to understand how different prompt engineering frameworks, such as ReAct, Chain-of-Thought, and Tree-of-Thoughts, influence an AI agent’s reasoning and decision-making abilities. Frequently Asked Questions (FAQ) A dynamic interaction feature that allows users to ask specific questions based on their documents. The AI responds with clear and precise answers, acting like a 24/7 personal tutor. You can also request a list of frequently asked questions about a topic. —Continuing with the same content example, you could ask: 'What is the main difference between AI agents and language models?' 
to receive an answer directly derived from the document. Add Notes allows you to incorporate personal annotations and link them to the original documents, keeping key ideas centralized and well-organized. —While studying the same whitepaper as in the previous examples, you could expand on the use cases mentioned. You could link them to specific sections where they are described, and turn those notes into additional sources for the model. ■ You can add (or remove) additional sources for the model at any time, in different formats. Continuing with the same example, you might want to incorporate the publication Building Effective Agents alongside the initial whitepaper. This will expand on different approaches or implementation examples. Timeline Depending on the case and available information, NotebookLM can generate a chronological sequence of key events and a list of the main figures involved. —If you are studying a historical event, you can visualize how different events unfolded and who played a role in the process. Interactive Mind Maps (March 2025 update) NotebookLM now includes the ability to generate interactive mind maps that automatically transform information into organized visual diagrams. This new feature enhances the existing toolkit: Instant visualization: Click the 'Mind Map' button located below the chat summary after using 'Summarize Document' to generate an interactive visual diagram. Dynamic exploration: Expand nodes by clicking the arrows to reveal subtopics and deeper layers of information. Flexible export: Download maps as PNG images for easy sharing or future reference. Full integration: Seamlessly works with the rest of NotebookLM’s features, allowing you to add specific notes to each node. Continuous learning: the key to expanding knowledge Continuous learning keeps our curiosity alive and encourages us to explore fields that once seemed unfamiliar or unreachable. 
Tools like NotebookLM, or other language models used locally on your own computer for the same purpose, not only accelerate content absorption but also allow for a customized learning experience. Combining technology and education reinforces the idea that knowledge is a shared resource—one that everyone can access with the right tools.
February 15, 2025
AI & Data
Humanity's Last Exam for a rigorous assessment of AI progress
In previous articles, we discussed the importance of evaluating language models' progress to understand their capabilities and limitations and to guide their development appropriately. It is not just about measuring their performance on specific tasks, but also the depth of what we call their knowledge and reasoning. As we saw then, language models are already beginning to surpass many traditional benchmarks, such as MMLU and GPQA, in some cases reaching scores above 90%. This has created the need for more demanding and representative assessment approaches. Beyond conventional benchmarks Today there is no universal framework for evaluating AI models. While organizations like NIST and ISO work on defining standards, in practice the industry relies on a set of de facto benchmarks, such as MMLU, GSM8K (for mathematics), and HumanEval (for code generation). However, each developer combines metrics and tests in its own particular way, making direct comparisons between models difficult. ■ One of the challenges in evaluating AI is the phenomenon of ‘test training’, where models are optimized to excel in evaluation tests without any true improvement in their reasoning capacity, much as in the Dieselgate case. Humanity’s Last Exam From this need arises the proposal for Humanity’s Last Exam (HLE), a method designed to scrutinize AI academic knowledge from a deeper perspective, measuring skills such as advanced reasoning, thematic understanding, and confidence calibration. Instead of focusing on multiple-choice questions or mere information retrieval, HLE evaluates knowledge through 3,000 closed-ended questions spanning over a hundred academic disciplines. Its goal is to better reflect LLM cognitive abilities in expert academic contexts, including: Humanities and Classics: History, literature, philosophy, and languages of ancient civilizations, such as Greece and Rome. 
Natural sciences: Physics, biology, and chemistry, with detailed analysis problems. Mathematics and logic: Questions that test formal reasoning and abstract problem-solving abilities. Ecology and life sciences: Assessments of biological interactions and organism adaptations to their environment. —For example, a question in the Classics theme might require the translation of a Roman inscription in Palmyrene script, provided the evaluated model is multimodal. Image: Humanity's Last Exam. A relevant aspect of HLE's development is the participation of a thousand experts from over 500 institutions in 50 countries. These experts include professors, researchers, and postgraduate degree holders. This variety of perspectives aims to ensure high-quality, in-depth questions that faithfully reflect academic challenges while incorporating different scientific and cultural perspectives. HLE questions seek to assess LLMs' deep understanding beyond mere text generation. Confidence calibration as the cornerstone of assessment LLMs sometimes generate incorrect answers, and it is important that they recognize when this happens and signal that they are not sure of their own answer. HLE emphasizes the need for models to recognize their own uncertainty instead of confidently generating hallucinations: incorrect or invented answers produced with excessive confidence. A well-calibrated model should indicate lower confidence in its response when there is a high probability of it being incorrect. To evaluate this, HLE requires that each generated response be accompanied by an indicator of its confidence level, expressed on a scale from 0% to 100%. If a model predicts a response with 90% confidence and that answer is incorrect, the model has a calibration problem and overestimates its certainty. If a model is well calibrated, then when it indicates 80% confidence in its response, its answers will be correct, on average, 80% of the time. 
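This calibration criterion can be sketched in a few lines of Python. The sketch below is illustrative, not HLE's actual scoring code: it bins answers by the model's self-reported confidence and compares each bin's average confidence against its real accuracy, yielding the standard expected calibration error (ECE) metric. The sample data is invented for the example.

```python
# Illustrative calibration check (not HLE's actual scoring code): bin answers
# by self-reported confidence, then compare each bin's average confidence
# with its real accuracy. The weighted gap is the expected calibration error.

def expected_calibration_error(answers, n_bins=10):
    """answers: list of (confidence in [0, 1], is_correct) pairs."""
    bins = [[] for _ in range(n_bins)]
    for confidence, correct in answers:
        index = min(int(confidence * n_bins), n_bins - 1)
        bins[index].append((confidence, correct))
    total, ece = len(answers), 0.0
    for bin_answers in bins:
        if not bin_answers:
            continue
        avg_confidence = sum(c for c, _ in bin_answers) / len(bin_answers)
        accuracy = sum(1 for _, ok in bin_answers if ok) / len(bin_answers)
        # Each bin contributes its |confidence - accuracy| gap, weighted by size.
        ece += (len(bin_answers) / total) * abs(avg_confidence - accuracy)
    return ece

# Well calibrated: claims 80% confidence and is right 8 times out of 10.
calibrated = [(0.8, True)] * 8 + [(0.8, False)] * 2
# Overconfident: claims 90% confidence but is right only half the time.
overconfident = [(0.9, True)] * 5 + [(0.9, False)] * 5

print(round(expected_calibration_error(calibrated), 2))     # 0.0
print(round(expected_calibration_error(overconfident), 2))  # 0.4
```

A well-calibrated model scores near zero; the overconfident model's 0.4 gap is exactly the mismatch between its claimed 90% confidence and its 50% accuracy.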
This seeks to detect when a model truly understands an answer and when it is just generating a prediction without foundation. Knowing this reliability parameter is also fundamental to understanding its limitations, especially if the model is to be deployed in critical areas such as medical diagnosis. HLE emphasizes the need for models to recognize uncertainty instead of generating incorrect answers with excessive confidence. Limitations and improvement opportunities The first results obtained with HLE show that advanced and widely used models, such as GPT-4o and Grok-2, have difficulty correctly answering highly specialized questions. In HLE they achieve significantly lower success rates and scores (less than 4%) than in previous tests. Image: Humanity's Last Exam. Additionally, HLE has identified issues with solving mathematical problems and handling abstract concepts. These deficiencies reflect that many models are better optimized for sequential word prediction than for deep structural understanding. These findings confirm the need for rigorous evaluation methods: tests that measure not only text generation abilities but also reasoning and generalization. ■ Calibration and interpretability: A well-calibrated model better reveals its own limitations and degrees of uncertainty, allowing for a better understanding of how and why it generates certain results or makes certain decisions. Greater transparency improves practical utility and reliability. Conclusion The development of revised evaluation metrics reflects both the accelerated evolution of language models and the need to improve how their progress is measured. HLE, with its focus on difficulty, thematic diversity, calibration, and global collaboration, aims to take a step further in assessing LLM academic knowledge. HLE results reveal persistent deficiencies in LLM mathematical reasoning and conceptual understanding. 
Despite advances in LLM performance, size, and efficiency, their modest success rates, even in the most advanced versions, indicate that AI still has ample room for improvement to meet the rigorous and precise demands of the academic and professional fields. The HLE proposal reaffirms the need for tests that holistically and rigorously evaluate models' capacity, to ensure model development aligns with human needs.
February 8, 2025
Telefónica Tech
AI & Data
How to install and use DeepSeek on your PC
In a previous post, we explored how to install LM Studio on a standard personal computer to run (more or less modest) LLMs locally, without relying on an internet connection or cloud-based services, as is required when using ChatGPT. Language models and LLMs have democratized access to advanced AI, making it more accessible to individual users and small businesses. These models are becoming increasingly capable of generating coherent and creative text, writing code, analyzing data, and, in some cases, generating visual content using specialized models. Returning to LM Studio and similar software, these programs allow users to run language models on relatively modest personal computers if they meet certain requirements, such as a modern processor and sufficient RAM—ideally 16 GB or more. ■ This transforms these language models into personal AI assistants that help with tasks such as translation, text summarization, code review, or content creation. What is DeepSeek R1 Distill The startup DeepSeek has developed advanced language models such as R1, V3, and Janus, which stand out for their efficiency and performance despite using fewer training resources than similar models. These models are open source and non-proprietary. This means that you could download DeepSeek R1 and install it on your computer, assuming it meets the hardware requirements. It also means that you can create distilled (distill) versions of the R1 model, smaller and with lower hardware requirements. You can run these versions on a regular PC (including Copilot+ PCs), allowing you to run AI applications locally and efficiently. How to distill an LLM model? Distilled models are smaller, more efficient versions of large language models created through a process known as distillation. This method reduces the model's size and resource consumption while preserving much of its key knowledge. 
The distillation process works by having a large, complex model train a smaller model that learns to replicate its behavior with significantly lower computational demands. As a result, distilled models can run on consumer-grade computers without requiring specialized hardware. However, distillation has the main drawback of losing certain details, specialized knowledge, and complex reasoning abilities. This can impact the distilled model's accuracy or capacity to handle sophisticated or domain-specific queries. Installing DeepSeek R1 Distill on your computer with LM Studio Once you have LM Studio (or a similar program) installed and updated to the latest version, follow these steps: Open LM Studio and access the model catalog by clicking the magnifying glass icon. Look for the model by typing 'DeepSeek R1' into the search bar. —The model DeepSeek R1 Distill (Qwen 7B) is well-suited to a modest personal computer. Select the model and click Download. Once the download is complete, switch to the Chat tab in the sidebar. In the Select a model to chat with dropdown menu, choose DeepSeek R1 Distill (Qwen 7B) or any other DeepSeek model you have downloaded. Start a conversation with DeepSeek in the Chat window. Although, as we have mentioned, distillation inevitably leads to some loss of detail and reasoning ability, having access to advanced models like DeepSeek R1 Distill on personal computers enables users to run powerful models in more diverse settings. Furthermore, it allows individuals to gain a deeper understanding of how these models work, explore their limitations, and experiment with their adaptation and customization. This opens new possibilities for personal and professional applications, including education, learning, and research. ■ DeepSeek in Perplexity. If you are a Perplexity Pro user you can use the full version of DeepSeek by selecting 'Reasoning with R1' from the available models. 
—You can use it for a variety of applications, such as data analysis, summarization, translation, text generation, or programming assistance.
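To close the loop on the 'How to distill an LLM model?' question above, here is a rough illustration of the general idea in Python (not DeepSeek's actual training recipe): the student model is trained to minimize the divergence between the teacher's softened output distribution and its own. The logits and temperature below are invented for the example.

```python
import math

# Illustrative sketch of the core distillation objective (not DeepSeek's
# actual recipe): the student is pushed to match the teacher's softened
# output distribution on the same input.

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]  # large model's output logits (invented)
student = [2.0, 1.5, 1.0]  # smaller student, not yet matching

# Positive loss; training drives it toward zero as the student mimics
# the teacher, transferring behavior at far lower computational cost.
print(distillation_loss(teacher, student))
```

The temperature softens both distributions so the student also learns from the teacher's 'dark knowledge': the relative probabilities it assigns to wrong answers.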
February 1, 2025
AI & Data
How AI enhances accessibility and breaks down language barriers
Imagine the frustration of encountering digital content you cannot consume. Millions of people remain excluded from digital content due to disabilities, learning differences, or language barriers, limiting their educational, professional, and social opportunities. AI and machine learning are opening up a world of possibilities to make content and knowledge accessible to everyone. These technologies have the potential to personalize and adapt the user experience, reducing the gaps faced by individuals with disabilities, learning differences, or varying abilities. The transformative role of AI in accessibility AI has enabled the development of tools and resources that facilitate access to information in different formats and languages. Text, images, audio, and video translation makes it easier to access technical information, courses, entertainment, books, and more. It also expands the possibilities for consuming internet and social media content, navigating in unfamiliar places, and interacting in physical spaces. Accessibility technologies benefit everyone: they expand access to content, enhance social interaction and inclusion, and enrich user experiences. Some of the most impactful technologies breaking down these barriers include: Natural Language Processing (NLP): Allows machines to comprehend and generate human language, including sentiment analysis, information extraction, and contextual understanding. Language models: Ranging from basic statistical models to large language models (LLMs), these systems handle complex contexts and generate coherent responses. Computer vision: Analyzes and interprets visual content, such as object recognition, scene analysis, text recognition in images, and assisted navigation. Text-to-Speech (TTS) with neural synthesis: Converts written text into natural, expressive audio, delivering appropriate intonation and rhythm. Neural machine translation: Enables real-time adaptation and interpretation of content between languages. 
Automatic speech recognition (ASR): Accurately converts spoken audio into text, accommodating diverse accents and contexts. In addition to these, emerging technologies like haptic interfaces, eye-tracking systems, gesture-based controls, and even neural interfaces are enhancing accessibility in unprecedented ways. Multilingual video dubbing One significant breakthrough is the automatic dubbing of videos into multiple languages. Advanced AI technologies can now recreate the speaker's original voice in different languages, preserving its tonal characteristics and authenticity. Lip synchronization algorithms further enhance this experience by matching the audio with the speaker’s lip movements, creating a viewing experience as seamless as watching the original version. This not only makes content more accessible to global audiences but also preserves the original emotional and performative nuances, allowing viewers to connect more intimately with the characters. Success Story: The Prado Museum The Prado Babel initiative by the Museo Nacional del Prado is a shining example of this technology. Through AI, the museum has translated and dubbed its video presentations into multiple languages. In the video, museum director Miguel Falomir retains his voice and image while speaking in over a dozen languages. With the support of Telefónica, its technological partner, the museum offers visitors the opportunity to listen to explanations about its identity and collections in their own language. This fosters a more personal, inclusive, and accessible experience. For this project, we at Telefónica Tech evaluated various tools to achieve the required dubbing and lip-sync quality. ElevenLabs ensured expressivity and emotional fidelity in voice reproduction, while HeyGen preserved the natural synchronization of lip movements with AI-generated audio. 
✅ By using these technologies, the museum ensured that dubbed versions in diverse languages, such as German, Chinese, or Greek, maintained the authenticity of the original, creating a seamless audiovisual experience that reflected Falomir’s personality. Automatic subtitles and Speech-to-Text transcription Automatic video subtitles, widely available on platforms like YouTube and social media, significantly enhance accessibility. These systems contextualize translations and expand the number of supported languages, generating subtitles in real time. By analyzing video audio, these technologies transcribe speech and synchronize it with visual content, providing written captions. This feature is invaluable for individuals with hearing impairments and also benefits those consuming content in another language or in silent environments. — Automatically generated YouTube subtitles. Speech-to-text transcription enables the quick capture of conversations or speeches, converting them into written text. This tool is helpful for people with hearing disabilities and in contexts like courses, conferences, or meetings. It is commonly integrated into virtual assistants, voice memo apps, language models, translators, and word processors, improving accessibility and productivity. ■ VideoLAN recently showcased a demo of its VLC video player, which, using open-source AI models, generates subtitles automatically and translates them into over 100 languages in real time as the video plays. 'Universal' web page translation Real-time website translation is becoming increasingly common. Tools like Google Translate and Microsoft Translator, integrated into browsers, translate content into multiple languages. Furthermore, these technologies are now native to web development, enabling browsers to interpret and translate text within images. —Current web browsers can identify, interpret, and translate text in images on a web page (original page in English translated into Spanish). 
This capability has the potential to eliminate language barriers and 'universalize' access to information and knowledge, offering seamless, accurate translations that make content navigation and understanding effortless for users. Text-to-Speech (TTS) TTS technology automatically converts text into speech and has been significantly enhanced by AI. Modern TTS systems leverage NLP to analyze text and generate speech with appropriate intonation, rhythm, and emotion, handling pauses, accents, and pronunciation nuances specific to each language. TTS applications span virtual assistants, GPS systems, educational tools, language learning apps, and audiobook narration. It is also critical for automatic video dubbing. For many, this technology transforms their interaction with the world. It empowers individuals with disabilities—especially those with visual impairments—to use mobile devices, computers, and other tools, enabling communication and access to previously inaccessible content. ■ Coqui TTS is an open-source voice synthesis engine that supports multiple languages and allows voice cloning with minimal audio samples. It benefits individuals with visual impairments, reading difficulties, and minority languages, as it can be trained to support new languages. Computer Vision for environmental understanding Computer vision has unlocked new possibilities for accessibility through advanced recognition of visual content using algorithms that analyze and comprehend images captured or viewed through devices like mobile cameras. These systems analyze and describe the content of an image or video in real-time, providing descriptions in text or speech to individuals with visual impairments. This enables users to understand the context and meaning of visual media that was previously inaccessible, such as what is happening in a movie scene. 
Additionally, this technology identifies and describes text within images, which benefits individuals with dyslexia or other reading challenges, including those related to language differences. With applications like Google Lens, users can point their cameras at signage, restaurant menus, documents, or other text, and the app overlays the translation directly onto the original image. This allows anyone—travelers, students, or people living in multilingual environments—to gain an instant, accessible understanding of their surroundings. ■ Models like Cloud Vision or Ollama OCR, which can be implemented in various projects and developments, are capable of precisely extracting text from photographs, images, and complex documents. Computer vision enhances accessibility for people with visual impairments or reading difficulties while also enabling interactive tools useful for anyone. Challenges and opportunities While progress is undeniable, several challenges remain: Digital divide: Many lack access to the technology or the knowledge required to use these tools. —A person with a hearing impairment may benefit from this technology but cannot access it due to the lack of a suitable device or the necessary digital skills. Bias in AI models: Insufficient linguistic diversity in training datasets can lead to errors in understanding regional accents, idiomatic expressions, or cultural nuances. —A poorly trained system might translate (from Spanish) “Me tomas el pelo” as “You take my hair” instead of the correct idiomatic equivalent, “You’re pulling my leg,” losing its intended meaning. Cost and technological dependence: While many AI tools are technically accessible, they may require significant investments in infrastructure, training, and maintenance. —Accessing AI-based translation tools might require owning and knowing how to use a smartphone or computer, which isn’t feasible for everyone. 
Overcoming these challenges requires multidisciplinary collaboration among developers, educators, end-users, and public institutions. Continuous evaluation of AI systems is essential—not only in terms of technical performance but also of their social and ethical impact. ■ Ethical guidelines are imperative to respect user privacy and digital rights. These tools must operate efficiently and transparently, and minimize risks such as unauthorized data collection. Conclusion AI presents an unprecedented opportunity to build a more accessible and inclusive digital world. However, implementation is not without its challenges. Addressing ethical, technical, and social issues proactively is essential to ensuring these tools benefit everyone, regardless of their abilities or circumstances. The journey toward digital accessibility is a continuous and collective effort, demanding commitment, innovation, and sensitivity. These technological advancements close gaps, improve lives, and open the door to a future accessible to all.
January 29, 2025
Telefónica Tech
AI & Data
ELIZA Reanimated lets you chat with a 'ChatGPT' from 1965
In the 1960s, at MIT laboratories, computer scientist Joseph Weizenbaum developed the program ELIZA, which is now considered the first chatbot. Weizenbaum created ELIZA using the MAD-SLIP programming language, marking one of the earliest attempts to simulate human conversation. Weizenbaum’s goal in developing ELIZA was to explore how humans react when interacting with a machine that appears to handle natural language, even without understanding the meaning of the words. To encourage conversation, ELIZA took on the role of an emotional therapist, responding to users by reformulating their own words. This type of interaction often resulted in users having the illusion of genuine understanding, a phenomenon now known as the ELIZA effect. ■ MAD-SLIP (Michigan Algorithm Decoder Symmetric List Processor) was a programming language based on MAD, extended with symbolic list manipulation capabilities. It was specifically designed to facilitate natural language processing and complex data structure handling on computers of that era. The ELIZA restoration project The ELIZA Reanimated restoration project began with the recovery of the original source code from Weizenbaum's personal archives at MIT. This material included the DOCTOR script, an early version of the MAD-SLIP code, and several supporting functions written in obsolete programming languages such as MAD and FAP. Restoring ELIZA's original code, which consists of 420 lines, posed several challenges—from manually transcribing code due to 6-bit BCD character encoding, to reimplementing critical missing functions. Additionally, unexpected errors were found in the code that needed to be fixed to preserve the authenticity of the experience when running the program on an IBM 7094 emulator, ensuring the software operated in an environment as close to the original as possible. 
Moreover, during the restoration process, it was discovered that the original version of ELIZA lacked some functionalities described in Weizenbaum’s 1966 paper. This, in practice, limited some of the capabilities attributed to ELIZA. On the other hand, the original code contained a hidden feature called 'teacher mode,' which allowed users to edit and save ELIZA’s scripts to modify and add new rules, effectively expanding ELIZA’s conversational capabilities. Although Weizenbaum mentioned this teaching functionality in 1966, it was never documented. Despite ELIZA’s popularity in its original form, its broad reach was achieved through cloned versions. A version developed in Lisp by Bernie Cosell quickly spread across ARPANet, the precursor to today's internet, while a BASIC version from 1977 enabled its adoption by early personal computer enthusiasts. These variants eventually overshadowed the original MAD-SLIP version, which faded into obscurity until its recent rediscovery. ■ The restoration process is thoroughly documented in ELIZA Reanimated: The world's first chatbot restored on the world's first time-sharing system → How to chat with ELIZA today Thanks to the restoration efforts, ELIZA is now available in two ways, allowing users to experience what it was like to interact with this pioneering chatbot from 60 years ago: Through a web browser on the Try ELIZA page, offering an easy and quick way to access it without installation or configuration. As an open-source project available on GitHub, which can be downloaded and run in an emulated IBM 7094 environment on operating systems such as Unix and macOS. Unlike modern chatbots like ChatGPT, which use advanced AI models capable of understanding context, generating complex responses, and learning from vast amounts of data, ELIZA relied on simple pattern recognition rules to simulate human conversation. 
While modern systems analyze the deeper meaning of language and can adapt to different conversational styles, ELIZA merely responds with predefined phrases, reformulating the user’s keywords without truly understanding their meaning. However, ELIZA’s impact on AI development was significant, as it demonstrated the potential for human-machine interaction, influencing the creation of the language models we use today.
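The simple pattern recognition rules described above can be illustrated with a minimal, modern sketch (a simplified Python homage, not the restored MAD-SLIP code): each rule pairs a keyword pattern with a response template that reflects the user's own words back, much as the DOCTOR script did. The specific rules and phrasings here are invented for the example.

```python
import re

# A simplified, illustrative homage to ELIZA's rule-based approach (Python,
# not the restored MAD-SLIP code): each rule pairs a keyword pattern with a
# template that reformulates the user's own words, as the DOCTOR script did.

RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Why does your {0} concern you?"),
]
DEFAULT = "Please, go on."  # fallback when no keyword matches

def respond(user_input):
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Reflect the captured words back, stripping trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am sad"))       # Why do you say you are sad?
print(respond("Nice weather."))  # Please, go on.
```

No meaning is understood at any point; the illusion of conversation comes entirely from reflecting the user's keywords, which is precisely the ELIZA effect the article describes.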
January 25, 2025
Cyber Security
Cloud
Connectivity & IoT
AI & Data
Industry 4.0 in 2025: Trends, challenges, and technological solutions
In the era of Industry 4.0, digitalization has become a cornerstone of competitiveness and efficiency in industrial enterprises. The integration of digital technologies such as AI, the Internet of Things (IoT), Cloud and Edge Computing, and Cybersecurity is transforming how factories operate and connect. This analysis highlights key trends and challenges in the industrial sector in 2025, emphasizing strategies that enable comprehensive digitalization and drive sustainable, secure growth through efficiency, productivity, and resilience. Current trends in the industry Digitalization and factory automation Connected factories are a pillar of Industry 4.0. The interconnection of all processes and devices within a plant enables real-time monitoring and more efficient resource management, enhancing production and reducing operational costs. The adoption of digital technologies and automation allows factories to operate with higher precision, optimize production, improve product quality, predict failures, and minimize downtime through predictive maintenance. Connectivity and data analytics IoT technology connects a wide variety of sensors, components, robots, and devices that generate vast amounts of data. Advanced analytics and AI transform this data into actionable insights, enabling timely and accurate decision-making. Advanced analytics foresee problems before they occur, optimizing processes and reducing unforeseen disruptions and downtime. Additionally, data analysis supports informed decision-making and improves business and production operations. The global industrial automation market is projected to reach $265 billion by 2025. Smart Data Path This comprehensive approach to factory and production process digitalization spans from data collection to action based on intelligent analysis while incorporating cybersecurity throughout the process. 
Smart Data Path encompasses critical stages such as securing, connecting, processing, analyzing, and acting, creating a robust and secure environment for data flow and storage. This total integration enables factories to optimize resource usage, enhance operational efficiency, and promote sustainable growth. ■ Digital Twins: Virtual replicas of physical processes, products, and systems allow real-time monitoring, analysis, and optimization. They improve efficiency, reduce downtime, and facilitate scenario simulations and preventive planning. ■ Augmented Reality (AR) and Virtual Reality (VR): These technologies are transforming workforce training, collaboration, and operations through immersive experiences that enhance understanding and operational capabilities in complex environments. Digital technologies also reduce energy consumption, lowering the environmental impact and operational costs. Key challenges in the industry Cybersecurity and operational resilience Industrial systems' digitalization increases the attack surface and cyber risks. Protecting critical data and systems is essential to prevent disruptions and ensure operational continuity and supply chain resilience. This requires implementing robust cybersecurity measures, adhering to regulatory standards, and continuously training staff to identify and respond to potential threats. Adapting and integrating legacy infrastructures Modernizing existing infrastructures without compromising production is a significant challenge. Migration to advanced systems must be gradual and safe, tailored to each manufacturer’s circumstances, objectives, and needs. A detailed industrial digitalization plan is essential, addressing hardware, software, and security protocol updates to ensure a smooth transition without disrupting daily operations. Managing large data volumes Massive data generation in industrial plants necessitates efficient storage and processing solutions. 
Maintaining data integrity, security, and sovereignty while maximizing its value is critical. Companies must adopt appropriate technologies and strategies to effectively manage large data volumes, enabling more informed and precise decision-making.

Advanced automation improves production accuracy and speed, reduces errors, and enhances final product quality.

Innovative solutions through digital technologies

The integration of digital technologies offers advanced solutions to overcome upcoming challenges in the industrial sector:

AI and Advanced Analytics: AI systems and machine learning (ML) techniques allow manufacturers to predict demand, optimize production, and improve product quality. Advanced analytical models anticipate problems and support proactive decision-making.

Industrial IoT: IoT technology connects devices and systems in real time, enhancing visibility and control over operations. This interconnection of cyber-physical systems facilitates predictive maintenance and efficient supply chain management.

Cloud and Edge Computing: Cloud migration offers scalability and flexibility, enabling industrial enterprises to adapt quickly to market changes. The cloud also provides advanced security and data analytics solutions.

Cybersecurity: The industrial sector must implement robust cybersecurity strategies to protect its data, systems, and infrastructure. Network segmentation and segregation, IT/OT convergence, and continuous monitoring and prevention help mitigate risks and potential threats, ensuring the resilience of industrial operations and safeguarding critical systems.

■ Open Gateway provides standardized APIs that enhance connectivity and automation in industry and manufacturing. Telefónica’s Open Gateway program offers partner networks and a Developer Hub with technical tools and documentation, fostering innovation and development across supply chains and other industrial sectors.
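The predictive analytics described above, flagging problems before they occur, can be reduced to a minimal statistical check on sensor history. The sketch below is illustrative only; the function name, vibration values, units, and threshold are hypothetical, not part of any specific product:

```python
from statistics import mean, pstdev

def detect_anomaly(readings, new_value, threshold=3.0):
    """Flag a sensor reading that deviates from recent history by more than
    `threshold` standard deviations -- a minimal predictive-maintenance check."""
    mu, sigma = mean(readings), pstdev(readings)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

vibration = [0.42, 0.40, 0.43, 0.41, 0.44, 0.42]  # mm/s, normal operation
print(detect_anomaly(vibration, 0.43))  # → False (within normal range)
print(detect_anomaly(vibration, 1.80))  # → True  (bearing may be degrading)
```

In practice such checks run continuously on streaming plant data and trigger maintenance work orders rather than print statements.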
Conclusion

Industrial digitalization through the integration of digital technologies like AI, IoT, Cloud and Edge Computing, and Cybersecurity provides a clear pathway to increased efficiency and competitiveness for manufacturers. These innovations enable process optimization and improved product quality, offering essential tools for informed and proactive decision-making. Strategies like Smart Data Path ensure secure and efficient information flows, allowing industrial enterprises to transform and thrive in the context of Industry 4.0. Successfully managing large data volumes, enhancing cybersecurity, and modernizing legacy infrastructures are critical tactics for achieving digital transformation in the industrial sector. Adopting these technologies and strategies is imperative to remain competitive and represents an opportunity to reduce production costs, enhance operational resilience, and advance toward a more innovative and robust industrial future.
January 21, 2025
AI & Data
What’s for dinner? How to use Generative AI to optimize meal planning
Food waste is a global issue with a significant impact on household budgets and the environment. At home, poor meal planning is the main culprit. We often buy more than we need or forget about items that end up in the trash. It’s a common, costly problem, both economically and environmentally.

Generative Artificial Intelligence (Gen AI), through accessible tools like ChatGPT, Perplexity, or Bing, offers practical and effective solutions to this issue by optimizing how we buy, consume, and utilize food. These tools leverage advanced language models trained on vast amounts of data, enabling them to analyze available ingredients, suggest recipes, and tailor menus to fit individual preferences and needs.

Generative AI is a powerful and accessible tool that makes us more efficient in all areas of life.

AI-powered menu planning

Generative AI simplifies creating weekly meal plans. For instance, using tools like ChatGPT, you can ask a straightforward question:

> What can I cook with the ingredients I have?

This allows the AI to process your food inventory and generate recipes that minimize additional shopping, cutting down on food waste. It’s not just about answering “What’s for dinner?”, but doing so in a way that maximizes efficiency. The true strength of these tools lies in their ability to customize responses based on our preferences. For example, if you ask:

> Act as a chef and create a weekly healthy, high-protein menu for two adults and two kids, including the necessary quantities.

The AI can produce detailed and tailored plans.

✅ The more detail you provide in your prompt, the better the results. A general request like "Create a weekly menu" is a good start, but adding specifics such as the number of people, their ages, or dietary restrictions yields far more accurate solutions. This precision improves meal planning and minimizes food waste.

By using AI we can use what we already have and save money.
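The advice above, that more detail yields better answers, also applies when prompts are assembled programmatically. This toy helper is purely illustrative; the function name and wording are invented for this example, not part of any tool mentioned here:

```python
def build_menu_prompt(people, days, pantry, restrictions=None):
    """Compose a detailed menu-planning prompt; more detail -> better results."""
    prompt = (
        f"Act as a chef and create a {days}-day healthy menu "
        f"for {people} people, including the necessary quantities. "
        f"Prioritize these ingredients I already have: {', '.join(sorted(pantry))}."
    )
    if restrictions:
        prompt += f" Dietary restrictions: {', '.join(restrictions)}."
    return prompt

print(build_menu_prompt(4, 7, {"rice", "chicken", "spinach"}, ["no nuts"]))
```

The resulting string can be pasted into ChatGPT or any similar chat interface; the point is that the specifics (people, days, pantry, restrictions) are what make the answer useful.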
Utilize ingredients and suggest substitutes

Another major benefit of generative AI is its ability to make the most of your available ingredients and suggest substitutions when something is missing. Imagine you’re trying to make guacamole but don’t have avocados. Instead of running to the store, you could ask:

> Can I make guacamole without avocados?

The AI will provide practical alternatives, encouraging you to experiment with what you already have at home, reducing waste in the process. This approach not only resolves immediate problems, like missing a key ingredient, but also fosters more efficient management of your food supply. Making better use of what you have decreases the need for additional shopping trips and minimizes waste.

You can also adapt recipes to local ingredients:

> Can you adapt the Gyudon recipe using common ingredients in [region/country]? Provide a shopping list for 4 adults.

And even to your preferred kitchen equipment:

> What would the process be to prepare the recipe in a [food processor or multifunction pot model]?

Beyond meal planning and recipe adjustments, generative AI has broader implications for efficient food management. By offering personalized suggestions based on the ingredients you already have, these tools help you make smarter purchasing decisions and better organize your meals. This is particularly valuable for busy households, as AI provides quick, tailored answers that fit each family’s unique circumstances.

✅ If you have ingredients nearing their expiration date, AI can suggest recipes to use them up. It can also generate reverse shopping lists, highlighting only the items needed to complete recipes based on what you already have, once again reducing unnecessary purchases and food waste.

This tool's potential lies in its ability to personalize responses based on our preferences.
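The "reverse shopping list" idea boils down to a simple set difference between a recipe and what is already at home. The ingredient names below are illustrative:

```python
def reverse_shopping_list(recipe_ingredients, pantry):
    """Return only the items still needed, given what's already at home."""
    return sorted(set(recipe_ingredients) - set(pantry))

needed = reverse_shopping_list(
    ["onion", "rice", "beef", "soy sauce", "egg"],  # Gyudon-style recipe
    ["rice", "egg", "onion"],                       # pantry contents
)
print(needed)  # → ['beef', 'soy sauce']
```

A chat assistant performs the same comparison in natural language when you tell it what you have and what you want to cook.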
AI’s impact on the food industry

Outside the home, AI is being embraced across various sectors of the food supply chain to optimize logistics and the distribution of perishable goods. For instance, companies use AI to predict product demand well in advance, preventing overstocking and large-scale waste. Restaurants can adapt their daily menus based on ingredient availability and customer preferences, cutting costs and reducing waste.

Generative AI is a powerful and accessible tool that enhances efficiency at every level. From households to production sectors, Generative AI also optimizes food resource management with effective solutions that reduce the environmental and economic impacts of food waste.
January 15, 2025
Cyber Security
Intelligent Workplace
Microsoft 365 Copilot: A guide to protecting business data
Data protection has become a priority for businesses, and adopting advanced tools like Microsoft 365 Copilot requires understanding and mitigating its associated risks. Knowing and implementing protection strategies and security policies is essential for businesses to leverage Microsoft 365 Copilot effectively, maximizing both its utility and security. Achieving this requires implementing robust Cyber Security measures and keeping compliance regulations up to date. Additionally, continuous employee training on secure practices and the proper implementation, management, and updating of advanced technologies are essential to minimize vulnerabilities and protect business data integrity.

Components of Microsoft 365 Copilot

Microsoft 365 Copilot is a tool designed to enhance workplace efficiency through advanced AI models, as we explored in additional detail in What is Microsoft 365 Copilot and how it can help you in your work. Integrated with Microsoft 365 applications, Copilot is a powerful assistant that uses natural language to transform words into content, analyses, or productive actions.

Microsoft 365 Copilot has four key components designed to improve productivity and collaboration in a business environment:

Large language models (LLMs): These AI systems enable Copilot to understand and respond to user requests.

User data (Microsoft Graph): Includes emails, calendars, chats, and documents accessible to the user.

Microsoft 365 applications: Such as Exchange, SharePoint, Teams, and OneDrive, which serve as information sources for Copilot.

Internet: To supplement information and provide more comprehensive answers.

📎 Microsoft Graph allows Copilot to access and analyze all this user information (in accordance with access permissions and information classification) to understand requests and provide accurate, useful, personalized, and contextually relevant responses.
Microsoft 365 Copilot enhances productivity but requires a comprehensive security approach to protect data and ensure compliance.

Security, compliance, and privacy

Microsoft has built Copilot on a solid commitment to security, protecting user data and information at all times. Responsible AI usage policies, alongside robust security measures, meet compliance and privacy standards.

Risks to information

Copilot adoption involves various information risks that must be appropriately managed to avoid compromising data integrity and confidentiality, including:

Information overexposure: Facilitating direct access to a large amount of data may lead to sensitive information exposure without proper classification.

Leakage of personal or sensitive information: Improper classification and incorrect permissions can compromise confidentiality and document integrity.

Misinformation: Inappropriate data handling may result in inconsistent or erroneous responses, commonly called AI hallucinations. Proper version management and information classification are crucial to avoiding such issues.

⚠️ LLMs can compromise privacy by generating information consistent with personal data, even without storing that complete information. They may also produce incorrect or misinterpreted content about individuals, affecting their reputation and personal data integrity.

With appropriate protection and governance policies, it is possible to maximize the benefits of this tool while minimizing associated risks.

Data protection strategies in Microsoft 365 Copilot

To mitigate the risks associated with adopting Copilot, it is essential to implement robust data protection and governance policies. Here are some key strategies:

Access protection: Managing permissions to adhere to the principle of least privilege is crucial. This helps prevent oversharing and protects sensitive information.

Information protection: Knowing, identifying, and labeling data based on its sensitivity is imperative.
Security measures like encryption and Data Loss Prevention (DLP) ensure information remains protected.

Data lifecycle management: Streamlining the creation and deletion of unnecessary data silos is essential to maintaining data security and management efficiency. Auditing and monitoring data usage also help detect inappropriate behaviors and enable corrective actions.

Information protection lifecycle

Adopting Copilot requires a thorough review of security configurations and the implementation of appropriate measures to protect data throughout its lifecycle, including:

Configuration review: Assessing default collaboration permissions and Teams and SharePoint management policies to ensure sensitive data is not exposed.

Classification and protection: Conducting landing activities to build the information lifecycle according to the organization’s context. Implementing security measures ensures that Copilot accesses only permitted information, reducing security risks.

Monitoring and response: Determining the tools and processes needed to maintain a lifecycle aligned with organizational needs and adapting to new situations to respond effectively to incidents.

Proactive security management and proper information classification are essential for a safe and efficient work environment.

Telefónica Tech’s proposal for data protection in Microsoft 365 Copilot adoption

Every client has specific needs, and data protection strategies must adapt to those needs. At Telefónica Tech, our data protection proposal for Copilot adoption includes:

End-to-end involvement: From identifying needs to executing remediation.

Expert team: Specialists in Microsoft 365 and cybersecurity with experience in security assessments for Copilot adoption.

Adaptation to support needs: Aligning infrastructures and models with client business case execution requirements.

Conclusion

Adopting Microsoft 365 Copilot presents both opportunities and challenges in terms of data protection.
With appropriate protection and governance policies, it is possible to maximize this tool's benefits while minimizing associated risks. The key lies in proactive security management, proper information classification, employee training, and robust protection measures to ensure a safe and efficient work environment.
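The least-privilege and sensitivity-labeling strategies described in this article can be illustrated with a toy access check. To be clear, the labels, levels, and function below are invented for illustration; this is not Microsoft 365's actual permission model, which relies on Microsoft Graph permissions and sensitivity labels:

```python
# Hypothetical label hierarchy, from least to most sensitive.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_access(user_clearance, doc_label):
    """An assistant honoring least privilege should only surface documents
    at or below the user's clearance level."""
    return LEVELS[user_clearance] >= LEVELS[doc_label]

print(can_access("internal", "confidential"))  # → False (blocked)
print(can_access("confidential", "internal"))  # → True  (allowed)
```

The practical takeaway is the same as in the text: if documents are labeled and permissions follow least privilege, an AI assistant cannot overexpose what a user was never entitled to see.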
January 14, 2025
Telefónica Tech
Cyber Security
AI & Data
Cyber Security trends for 2025
It is often said that cybersecurity is constantly evolving, and 2025 will be no exception. The growing incorporation of AI solutions into an increasing number of products, services, and business processes offers new opportunities while also anticipating challenges and cyber threats. To address these, companies must adapt and implement more robust, intelligent, and proactive cyber defenses.

In 2025, advanced cybersecurity will be essential to protect information and digital assets. It will ensure operational and business continuity, maintain regulatory compliance, and preserve customers' and users' trust. From identity and access management to advanced AI solutions, both businesses and individuals need to understand key Cyber Security trends to anticipate threats and design effective protection strategies. We explore the key trends shaping the Cyber Security ecosystem in 2025, along with recommendations and measures businesses should adopt to reduce their exposure to risk.

Understanding key Cyber Security trends is the first step toward preparing a solid and effective defense strategy.

Data exfiltration: sophisticated tactics for common targets

The theft of sensitive information remains one of the greatest risks in the digital environment. However, attackers’ tactics are evolving rapidly. In 2025, it will not suffice to protect traditional data channels and storage: every link in a complex digital value chain must be secured. This chain includes IoT devices, cyber-physical systems, cloud storage, third-party applications, APIs, and AI-based platforms.

Why this is relevant

In the digital economy, data is a production factor, and information is a valuable asset. Customer data, trade secrets, intellectual property, financial information, and medical records can all be exploited for fraud, industrial espionage, blackmail, or illegal commercialization. The shift to Cloud computing and the proliferation of connected devices expand companies' attack surfaces.
Tactics and attack vectors

Zero-day vulnerabilities: Attackers exploit unknown flaws in software, IoT devices, or cloud services, gaining access to data before developers can address the issues.

AI-driven malware: AI enables attackers to create and enhance malicious software, adapting it to evade detection tools.

Exploitation of insecure APIs: Poorly configured or unprotected application programming interfaces (APIs) present opportunities for data extraction, as many companies rely on third-party services with weak security controls.

IoT devices as entry points: Sensors, cameras, peripherals, and other devices connected to corporate networks can serve as attack vectors to access internal systems and extract data.

Protection recommendations

Multi-layered security: Deploy next-generation firewalls (NGFWs), intrusion detection and prevention systems (IDS/IPS), Cloud security solutions (CASB), and identity and access management (IAM) tools in a coordinated manner.

Comprehensive encryption: Protect data at rest and in transit with strong encryption, including post-quantum encryption, and ensure secure key management.

Continuous monitoring and agile response: Early detection is critical. Invest in network monitoring systems, behavioral analysis, and a well-structured incident response to minimize impacts.

Vendor and API assessment: Ensure third-party providers (and their subcontractors) comply with security standards and conduct regular audits to identify and address vulnerabilities.

Network segmentation: Divide networks into smaller, isolated segments, such as separate IT/OT zones, to limit threat propagation and simplify tailored security policies.

Threat intelligence tools: Use threat intelligence platforms to collect, analyze, and share information on emerging threats, enabling proactive defenses.
Data governance: Implement comprehensive lifecycle management for data, from creation to deletion, ensuring secure handling and regulatory compliance to minimize risks and guarantee data integrity and privacy.

Response and recovery plans: Business continuity and incident recovery plans are essential to mitigate operational and reputational impacts, expedite the resumption of critical operations, and strengthen resilience to cyber threats.

AI as a double-edged sword: empowering both attackers and defenders

AI is a powerful tool defenders use to anticipate, detect, and mitigate incidents more effectively. However, cybercriminals also use it to automate vulnerability scanning, craft highly deceptive phishing schemes, and develop malware. The emergence of AI on both fronts escalates the complexity of the Cyber Security arms race.

Smart, hard-to-detect attacks

Hyper-realistic phishing: With AI, attackers can analyze social profiles, emails, images, and a victim's language patterns to craft fraudulent messages that are nearly indistinguishable from legitimate ones.

Automated vulnerability scanning: While attackers previously examined systems manually, AI now allows them to analyze thousands of components at scale and identify weak points in record time.

The role of AI-based anomaly detection

Defenders also use AI to detect unusual patterns in network traffic, user behavior, or application and service functioning. Techniques like UEBA (User and Entity Behavior Analytics) analyze user and device behavior, raising alerts for deviations that may indicate a threat or a potential breach.

To ensure security and respond effectively to threats, the quality of the data used to train AI detection tools is critical. Biased or inaccurate data can lead to false positives or negatives, creating a false sense of security.

Recommendations for businesses

AI-integrated security tools: Adopt machine learning and behavioral analysis-based solutions to detect threats in real time.
Continuous employee training: Cyber Security is not just a technological problem, but also an organizational one. Training employees to recognize phishing, practice safe authentication, and use tools correctly reduces attack risks.

Human oversight is essential: Security analysts with strong cybersecurity knowledge and experience interpret AI results, identify false positives and negatives, and make informed decisions. AI is not infallible; the human factor remains essential.

Partner with specialized technology providers: Many companies opt for managed security services (MSSPs) that use AI for incident monitoring and response, benefiting from specialized Cyber Security professionals without in-house investment.

Specific attacks on AI: data poisoning, inference, model theft, and jailbreak

AI is not just a tool for defenders and attackers; it is also a target. In recent months, there has been a significant increase in attacks designed against large language models (LLMs) and AI systems, including:

Data poisoning: Injecting malicious data into AI training datasets to bias or corrupt results. For example, facial recognition AI could be manipulated to misidentify certain individuals.

Jailbreak: Manipulating a model, through crafted inputs or direct changes to its architecture or weights, so that it bypasses its safeguards, fails in its primary task, produces biased outputs, or accepts malicious instructions.

Inference attacks: Cybercriminals deduce sensitive information about internal data from a model’s outputs, particularly concerning systems trained with private data like medical records or financial transactions.

Model theft: Extracting the internal logic (parameters, architecture, weights) of an AI model enables attackers to replicate it without incurring training costs, facilitating precise attacks or compromising defensive systems.

Why these attacks are dangerous

AI is becoming the ‘processor’ of critical infrastructures, from financial systems and industries to traffic and energy control.
A successful attack on AI systems can have significant consequences for both the targeted company and society when critical services are involved.

Defensive measures and model robustness

Encryption and access control for training data: Ensure only authorized personnel can modify datasets and protect them with encryption and strong authentication.

Data integrity validation and cleansing: Techniques to detect and remove anomalous data bolster security during the training process.

Robust machine learning: Develop resilient training methods and models, including adversarial techniques, to prepare AI for enhanced attack resistance.

Security testing for models: Conduct assessments and pentesting specific to AI models to ensure compliance with security specifications and identify vulnerabilities.

MLOps security frameworks: Adopt and implement best practices for security at all stages of the machine learning lifecycle, integrating protective measures across the design, development, deployment, and maintenance phases.

Deepfakes: digital reality surpassing fiction

Experts and governments are concerned about extremely realistic fake content and deepfakes. These techniques are becoming increasingly sophisticated, enabling the creation of videos, audios, and images that are nearly indistinguishable from genuine material. Deepfakes do not only affect public figures or celebrities; any individual or company can become a victim of misinformation, extortion, or fraud campaigns.

Deepfakes' impact on businesses

Misinformation and manipulation: Deepfakes can spread false information that influences public opinion, altering perception and trust in companies.

Extortion and corporate defamation: A fake video showing an executive making compromising statements can affect a company’s stock value, reputation, or lead to client loss.
Fraud and identity theft: Using fake audios and videos, attackers can impersonate executives (CEO fraud) to authorize bank transfers or system changes that may lead to data leaks, financial losses, or even risk business continuity.

Detection and countermeasures

Authenticity verification technology: Tools that analyze metadata, pixel patterns, voice intonation, and other subtle indicators can detect fakes.

Education and awareness: Informing employees, customers, and the public about deepfake risks is key to fostering skepticism toward suspicious content and modifying protocols.

Legal and regulatory frameworks: Governments and international organizations are addressing the malicious creation and dissemination of deepfakes. Collaboration between tech companies and social media platforms, like the CAI (Content Authenticity Initiative), helps identify and authenticate the integrity and origin of such content.

Toward AI-driven Cyber Security: intelligent, effective, and proactive defense

The natural response to the rise of AI-based threats and increasingly sophisticated attacks is to develop equally advanced defenses. The intersection between AI and Cyber Security is intensifying, with AI-driven Cyber Security expanding into more domains, products, and security services. Static solutions are no longer enough; systems capable of continuous learning, anticipating adversarial tactics, and reacting in real time are required.

Smart firewalls and secure AI pipelines

AI-integrated firewalls: These systems analyze inbound and outbound traffic not only for known attack signatures but also to detect subtle patterns indicative of malicious activity. AI enables large-scale anomaly detection, distinguishing between legitimate traffic, noise, and real threats.

Security by design in AI pipelines: The development and deployment of AI models should incorporate automated security testing, vulnerability analysis, threat hunting, and DevSecOps practices.
This ensures defensive AI is reliable, scalable, and quickly adaptable to new threats. Real-time analysis and automated response AI-powered tools can process massive volumes of historical and real-time data, identifying even unknown threats (zero-day attacks) before they cause significant harm. Moreover, responses can be automated: when a system detects suspicious activity, it can isolate network segments, revoke compromised credentials, or block malicious IP addresses without immediate human intervention. Recommendations for adopting AI-driven Cyber Security Rigorous evaluation of solutions: Not all tools labeled as "AI" offer the same level of quality. Functionalities, field testing, references, and levels of technical support must be carefully evaluated before adopting a solution. AI skill development for security teams: Security analysts need to understand how AI solutions work, how to interpret their results, and how to fine-tune them to improve their effectiveness. Continuous monitoring and improvement: AI benefits from an iterative improvement cycle. As a tool gains experience in threat detection, it is essential to monitor, adjust, and enhance its performance to ensure it can tackle future challenges. Regulatory compliance as a pillar of Cyber Security strategy In addition to technical and operational considerations, organizations must account for a regulatory landscape that demands higher transparency and robustness in AI systems. In addition, organizations must account for stronger cyber resilience in essential services and critical infrastructures: The European AI Act: Establishes strict standards for transparency, reliability, and data quality used to train AI models. NIS2 Directive: Expands and deepens cybersecurity obligations for essential service providers and digital infrastructures, covering resilience to incidents and cross-border collaboration. 
National Cybersecurity Strategy (NCS) in the U.S.: Aims to protect critical infrastructures, deter malicious actors, strengthen public-private collaboration, foster innovation, and establish greater accountability in cybersecurity, promoting a proactive and coordinated approach to global threats. These regulatory frameworks, along with the EU’s General Data Protection Regulation (GDPR), promote the integration of security, privacy, and compliance to create more effective defense strategies and a safer, more reliable, and resilient digital ecosystem. ■ Cybersecurity strategies align with companies’ ESG goals by protecting data and systems to ensure sustainable and responsible operations. Implementing robust cyber defenses prevents disruptions affecting energy efficiency or resource management. It strengthens societal trust by protecting data and privacy, and complies with regulatory frameworks that ensure transparency and governance best practices. Conclusion The year 2025 promises to be marked by the exponential growth of complex cyber threats and an increasing reliance on AI-driven systems and defenses. Cybersecurity must therefore be addressed as a strategic priority at all levels: businesses, governments, organizations, and individuals must collaborate to tackle these challenges, address systemic cyber risks, and establish a safer digital ecosystem for all. Cyber resilience is an ongoing process of adaptation, improvement, and innovation. Understanding these trends and the challenges they present is the first step toward strong defenses. Investing in next-generation technologies, continuous training, and developing human talent in Cyber Security, fostering collaboration between public and private entities, promoting clear regulations, and advocating for ethical AI use will be decisive in 2025. Cyber resilience is not a single product or strategy: it is a continuous process of adaptation, improvement, and innovation. 
Companies that take a proactive stance, stay informed of trends, and integrate AI securely and responsibly will be better positioned to face challenges, protect their digital assets and business processes, and safeguard the trust of their customers, users, collaborators, suppliers, and partners. By being aware of these trends and equipped with the right recommendations and solutions, we can face 2025 with increased clarity. We can also be armed with the tools and capabilities needed to defend ourselves in an increasingly advanced and complex world. Success lies in the ability to anticipate, innovate, and respond quickly to changes and cyber threats. — CONTRIBUTED: Juan Campillo, Sergio de los Santos —
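The UEBA-style anomaly detection described in this article, raising alerts when behavior deviates from a user's own baseline, can be sketched in a few lines. The users, metrics, and threshold below are invented for illustration; production UEBA systems use far richer features and models:

```python
from statistics import mean, pstdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag users whose observed activity deviates from their own baseline
    by more than `threshold` standard deviations (a simple z-score)."""
    alerts = []
    for user, history in baseline.items():
        mu, sigma = mean(history), pstdev(history)
        value = observed.get(user, 0)
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            alerts.append(user)
    return alerts

baseline = {
    "alice": [10, 12, 11, 9, 10, 11],  # typical daily file accesses
    "bob":   [5, 4, 6, 5, 5, 4],
}
observed = {"alice": 11, "bob": 250}    # bob suddenly accesses 250 files
print(flag_anomalies(baseline, observed))  # → ['bob']
```

As noted in the text, the quality of the baseline data is decisive: a polluted or biased baseline produces false positives or, worse, a false sense of security.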
January 13, 2025
Telefónica Tech
AI & Data
You can run Generative AI models on your computer: Step-by-step instructions to install LM Studio
Recently David compiled a list of projects that allow us to run LLM models on modest machines, including personal computers. LLM models have transformed how we interact with and utilize AI. Now, these powerful models can be executed on personal computers, enabling tasks like translation, data extraction and insights, text analysis and summarization, or content creation.

Some of the most exciting features of available LLMs include their ability to generate coherent and creative text, perform natural language processing tasks like reviewing and summarizing text, and develop intelligent applications that can understand and respond to users’ needs more efficiently.

Step-by-step installation of LM Studio

LM Studio is an outstanding tool for getting started and running LLM models locally on your computer, even on a relatively modest machine, provided it meets the minimum requirements. LM Studio is available for Mac, Windows, and Linux.

💻 For Mac users, LM Studio requires an Apple Silicon processor (M1/M2/M3/M4). Alternatively, Msty is a comprehensive option for running LLM models on Macs with Intel processors.

This guide focuses on installation on a modest Windows computer, though the process is similar for other operating systems.

Step 1: Check system requirements

Before you start, ensure your computer meets the minimum requirements to run LM Studio. The two most critical requirements are processor and RAM. For a Windows PC, 16 GB of RAM and an AVX2-compatible processor are recommended for LM Studio to function correctly.

Open System Information. Look for Processor and note down its full name. Visit Intel’s ARK website (or the equivalent site for your processor brand, like AMD) and locate your processor using its full name or number. On Intel’s website, check under Advanced Technologies to see if your processor supports AVX2 in the Instruction Set Extensions section. For example, an Intel Core i5-1035G1 processor is compatible because it supports AVX2.
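As a complement to the manual lookup above, on Linux the CPU's feature flags can be read from /proc/cpuinfo; Windows and macOS do not expose that file, so this sketch simply returns None there and you should use the vendor's spec pages instead. The helper below is an assumption-laden convenience, not part of LM Studio:

```python
import platform
from pathlib import Path

def cpu_supports_avx2():
    """Best-effort AVX2 check: on Linux, CPU feature flags (including avx2)
    appear in /proc/cpuinfo; on other platforms the answer is unknown."""
    if platform.system() == "Linux":
        cpuinfo = Path("/proc/cpuinfo").read_text()
        return "avx2" in cpuinfo
    return None  # unknown on this platform; check the vendor's spec pages

print(cpu_supports_avx2())
```

On a compatible Linux machine this prints True; None simply means the check could not be performed programmatically.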
■ For small LLM models or initial testing, 8 GB of RAM might suffice. For medium and large models you'll need 16 GB, 32 GB, or more: the more RAM you have, the better LM Studio will perform.

Step 2: Download LM Studio

1. Use your web browser to visit the LM Studio website.
2. Click on the version you want to download (in this case, Windows).
3. Save the installation file to your computer, such as on the Desktop or in the Downloads folder.

■ If you use a browser like Edge or Chrome, make sure your download settings don't block executable files.

Step 3: Install LM Studio

1. Run the installation file to start the setup.
2. Follow the installer instructions. You can keep the default installation options, including the suggested LLM model (Llama 3.2 1B by Meta), or download and install DeepSeek R1.
3. Once the installation is complete, select the option Start New Chat.

Step 4: First launch of LM Studio

When you open LM Studio for the first time, it welcomes you. By default it uses the model installed during setup. This model is a good starting point because it requires less space and RAM, making it ideal for initial testing. You can interact with the model in your preferred language.

You can also add other available models, such as Granite by IBM, using the search icon (magnifying glass) on the left:

1. Click on the model you want to download.
2. Ensure you've selected the latest version (usually at the top).
3. Once you've selected the correct model, click the Download button.
4. The download will begin and you'll see a progress bar. Wait for it to complete.

Step 5: Run the downloaded model

Once the model is downloaded, return to the Chat tab in the sidebar. If you have more than one model, use the Select a model to chat with dropdown menu to choose one from the downloaded models list. You can now type messages in the text box, and LM Studio will run the model to generate responses.

■ Download additional models in LM Studio to compare results and explore capabilities.
Remember that each model takes up disk space, and some require more processor power or RAM, which may affect performance depending on your computer's specifications.

File uploads and RAG

LLMs are limited to the data they were trained on and cannot incorporate new or private information on their own. The Retrieval Augmented Generation (RAG) feature addresses this by allowing you to upload personal and private documents from your computer, which the model can then reference to provide more precise and relevant answers, customized to the content you provide.

LM Studio accepts up to 5 files at a time, with a combined maximum size of 30 MB. Supported formats include PDF, DOCX, TXT, and CSV.

When asking LM Studio about these documents, be specific and include as many details as possible to help the model retrieve the most relevant information. The LLM will analyze your query and documents to give you a response. Play around with different documents, queries, and prompts.

—Example: If you upload a specific contract or private agreement in PDF format, you can attach it to your query and ask the model for details about its terms.

Developer Mode and advanced options in LM Studio

Developer mode in LM Studio offers more options for configuring your LLM model, such as integrating external applications, optimizing performance, or adjusting response accuracy:

● Local server: Run the model in a server environment for access from multiple devices or integrations.
● Temperature: Controls the randomness of model responses. Lower values generate more consistent and predictable outputs, while higher values introduce variability and 'creativity' (with an increased risk of incoherence).
● Top-K and Top-P: Determine how many options the model considers when generating text, balancing response variability and precision. Top-K produces stricter outputs, while Top-P introduces more flexibility.
● System Prompt: Customize the initial context or instructions given to the LLM, such as setting tone, style, roleplay, or behavior ('You are an expert on [topic]', 'Provide concise answers', or 'Include examples in your responses').

—Example: If you're using the model to summarize text, you can set the temperature to a low value to make the responses more concise and predictable. Or, if you use LM Studio to write professional emails, you can tell it to 'Act as a professional assistant. Make sure your messages are clear, formal, and action-oriented.'

■ Changes to these options may affect performance, for example by optimizing RAM and CPU usage. They also affect the quality of responses, improving relevance and adaptability.

Advantages of running LLMs on your personal computer

● Privacy: Work with personal documents like contracts, notes, or letters without uploading them to external services, protecting sensitive information.
● Autonomy: No need for an internet connection or reliance on external servers, ensuring uninterrupted access.
● Optimization: You can customize the model for specific tasks, such as report automation or content generation related to your field.
● Cost savings: Avoid subscription fees or the limitations of free-tier cloud versions if you don't need extensive computational power.
● Learning: Experimenting with models and configurations deepens your understanding of how to leverage this technology, helping you come up with creative ideas and solve problems.

—Example: If you're organizing emails or personal documents in PDF format, you can use LM Studio with RAG to quickly locate specific messages related to a topic or date, saving time and effort.

o Experiment with other models available in LM Studio, but remember that larger models may require more RAM and processing power.
o Adjust LM Studio settings (such as memory usage) based on your hardware and desired results.
o Keep LM Studio updated to the latest version to benefit from new features and bug fixes.

______
IA & Data
Creative AI in business: how to adapt ChatGPT (and similar) to my customer's needs
August 14, 2024
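As an addendum to the developer-mode options described above: when its local server is enabled, LM Studio exposes an OpenAI-compatible HTTP API, by default at http://localhost:1234. The sketch below shows a minimal chat request using only the standard library; the model name and sampling values are illustrative placeholders for whatever model you have loaded, not fixed values.

```python
import json
import urllib.request

API_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's default local server

def build_chat_request(prompt: str, model: str = "llama-3.2-1b-instruct",
                       temperature: float = 0.2, top_p: float = 0.9) -> dict:
    """Build an OpenAI-style chat payload; a low temperature favors predictable output."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Act as a professional assistant. Be clear, formal, and concise."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "top_p": top_p,
    }

def ask(prompt: str) -> str:
    """Send the payload to the local server and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires LM Studio running with the local server enabled and a model loaded.
    print(ask("Summarize the advantages of running an LLM locally in one sentence."))
```

Because the API follows the OpenAI schema, the same request works with any OpenAI-compatible client library by pointing its base URL at the local server.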
January 8, 2025
The future of the Financial sector: transformation, innovation, and challenges
The Financial sector, including banking and insurance, is undergoing significant transformation driven by digitalization, evolving business models, and changes in consumer habits and needs.

Digitalization has long been a fundamental driver of evolution in the Financial sector, with continuous growth in investment in digital and technological transformation. This includes key areas such as IT services, cybersecurity, business applications, and telecommunications. Next-generation digital technologies are redefining how financial institutions operate and connect with their clients.

Investments in technology

Key areas of technological investment for financial institutions include:

● Cybersecurity: With increasingly sophisticated attacks, information security has become a critical priority.
● Business intelligence and data analytics: The ability to analyze large volumes of data to extract valuable insights.
● Integration technology (APIs): Facilitates omnichannel approaches and interoperability between different systems and platforms.
● Cloud platforms: Provide flexibility and scalability to manage dynamic workloads.
● Application modernization: Upgrades legacy systems to improve efficiency and responsiveness.

■ IT investment as a percentage of total revenue has increased, reflecting technology's growing importance in business strategy. According to Gartner, by 2024, global IT spending per employee is projected to reach $31,300, and 77% of executives believe their organizations will achieve their digital transformation goals.

Innovation and new business models

Technological innovation is driving new business models in the Financial sector. Concepts like open banking and banking as a service (BaaS) are gaining traction, enabling banks to deliver more personalized and efficient services. These technologies allow financial institutions to collaborate more effectively with third parties, integrating external services through APIs and open platforms.
The adoption of emerging technologies like AI and machine learning is revolutionizing decision-making and customer experience. These technologies enable real-time analysis of large data volumes, leading to a better understanding of customer needs and behaviors and allowing financial services to be personalized.

Fintech-as-a-service platforms allow startups and tech companies to offer innovative financial solutions without traditional banking infrastructure. This democratizes access to financial services and fosters competition in the sector.

Mobile and digital payment solutions are also providing new ways for the Financial sector to collect and analyze large amounts of transactional data, enabling more personalized services, streamlined processes, and improved fraud prevention.

🧊 Blockchain technology plays a disruptive role in Financial sector transformation, offering greater transparency and security in transactions. It is driving the creation of revolutionary business models based on smart contracts and asset tokenization.

Cyber Security and automation

In an environment of increasingly complex, persistent, and sophisticated threats, automation has become an essential tool for cybersecurity. It enables financial institutions to efficiently manage their security infrastructures, reducing risks and improving service quality. AI and Cyber Security combine to offer more effective protection against advanced threats, improving detection and incident response.

Business applications

Business applications help companies manage and automate commercial, financial, and operational processes. These applications integrate with existing and legacy systems to increase efficiency and productivity. Their benefits include:

● Operational efficiency: Automating and optimizing processes.
● Data management and analysis: Extracting valuable insights for decision-making.
● Customer personalization: Delivering tailored services and experiences that enhance loyalty.
● Integration and collaboration: Improving coordination and communication.
● Agility and adaptability: Enabling quick responses to market changes and client needs.

Hyperautomation

The combination of AI and automation (hyperautomation) for managing complex processes brings significant advantages to the Financial sector, including:

● Error reduction: Minimizing the risk of human error.
● Increased efficiency: Efficient management of data and processes.
● Enhanced decision-making: Real-time analytics enable data-driven decisions.
● Scalability: Flexible adaptation to market demands.

Future Workplace for Financial sector employees

The concept of the future workplace is revolutionizing the working environment in the Financial sector, offering increased flexibility, enabling a healthier work-life balance, and aligning professional talent with business goals. The implementation of unified communication and collaboration (UC&C) tools and smart workplaces facilitates remote work, providing access to information and tools that enhance employee productivity and satisfaction.

AI integration in the workplace is transforming daily tasks. In the Financial sector, AI not only automates repetitive tasks but also provides predictive analytics, improving strategic decision-making and offering immediate access to dispersed information. This enhances productivity, customer service, and the quality of workplace interactions. These tools enable employees to work in an AI-augmented manner, focusing on high-value activities like innovation and developing new products and services, which contributes to talent attraction and retention.

■ Bank branches have also been digitalized as part of the sector's transformation.

— Implementing advanced technological systems in branches enhances customer experience and optimizes internal management.
— Solutions include dynamic marketing systems, shift managers, and behavioral analytics applications that increase efficiency and reduce costs. By using these tools, companies can improve their efficiency and competitiveness while overcoming organizational and technical challenges.

Challenges in the Financial sector

The Financial sector faces challenges that drive the need for transformation and modernization. The most significant include:

● Complex environments: Managing diverse access systems and complex infrastructure administration requires integrated solutions to streamline operations and improve efficiency.
● Sophisticated cyberattacks: Increasingly advanced attacks demand robust defenses and cutting-edge solutions to protect sensitive information and digital assets.
● Cost optimization: Institutions continually seek ways to reduce costs without compromising service quality. Automation and process digitalization are key strategies.
● Evolution of financial services and competition: Collaborative business models and fintech-as-a-service platforms are revolutionizing financial service delivery. Integration and interoperability are vital to competitiveness.
● Digital transformation: Modernizing systems and software is essential for enhancing customer experience and operational efficiency.
● Talent acquisition and retention: Strategies like flexible work policies and AI-augmented environments are key to maintaining a committed workforce.

Case study: Virgin Money

Our client Virgin Money, a UK-based banking and financial services provider, partnered with Telefónica Tech to digitize and streamline its client onboarding process. This included integrating various systems, automating decision-making, and significantly improving the customer experience. Results include:

● A 580% increase in client acquisition, with conversion rates rising from 5% to 34%.
● Application processing times reduced from days and weeks to an average of 10–25 minutes, enhancing customer satisfaction.
● Automation and fewer manual processes increase operational efficiency, saving time and resources.

✅ Transformations like this help companies compete more effectively and position themselves as leaders in innovation and digital services.

Conclusion

The Financial sector, including banking and insurance, is undergoing significant transformation driven by technological innovation and changing consumer expectations. Adopting new technologies enables financial institutions to enhance operational efficiency, strengthen customer relationships, and develop creative business models that are reshaping the global financial landscape.

Investments in Cyber Security, data analytics, Cloud platforms, and application modernization are advancing the Financial sector toward a more efficient, inclusive, secure, and innovative future. Organizations that embrace these technologies and approaches will be better prepared to address challenges and seize new business opportunities.
January 2, 2025
Intelligent Workplace
Unified communication and collaboration in 2025: Trends, challenges, and opportunities
Unified Communication and Collaboration (UC&C) has become a fundamental pillar of organizational competitiveness and efficiency. Digitalization and hybrid work trends are shaping the next era of work, which companies must address with innovative strategies and next-generation technology. This trend offers numerous advantages but also presents significant challenges that businesses must overcome to ensure a smooth and effective transition.

As we approach 2025, organizations will face a landscape filled with both challenges and opportunities in adopting cutting-edge technologies that promise to transform how their workforce enhances capabilities, communicates, and collaborates. Telefónica Tech's experts have provided a comprehensive overview of the emerging trends in communications convergence, the challenges companies will have to overcome, and the opportunities that lie ahead.

Trends in 2025

The main emerging trends shaping the future of unified communication and collaboration, which companies can leverage to transform their operations, include:

● Unified Communication as a Service (UCaaS): This will continue to gain traction as an integrated solution for businesses, combining various communication and collaboration tools (voice, video, instant messaging, email, etc.) into a single platform accessible from any device.
⇾ This trend improves productivity and provides a more cohesive and streamlined user experience.

● Artificial Intelligence (AI) and automation: AI will expand significantly within corporate communications: virtual assistants, real-time translations, chatbots, and predictive analytics tools will enable more efficient, inclusive, and personalized corporate communication, customer service, and decision-making.
⇾ AI will also streamline workflow management and problem-solving, reducing manual workloads and optimizing internal processes.
● Security and Cyber Security: Corporate communications security is becoming increasingly critical, prompting companies to invest in advanced Cyber Security solutions to protect their data and communications against increasingly sophisticated threats.
⇾ Multifactor authentication, end-to-end encryption, and AI-driven threat detection will be key to safeguarding information integrity and confidentiality.

Challenges for businesses

While emerging trends present numerous opportunities to transform corporate communications, they also bring challenges that organizations must address to successfully implement these new technologies. Some of the key challenges include:

● System interoperability: Ensuring interoperability between different communication systems and platforms is a major challenge. As companies adopt these new technologies, they also need them to work in conjunction with existing infrastructures.
⇾ Interoperability deficiencies can lead to isolated information silos, inefficiencies, and fragmented user experiences.

● Change management: New technologies always require a shift in corporate culture and processes. Effective change management is essential for the success of communication convergence.
⇾ Companies must invest in training and development to ensure employees are equipped to use and maximize the potential of new tools and platforms.

● Implementation costs: The costs associated with implementing new communication technologies can be significant, so companies need to carefully evaluate return on investment (ROI) and develop strategies to minimize costs while maximizing benefits.
⇾ Strategies like adopting pay-as-you-go models and partnering with reliable and recognized technology integrators are key to efficient and sustainable adoption.

● Data privacy: Managing and protecting data privacy is an ongoing challenge. Companies must comply with increasingly strict privacy regulations and ensure their data handling practices are transparent and secure.
⇾ The trust of customers, employees, and suppliers depends heavily on how businesses manage and safeguard sensitive information.

Opportunities in 2025

Despite these challenges, communication convergence also opens a wide range of opportunities with significant impact on various aspects of business operations:

● Innovation and differentiation: Communication convergence offers businesses the opportunity to innovate and stand out in the market. Organizations that effectively adopt these technologies can deliver more advanced products and services, enhance customer experiences, and gain a competitive edge.

● Productivity enhancement: New communication technologies have the potential to significantly improve productivity. System integration, process automation, and access to real-time information enable employees to work more efficiently and collaboratively. Additionally, the flexibility to work from anywhere and at any time fosters a more dynamic and adaptable work environment.

● Global expansion: Communication convergence facilitates global business expansion. With more advanced and accessible communication tools, organizations can establish and maintain connections with clients and suppliers worldwide more efficiently and inclusively. This opens up new market opportunities and allows companies to operate internationally more easily.

Conclusion

Communications convergence within businesses is a rapidly evolving trend that promises to transform how organizations operate and communicate. By 2025, companies will need to address challenges ranging from system interoperability to change management and data privacy. However, the opportunities are equally significant, with potential for innovation, productivity improvement, and global expansion. Organizations that adopt these technologies strategically and proactively will be well positioned to reap the benefits of communication convergence in an increasingly competitive, globalized, and fast-changing labor and business market.
■ At Telefónica Tech, we offer advanced and tailored solutions for unified communication and collaboration (UC&C). Our innovative approach enhances operational efficiency and team collaboration with strategies adapted to each client's specific needs.

______
Cloud IA & Data Future Workplace
Evolution of communications: how companies adapt to boost efficiency and productivity
November 18, 2024
December 31, 2024
Cyber Security
Protect yourself online: data you should never share
Sharing information on social networks and other online platforms has become a common and natural practice for many of us. We frequently post photos, status updates, or details about our daily lives on social media, enabling us to connect with friends and family, share special moments, and discover creative ideas.

However, this habit can put our privacy and security at risk if our personal data falls into the hands of cybercriminals, who can use it to commit fraud, identity theft, and other crimes. Being aware of the dangers and taking steps to protect our data therefore allows us to enjoy technology without jeopardizing our privacy and reputation.

We have compiled a list of personal data that you should never share online, along with the reasons why and our recommendations for keeping it protected. Understanding the importance of each will help you be better prepared to face risks.

Email address and phone number

Although we use our email and phone number to register on online platforms, sharing them openly can expose us to risks such as spam, phishing (deceptive and fraudulent emails), and other social engineering-based attacks. Cybercriminals can use this information to access personal accounts or impersonate us.

Use secondary email accounts for registrations and avoid publicly sharing your phone number, whether landline or mobile.

Postal address and geographical location

Sharing your real-time location or your home address can put you in danger, especially if someone with malicious intentions knows where you live or when you're not at home, for example because of an "out of office" message.

Disable geolocation on social networks and avoid posting information that could reveal patterns in your daily routine.

Photos of minors

Posting photos of minors, whether your own children or relatives, carries serious risks. These images can end up in the wrong hands or be used without your consent.
Think carefully before sharing images of minors on social networks, and check how such images are used in educational centers. If you do decide to share them, make sure your privacy settings are strict.

Compromising photos

Sexting, or sharing intimate personal images, can be dangerous. These images could be used for crimes such as sextortion or cyberbullying.

Never share intimate photos, not even with trusted individuals. Once published, you lose control of them.

Personal documents

Your ID card, employment contracts, and banking information are clear targets for cybercriminals. With this data, they can commit identity theft or financial fraud.

Protect this information and avoid uploading it to the cloud without encryption.

Opinions, complaints, and compromising comments

The internet is a space for self-expression, but a single ill-considered comment can have negative repercussions. You could create conflicts, damage your reputation, or even become a victim of online harassment.

Think carefully before posting. A message can be misinterpreted and have unexpected consequences.

Private conversations

Private conversations contain sensitive information that should never be exposed. If you share them, you could jeopardize both your own and others' privacy.

Use secure messaging apps and avoid sending sensitive personal data through chats.

Secure internet and social media access

● Configure your social networks' privacy settings: Limit who can view your posts and access your profile.
● Use secure and unique passwords: Change your passwords regularly and avoid reusing them.
● Be wary of suspicious links: Always verify the source of emails and messages before clicking on a link.
● Consult cybersecurity experts: If in doubt, turn to trusted resources such as INCIBE and the OSI.

Conclusion

The internet has transformed the way we communicate, work, and enjoy our leisure time. However, it has also introduced challenges that require our attention and responsibility.
From compromising photos to personal documents and opinions, every piece of data we share can have serious repercussions if we don't take the proper precautions. Being aware of the dangers of exposing our information, ensuring the safety of our personal data, and avoiding sharing details that could put us at risk is therefore essential to enjoying a safe digital environment.

______
Cyber Security IA & Data
Things you shouldn't share with ChatGPT
July 4, 2023
December 30, 2024
AI & Data
Make AI more creative so you can come up with better ideas
Innovation and creativity are essential for solving problems and developing new products and services, but generating fresh ideas can be challenging. Tools based on LLM models like ChatGPT or Copilot are helpful in this task and have proven effective at generating structured ideas. However, they have certain limitations in producing truly diverse ideas and often generate overly similar proposals when not properly guided.

These limitations arise because LLMs optimize their responses based on the most statistically probable patterns in their training data, often failing to explore a wide and varied set of solutions to generate unconventional ideas.

To address this limitation, a group of researchers compared the ideas generated by an LLM (GPT-4) with those proposed by a group of students. The results show that while students naturally produce more varied ideas, LLMs can achieve a similar or even greater level of idea diversity when a well-designed prompt structure is employed. Prompting techniques allow for the exploration of more creative and innovative solutions, generating less conventional ideas.

Prompting techniques to increase idea diversity

The researchers evaluated a series of strategies designed to maximize LLM models' creative potential:

Minimal prompt

This approach provides basic instructions, such as 'Generate a list of ideas.' While simple and quick to implement, it tends to produce homogeneous results due to the lack of specific guidance: models fall back on common patterns without exploring unusual or complex ideas.

Simulated personas

This method involves instructing the model to adopt different perspectives, personalities, roles, or thinking styles, such as 'Think like a visionary scientist' or 'Imagine being an eccentric artist.' For example, emulating figures like Nikola Tesla, Joan Miró, or Ada Lovelace can lead to significantly varied creative results.
In a business context, this technique can be applied to imagining solutions from multidisciplinary perspectives.

Established methodological frameworks

Incorporating creative techniques like Stanford's Design Thinking into prompts provides a structured framework to guide the creative process. This improves diversity and ensures the coherence and feasibility of the proposed ideas by leveraging a widely recognized methodological approach to creativity.

Chain of thought (CoT)

The CoT approach breaks the main task into smaller, sequential subtasks, such as initial idea generation, idea refinement, and detailed description. From an algorithmic perspective, this segmentation allows the model to focus on each specific stage of the problem, with the possibility of integrating intermediate evaluations that continuously improve the results. This method increases thematic dispersion and the number of unique categories, fostering greater diversity and reducing generic patterns, proving highly effective at generating new ideas.

■ Applied to other fields, such as education, the CoT technique can first generate learning content, then refine it to adapt to different age groups or specific needs (such as children or older adults learning digital skills), and finally detail the activities, materials, or pedagogical approaches suitable for each group. The CoT prompting technique can be adapted to various industries and needs, enabling businesses to explore valuable and novel solutions.

Practical example of applying the CoT technique

As a practical example of applying the CoT strategy in a real-world context, the researchers used the task of creating a product costing less than €50 and aimed at university students:

First stage: initial generation

Prompt:
> Generate 50 ideas for physical products aimed at university students that can be sold for less than €50.

Example response: Thermal mug.
→ This step leverages the model's ability to provide quick and general responses.
Second stage: idea refinement

Prompt:
> Make these ideas bolder and more creative.

Refined result: Rechargeable battery mug that cools or heats beverages to the user's preference.
→ This step adds a distinctive element, ensuring unique or uncommon ideas.

Third stage: detailed description

Prompt:
> Describe each idea in a 40-80 word paragraph.

Final example: Smart thermal mug with an integrated battery that adjusts the temperature to keep coffee hot or cool drinks during summer.
→ This step helps concretize and contextualize the idea, exploring its practical applications.

Used appropriately, these methods allow LLMs to match or even exceed human idea-generation capabilities.

Analysis results

The study concludes that the CoT technique (as applied to GPT-4 in this case) is particularly effective due to its ability to break down complex tasks into manageable subtasks, enabling models to explore a broader spectrum of solutions at each stage. Compared to other techniques, CoT takes advantage of sequential processes that facilitate the progressive refinement of ideas, increasing the likelihood of generating uncommon responses through iterative evaluations and restructuring based on feedback provided by the model. Additionally, combining multiple prompting techniques amplifies the range of solutions generated.

Conclusion

The strategic use of prompting techniques in LLM models transforms their capacity to generate ideas, enabling exploration of broader and more diversified creative territories across a growing number of business, production, and academic domains. These methodologies enhance performance in specific tasks, and the researchers conclude that, when properly utilized, AI can match or even surpass human capabilities in idea generation through the design of elaborate and detailed prompts that guide the model toward novel results. The key lies in designing clear and structured prompts that steer the model toward less frequent outcomes.
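The three-stage chain from the practical example above can be sketched as a small pipeline in which each stage's output feeds the next prompt. Here `call_llm` is a hypothetical stub standing in for any chat-completion call (for example, a local or hosted model of your choice):

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call.
    Replace the body with a request to your model of choice."""
    return f"[model output for: {prompt[:40]}...]"

def chain_of_thought_ideas(audience: str, price: str, n: int = 50) -> str:
    # Stage 1: broad initial generation
    ideas = call_llm(f"Generate {n} ideas for physical products aimed at "
                     f"{audience} that can be sold for less than {price}.")
    # Stage 2: refinement toward bolder, less conventional ideas
    refined = call_llm(f"Make these ideas bolder and more creative:\n{ideas}")
    # Stage 3: concrete, contextualized descriptions
    return call_llm(f"Describe each idea in a 40-80 word paragraph:\n{refined}")

result = chain_of_thought_ideas("university students", "€50")
```

Because each stage is a separate call, intermediate evaluations (for example, filtering or re-ranking the ideas between stages) can be slotted in without changing the overall structure.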
This approach is not limited to creating products for university students. It can be adapted to various industries and needs, enabling companies to explore a vast territory of potentially valuable and innovative solutions.

■ For further reading, see the study Prompting Diverse Ideas: Increasing AI Idea Variance by Meincke, Mollick, and Terwiesch.

______
IA & Data
Are LLMs transforming the future? Will they overtake us? Key methods to measure their power (Part 1)
October 8, 2024
December 24, 2024
Cyber Security
Your children’s toys are now connected
Smartwatches that track children’s locations, drones with cameras, robots that teach programming, or connected scooters that parents can monitor via an app are recent examples of how technology has become an integral part of toy stores. The digital age has transformed toys, turning them into tools that, beyond entertaining, foster technological skills from an early age. Connected toy brands acknowledge parents' growing need to protect their children while providing safe educational and entertainment tools. To meet this demand, they have developed products that combine technology, functionality, and security. In this way, they promote learning through innovative play experiences that complement traditional toys and provide peace of mind for parents. However, before connecting any toy to the internet, it is essential to understand its functions, configure it securely, review privacy policies, and plan to supervise its use. When connecting toys to the Internet, it is imperative to configure them safely, monitor their use, and review privacy policies. The educational role of connected toys Connected toys play a fundamental role in children's digital literacy. These devices allow kids to develop technological skills naturally and playfully. For example, children’s tablets, programmable robots, or interactive dolls with features like facial or voice recognition help familiarize children with technology from an early age. Moreover, they encourage specific skills such as programming, logical thinking, creativity, and teamwork. Programmable robots and devices with educational software offer a safe environment for children to explore technological concepts creatively. They also reinforce values such as problem-solving and collaboration. The importance of safety in smart toys The design, software complexity, and usability of toys are crucial to capturing the attention of children who are increasingly tech-savvy at younger ages. 
Making toys safe is essential to providing parents with peace of mind, as they recognize children's vulnerability in the digital environment. According to a recent study by INCIBE (Spain’s National Cybersecurity Institute) on Cyber Security in connected toys, it is essential that these devices, aimed at children, feature robust security mechanisms. The study analyzed 26 smart toys with capabilities such as recording video or audio, Bluetooth or Wi-Fi connectivity, and mobile app integration. Cyber Security in connected toys is crucial because children use them and face a wide variety of potential threats. The results indicate that 58% of the tests yielded favorable results, while six toys failed to meet minimum Cyber Security requirements, posing risks to end users. Additionally, 23% of the tests provided inconclusive results due to a lack of information from manufacturers, such as technical configurations or applied policies. The study also found that 65% of the analyzed toys require a mobile app to function. This dependency increases technological interaction and also involves additional risks related to the handling of sensitive data, such as images, audio, usage habits, videos, or unauthorized access to personal data if the apps are not adequately protected. Improper handling of permissions in apps could allow sensitive information to be collected without parents' knowledge or consent. Many of these devices handle data in the cloud, which, if not accompanied by proper encryption mechanisms, can heighten exposure risk. ⚠️ Only 40% of toys offer some form of parental control, leaving many with significant security and trust gaps that rely solely on manual supervision or external tools. Recommendations for connected toy safety To ensure the safety of connected toys, INCIBE offers several recommendations for both users and manufacturers: Configure the toy securely by changing default passwords and using trusted Wi-Fi networks.
Supervise toy usage and utilize parental control tools. Review privacy policies to ensure personal data is handled appropriately. Keep toys updated and turn them off when not in use. Purchase toys with “safe for children” Cyber Security certifications and read reviews from others. Additionally, the study highlights that most of the devices analyzed present an acceptable level of security. However, critical weaknesses persist, such as insecure default configurations, outdated firmware, and no encryption for data transmission. → These vulnerabilities can compromise children's privacy and security, making it crucial to stay informed about updates and security patches that address them. Users can subscribe to newsletters from manufacturers or specialized Cyber Security sources. It was also noted that some toys present authentication issues, such as default passwords (e.g., “admin”) or insufficient mechanisms to protect against unauthorized access. → It is always recommended to change default device passwords to more secure ones and activate all available security features to safeguard against unauthorized access. For instance, a secure password should include at least 12 characters combining uppercase and lowercase letters, numbers, and symbols. An example could be: Connect3d!T@y. On the other hand, the study urges manufacturers to implement measures such as disabling unnecessary services or protocols, improving default security settings, and ensuring transparency in data handling, aligning with regulations such as the Cyber Resilience Act (CRA). Recommendations for manufacturers include incorporating advanced parental control features and robust encryption in all data communications. ⚠️ INCIBE advises families to consider toys' functionality and privacy impact, ensuring compliance with regulations. It also recommends carefully evaluating the risks associated with connectivity and choosing products that limit data exposure to what is strictly necessary.
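The password rule quoted above (at least 12 characters combining uppercase and lowercase letters, numbers, and symbols) can be checked mechanically. A minimal sketch of that rule in Python (the function name is illustrative, not from the INCIBE study):

```python
import string

def meets_password_rule(pw: str) -> bool:
    """Check the 12+ characters / uppercase / lowercase / digit / symbol rule."""
    return (
        len(pw) >= 12
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
    )

meets_password_rule("Connect3d!T@y")  # → True (the example above)
meets_password_rule("admin")          # → False (a default password)
```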
Conclusion Connected toys offer an exceptional opportunity to foster digital literacy and technological skills from an early age, enabling mastery of technological tools and preparing children for increasingly digital work environments. However, it is imperative that these devices are used safely and responsibly to protect children’s privacy and security. Cybersecurity, education, and entertainment equip children for safe technology use. Ultimately, safety must not be negotiable. Manufacturers have a responsibility to ensure their products are secure, while parents and caregivers must stay informed and vigilant. This way, these toys can become powerful tools for learning and creativity when used with the proper precautions. Cyber Security Technology and the young: how to turn dangers into opportunities March 27, 2024
December 23, 2024
Cyber Security
Cloud
Connectivity & IoT
AI & Data
Healthcare's digital transformation: challenges, needs, and benefits
Context and challenges in the Healthcare sector Healthcare faces significant challenges. Among the most prominent are population growth and aging, phenomena that increase the demand for health services and place a heavy burden on existing systems. Persistent issues such as overcapacity and unequal access to healthcare services exacerbate the situation. This population growth and aging, coupled with rising life expectancy, accelerate the need for a transformation toward a more resilient, agile, efficient, and sustainable healthcare model. New lifestyle habits and increased awareness of physical and mental health require better and more efficient access to health systems and basic care. Consequently, the healthcare sector must address the challenge of optimizing available resources to enhance care delivery efficiency. This involves adopting a more preventive approach to medicine, shifting away from the current curative model in order to alleviate budgetary pressures and improve patients’ quality of life. The healthcare sector faces the challenge of optimizing its material, financial, and human resources to improve the efficiency of care services. These transformations necessitate the adoption of digital technologies such as IoT, AI, Big Data, and Cloud, which not only enhance the quality of care but also contribute to the sustainability of healthcare systems by optimizing processes and economic, human, and healthcare resources. Needs and trends in the Healthcare sector To address these challenges, it is essential to reorient the focus toward prevention, self-care, and continuous improvement in treatments. This requires ensuring equitable access to healthcare services, regardless of patients' geographical location or socioeconomic status. Resource optimization and accessibility are key components of this transformation, as are healthcare data protection and security.
Technologies involved in healthcare's digital transformation Digitalization of the healthcare sector involves leveraging various advanced technologies: Cybersecurity is essential for protecting infrastructure, sensitive patient information and ensuring medical data confidentiality. Connectivity and IoT Devices enable real-time remote monitoring of patients, improving care and reducing hospitalizations. Cloud and Edge Computing facilitate secure storage and access to large volumes of medical data, enabling more efficient collaboration among healthcare professionals. AI and Data Analytics enhance diagnostics, enable personalized treatments, and optimize resource management. These technologies improve care quality and contribute to healthcare systems' sustainability by streamlining processes and optimizing resources. Benefits of digitalizing the Healthcare sector Digital healthcare offers numerous benefits, ranging from more accurate diagnoses to better medical resource management. Key benefits include: More accurate diagnoses and better decision-making: Electronic medical records enhance communication between professionals and patients, reduce errors, and streamline administrative tasks. Remote monitoring with IoT: Enables constant patient monitoring, improving well-being and reducing unnecessary hospitalizations. Teleconsultation: Facilitates remote consultations, saving time and resources for patients and doctors. AI in Primary Care: Automates the evaluation and resolution of inquiries, reducing waiting times and travel for both doctors and patients. Personalized treatment: AI and Big Data enable treatment customization tailored to individual patient characteristics. Disease prediction and early diagnosis: Technology helps detect diseases before they manifest clinically, improving treatment success rates. Digital rehabilitation: Tele-rehabilitation combines at-home and clinic-based therapy, optimizing rehabilitation programs. 
Tele-surgery: Remote surgery using robotics and advanced connectivity such as 5G. Better resource management: Digitalization optimizes operating rooms and other hospital resources, increasing efficiency and reducing costs. At Telefónica Tech, with our extensive experience and expertise in eHealth, we are well-positioned to lead this digital transformation, providing comprehensive solutions that encompass infrastructure, security, and data management to drive a more agile, efficient, and patient-centered healthcare model. Use cases: How technology transforms Healthcare The integration of these digital technologies enhances healthcare efficiency and quality, while improving accessibility and inclusion through applications such as: Appointment scheduling: Enables patients to request and manage medical appointments efficiently and hassle-free. Data integrity: Ensures patient information is accurate and up-to-date, guaranteeing better medical care. Improved healthcare services: Enhancing accessibility and quality through innovative technological solutions. Precision medicine: Applying AI and data to genetic analysis enables the creation of specific, personalized treatments for each patient, improving effectiveness and reducing side effects. AI for employment: Optimizes job search and matches opportunities in the healthcare sector. Resource monitoring and inventory: Tracks and manages available medical resources efficiently, optimizing their use. Image-based diagnosis: Enhancing medical imaging technologies to deliver faster, more accurate diagnoses. Mobile apps: Facilitate interaction between patients and healthcare professionals, improving user experience. Cybersecurity: Implement robust security measures in healthcare environments to protect medical information and infrastructure from cyberattacks and unauthorized access. Data protection: Guarantees patient privacy and medical records integrity in digital environments.
Unified medical records: Centralizes all patient medical information on a secure platform, providing a complete and updated view of medical history and treatments. Success Story: Oxford University Hospitals We partnered with Oxford University Hospitals to address the rising number of patients by optimizing medical staff time and maximizing resources. By implementing the ServiceNow platform along with Cybersecurity and Cloud solutions, our client achieved efficient and self-service IT support, as well as a comprehensive view of critical projects. This implementation improved operational efficiency, allowing medical staff to focus more on patient care and thereby enhancing healthcare quality. In the first three months, clinical staff autonomously registered 87% of 11,000 hospitalizations. Conclusion Digitalization of the healthcare sector has the potential to revolutionize the way diseases are diagnosed and treated, while also optimizing the management of medical and material resources and improving healthcare systems' accessibility and efficiency. Tele-surgery, digitalization, and precision medicine are just a few examples of this transformative potential. These advanced solutions optimize resources and reduce overall costs while placing patients at the center of the healthcare system with more personalized and effective care. At Telefónica Tech, with our knowledge and expertise, we position ourselves as the technological partner to accompany companies and organizations in the healthcare sector on this journey toward a more technological and human healthcare future, demonstrating that digitalization is key to building a more inclusive, efficient, and secure healthcare system. Cyber Security Cloud Connectivity & IoT AI & Data Smart Data Path, the road to Industry 4.0 December 4, 2024
December 18, 2024
AI & Data
Master Generative AI in 2025 with three must-have skills
Generative Artificial Intelligence (AI) is transforming how we work, learn, solve problems, and execute countless tasks. Unlike traditional AI technologies, designed to analyze existing data, Generative AI can create new content and simulate human capabilities based on learned patterns: writing articles, composing music, generating images, or programming software. But what does this mean for you? If you’re a graphic designer, Generative AI allows you to verbally describe the logo you have in mind, and the AI will present several options. If you’re a data analyst, you can ask it to summarize reports and highlight key insights in seconds. However, to fully leverage Generative AI, you need to develop what are called "fusion skills." As explained in a recent Harvard Business Review podcast, fusion skills are ways of interacting with large language models (LLMs) that enable you to work hand-in-hand with AI for better outcomes. Three essential fusion skills include: 1. Intelligent Interrogation: The art of asking the right questions Smart querying is like learning another language—in this case, the language of AI. It involves knowing how to naturally ask AI for what you need to accomplish or solve. In a way, it’s about using language as a new form of programming. This enables you to perform complex tasks with AI’s help without being a data scientist. It also helps uncover tacit knowledge—that is, things you already know but cannot articulate or codify. Techniques to improve your queries include: Breaking tasks into steps: Divide a complex task into smaller, more manageable steps. Example: If you’re using AI to help write a report, you can start by asking for a general outline and then request specific details for each section of the outline. Using prompts to explain processes: Ask for detailed, step-by-step explanations, which is especially useful for understanding and verifying AI outputs. 
Example: Instead of saying, “Solve this equation,” you could ask: Explain step-by-step how to solve the quadratic equation (ax² + bx + c = 0), providing the root values and detailing each stage of the calculation. You will get more useful and educational answers if you ask this way.

Writing specific instructions rather than open-ended questions: Be concrete and direct.

Example: Instead of asking, “How does solar energy work?” try: Explain, at a 12th-grade level, how solar panels convert sunlight into electricity and describe the main components involved in the process.

Structuring queries clearly: Use clear and logical formatting, such as triple quotes, headings, or XML tags, to provide additional information like examples or context.

Example: When researching a topic, structure the query with subtitles like #Introduction, #Development, #Examples, and #Conclusion to receive more organized responses.

■ By developing and practicing these techniques, you can significantly enhance your interaction with Generative AI models, achieving more precise and useful outcomes. These techniques improve reasoning outputs, expand your cognitive abilities, and help you make better use of AI's capabilities and potential.

2. Judgment integration: When humans and AI agree

Judgment integration refers to the collaboration between AI-generated analysis and your critical evaluation as a human. Essentially, it’s a hybrid approach that maximizes both AI's strengths and your own. Your judgment is crucial in complex, specialized contexts where AI may produce valuable insights but requires your interpretation, validation, or adjustment. This ensures decisions based on AI outputs are not only technically correct but also culturally appropriate and ethically sound. Strategies include:

Planned human intervention: Define specific points where you or someone else will review and validate AI-generated results to capitalize on the strengths of both.
Training and upskilling: Learn to interpret and use AI tools to enhance your judgment and decision-making.

Continuous evaluation: Provide feedback to AI models to refine and improve the system (and your judgment) over time.

Example: In medicine, AI can analyze vast amounts of clinical data to identify patterns and suggest diagnoses. However, as a physician, you must evaluate these results based on your knowledge and experience before making a final treatment decision.

⚠️ Judgment integration also involves avoiding sharing private data with public or third-party AI models, whether proprietary or open-source. This protects your privacy and the security of sensitive information, minimizing exposure risks. AI implementation must include robust cybersecurity measures to safeguard sensitive information and ensure data integrity.

With knowledge of different Generative AI models (Claude in the image), you can choose the best one for your needs.

3. Reciprocal apprenticing: You teach, AI learns, and vice-versa

Reciprocal learning is a methodology where you and a Generative AI system collaborate continuously to improve each other’s capabilities. This interaction not only enriches the AI’s abilities but also enhances your own, creating a continuous cycle of learning and improvement. Key aspects include:

Continuous interaction: Your relationship with AI is an ongoing process of evaluation and adjustment, enhancing AI's accuracy and relevance.

Critical context: You must provide knowledge and specific context that AI may lack—for example, your expectations, goals, or industry-specific nuances—to improve AI-generated results.

Mutual training: While AI learns from your inputs and corrections, you learn how to interact with it, understand its responses, and optimize its outputs.

How to practice reciprocal apprenticing:

Provide AI with specific information and context for queries to ensure more relevant and useful results.
Inform AI when it meets or falls short of your expectations, correct errors, and provide feedback to help refine its algorithms and capabilities. Continuously practice using AI, even with hypothetical scenarios, to gain a deeper understanding of its functionality, strengths, and limitations. Example: Imagine you’re a teacher using AI to create lesson plans. You teach the AI about your students and their specific needs (providing context and anonymizing individual cases). The AI then suggests new ways to present the same lessons, tailored to each student’s learning style. ■ By fostering efficient collaboration with AI you can boost your productivity, enhance AI’s personalization capabilities, and encourage continuous mutual learning and improvement. The human perspective combined with the analytical capabilities of AI can result in innovative and creative solutions. The importance of developing these skills Generative AI is an opportunity to enhance our abilities. Surveys show that the vast majority of workers recognize the potential value of working with these systems and are eager to learn new skills to harness their benefits. By developing specific skills for collaboration with Generative AI, you can maximize its potential. Given the rapid pace of innovation and technological evolution in the AI field, continuously developing collaboration-specific skills will enable you to make the most of AI’s capabilities and benefit from its possibilities. Learning these skills is not something to approach sporadically but consistently, internalizing them for long-term application. Conclusion Generative AI, accessible through services like Microsoft Copilot 365, ChatGPT, or Perplexity, among many others, has the potential to transform how we learn and work. Developing fusion skills makes us more productive, creative, and prepared for both the present and the future. 
As we’ve seen, to harness its potential, it’s essential to get used to asking AI the right questions, integrating our judgment, and participating in reciprocal and continuous learning. These fusion skills also require adopting a growth mindset to collaborate with Generative AI efficiently, productively, and innovatively. Practicing these fusion skills with Generative AI boosts your productivity and leads to better results. ■ Listen to the Harvard Business Review podcast The AI Skills You Should Be Building Now for further insights. Telefónica Tech AI & Data Improving LLMs performance using the 'Method Actor' technique December 10, 2024
December 16, 2024
Telefónica Tech
AI & Data
Improving LLMs performance using the 'Method Actor' technique
In the world of Generative AI, LLM models like Copilot, Perplexity, or ChatGPT have proven to be capable and highly useful tools for a wide variety of tasks. However, when it comes to more complex tasks, these models often fall short. This is because they lack the ability to effectively contextualize and break down problems into manageable parts, similar to what a person would do when facing the same task. This is where the 'Method Actor' approach comes into play—a technique that, according to a recent study, significantly improves LLMs' performance in tasks that require more complex processes. What is the 'Method Actor' approach applied to LLMs The 'method actor' approach is a mental model for guiding prompt engineering and architecture for LLMs. This approach considers LLMs as actors, prompts as scripts, and responses as performances. The central idea is that, just as method actors in film or theater immerse themselves deeply in the character they portray to achieve a more authentic and convincing performance, LLMs can 'get into the role' assigned in the prompt to produce more accurate and coherent responses. ⚠️ Assigning a role (roleplay) to an LLM model is good practice for maintaining effective conversations with Generative AI models. It can also be used to execute attacks on LLMs and bypass security restrictions or induce the model to provide restricted information, as we saw in Attacks to Artificial Intelligence (I): Jailbreak. Key principles of the 'Method Actor' approach Prompt engineering is like writing scripts and directing: Prompts should provide context, motivation, and 'stage direction' to guide the LLM. Performance requires preparation: Just as an actor prepares for a role, the LLM needs 'preparation' in the form of additional information and intermediate steps. Decomposition of complex tasks: Tasks should be divided into simpler subtasks where imitation and authenticity produce equivalent results. 
Compensation for limitations: When imitation fails, methods that do not depend on LLMs should be used to compensate.

Examples

Solving a complex mathematical problem

Simple prompt: > Solve the differential equation: dy/dx = 3x² + 2x – 5

'Method actor' prompt: > You are a mathematician specialized in differential calculus. Break down the differential equation dy/dx = 3x² + 2x – 5 into its simplest parts, explain each step of the solution, and provide the final solution.

Writing a literary analysis

Simple prompt: > Write an analysis of the use of symbolism in 'The Great Gatsby'

'Method actor' prompt: > You are a literature professor specialized in the work of F. Scott Fitzgerald. Analyze the use of symbolism in 'The Great Gatsby,' focusing on the main symbols such as Dr. T. J. Eckleburg's eyes, the color green, and the valley of ashes. Explain how each symbol contributes to the overall theme of the novel.

⚠️ Making prompts too long can backfire. It can cause the 'lost in the middle' problem: when a prompt contains too much text (a very long instruction), the LLM may 'forget' part of what you asked and lose coherence in its response.

Specific improvements introduced by the 'Method Actor' model

In a recent study, researcher Colin Doyle describes how applying the 'method actor' technique introduces several key improvements in LLM performance. According to him, it achieves:

Greater accuracy in complex reasoning tasks: By breaking down tasks and providing richer context, LLMs can tackle complex problems more effectively.

Better handling of context: The method allows for more efficient management of the LLM's context window, resulting in more coherent and relevant responses.

Reduction of hallucinations: By focusing on 'performance' rather than the generation of thoughts, it reduces LLMs' tendency to produce false or irrelevant information.
Enhanced flexibility: The approach allows adapting prompts, and even the system architecture, to the specific needs of each task.

■ In LLMs, when working with API calls, role labels are often used: system, assistant, and user. Sometimes, especially in N-shot prompting exercises for training or performance evaluation, a forced dialogue between the roles is set up beforehand.

Study findings

The researcher applied the 'method actor' technique to solve the Connections puzzles from The New York Times, a game that requires identifying groups of related words. Doyle shares the results of testing different techniques on several models (graph generated using Claude):

Puzzles solved with GPT-4
— Basic approach: 27%
— 'Chain of Thought' approach: 41%
— Initial 'method actor' approach: 78%
— Revised 'method actor' approach: 86%

Puzzles solved with OpenAI's o1-preview
— 'Method actor' approach: 79%
— Step-by-step approach: 100%
— 'Method actor' approach: 99%, with 87% solved perfectly

According to the study, these results show that the 'method actor' technique can improve LLMs' performance in complex tasks or mathematical challenges, and that it can even "surpass human experts in certain tasks."

Conclusion

The 'method actor' technique can improve our interaction with LLMs. Turning these models into actors capable of playing a role helps obtain more accurate and coherent results in a wide variety of complex tasks. This approach enhances the performance of existing LLMs and opens new possibilities for the development of more sophisticated and capable AI systems.

This research is a good reminder that sometimes the key to unlocking AI's potential lies not in the technology itself but in how we conceptualize and use it. Thinking of LLMs as actors on a set allows us to better leverage their capabilities, overcome their limitations, and open new possibilities for more intuitive and effective human-machine interaction.
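The contrast between the simple and 'method actor' prompts shown earlier (a role to inhabit, the task, and stage directions decomposing it into subtasks) can be captured in a small prompt builder. A minimal sketch, where the function name and formatting are illustrative rather than taken from Doyle's study:

```python
def method_actor_prompt(role: str, task: str, steps: list[str]) -> str:
    """Wrap a plain task in a 'method actor' script: role, task,
    and numbered stage directions that decompose the work."""
    directions = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Proceed step by step:\n{directions}"
    )

prompt = method_actor_prompt(
    "a mathematician specialized in differential calculus",
    "Solve the differential equation dy/dx = 3x² + 2x – 5",
    ["Break the equation down into its simplest parts",
     "Explain each step of the solution",
     "Provide the final solution"],
)
```

The same builder turns any plain request into a 'scripted performance', which is the core of the technique: context and decomposition travel with the task instead of being left implicit.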
■ You can download Colin Doyle's paper here: LLMs as Method Actors: A Model for Prompt Engineering and Architecture. — CONTRIBUTED BY Manuel de Luna —
December 10, 2024
Cyber Security
Cloud
Connectivity & IoT
AI & Data
Smart Data Path, the road to Industry 4.0
Industry 4.0 has ushered in a series of technological advancements that optimize and enhance industrial processes. One of the most innovative and transformative concepts in this context is Smart Data Path, a methodology that enables the efficient and secure collection, connection, processing, analysis, and utilization of data across the entire production chain.

This approach improves product quality through continuous monitoring and analysis, ensuring compliance with standards and swiftly addressing deviations. This leads to enhanced customer satisfaction by delivering superior, reliable products. Moreover, process optimization and efficient resource usage contribute to better production margins, reducing operational costs and minimizing waste.

With Smart Data Path, companies transition toward a more efficient industry focused on quality and customer satisfaction while boosting profitability.

Components of Smart Data Path

Smart Data Path is built on five core pillars that ensure a seamless and coherent flow of data from its origin to its final use:

Collect: Data is captured via sensors and IoT devices, enabling real-time insights into production and operational processes.

Connect: The collected data is integrated into a secure and stable network, facilitating communication among different systems and devices.

Process: Data is stored and prepared for analysis using Big Data platforms and data management systems.

Analyze: Advanced analytical tools, AI, and machine learning extract valuable insights, enabling informed decision-making.

Act: Actions are implemented based on analysis results to enhance efficiency and productivity, such as production adjustments, predictive maintenance, and resource optimization.

And a cross-cutting element:

Secure: Data and infrastructure are protected against cyber threats through IT and OT Cyber Security technologies, environment segmentation, and adherence to regulations.
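The five pillars above form a chained data flow from sensor to action. The following toy sketch illustrates that chaining; the sensor names, readings, and maintenance threshold are invented for illustration, not part of any real Smart Data Path deployment.

```python
def collect() -> list[dict]:
    """Collect: raw readings from IoT sensors (invented sample data)."""
    return [{"sensor": "temp-01", "value": 81.5},
            {"sensor": "temp-02", "value": 64.0}]

def connect(readings: list[dict]) -> list[dict]:
    """Connect: tag each reading with its network/transport metadata."""
    return [{**r, "network": "factory-lan"} for r in readings]

def process(readings: list[dict]) -> dict:
    """Process: normalize and store (here, simply index by sensor id)."""
    return {r["sensor"]: r for r in readings}

def analyze(store: dict, threshold: float = 75.0) -> list[str]:
    """Analyze: flag sensors whose reading exceeds an alert threshold."""
    return [s for s, r in store.items() if r["value"] > threshold]

def act(alerts: list[str]) -> list[str]:
    """Act: turn flagged sensors into maintenance actions."""
    return [f"schedule maintenance for {s}" for s in alerts]

# The pillars chained end to end, as the methodology describes:
actions = act(analyze(process(connect(collect()))))
```

In a real deployment each stage would be a platform (IoT gateways, networks, Big Data storage, ML analytics, MES/MOM actuation), with the cross-cutting Secure pillar protecting every hop.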
Cyber Security underpins all pillars of Smart Data Path, safeguarding data, infrastructure, and people. Benefits of Smart Data Path Implementing a Smart Data Path in industrial environments offers numerous benefits, including: Enhanced Operational Efficiency: Real-time visibility into processes helps identify and eliminate inefficiencies, optimize resource utilization, and reduce maintenance and downtime. Cost Reduction: Process optimization and demand prediction minimize waste and overproduction, lowering operational and production costs. Improved Product Quality: Continuous monitoring and analysis ensure compliance with quality standards, with deviations identified and corrected in real time. Security and Compliance: Robust data and infrastructure protection guarantees adherence to regulations and secure operations, preventing financial losses and reputational damage. Continuous Innovation: Advanced data analysis enables rapid adaptation to market changes and technological trends, fostering new products and services. Up to 75% of processes are expected to be replaced by new digital technologies in the next two years. Integration of Cyber Security and Cloud capabilities The integration of Cyber Security, Cloud, and Edge Computing capabilities with IoT connectivity and devices, Big Data, and data analytics is significantly transforming industrial processes and operations. This combination offers companies deeper operational insights, optimized decision-making, and improved asset protection and system resilience. Cloud and Edge Computing enable efficient storage, processing, and analysis of large data volumes, providing scalability, flexibility, and data sovereignty while enabling AI inference at the plant level. Advanced OT Cyber Security ensures the protection of data, infrastructure, and industrial systems against increasingly complex and sophisticated threats. 
Up to 40% of CIOs in the sector identify Cloud platforms as a priority investment area, while 66% plan to increase Cyber Security investment. It is the combination of real-time analytics, machine learning, and robust cyberthreat protection that creates a secure and dynamic environment that fosters innovation without compromising data security or integrity. ■ By incorporating a Mission Critical SOC (Security Operations Center) and GRC (Governance, Risk and Compliance) office, industrial environments can maintain resilience by monitoring and responding to security incidents and ensuring regulatory compliance. As a result, risks are reduced and critical infrastructures are protected, ensuring operational continuity and the security of data, systems, and people. Integrating capabilities enhances resilience to market challenges and ensures long-term competitive advantage. Key success cases demonstrate positive impact At Telefónica Tech we have implemented our solutions in a variety of sectors with significant and quantifiable results: SEAT: Comprehensive operations management Through collaboration with Seat, we connected their robots, machines, PLCs, servers, and operator input data to their MES and MOM systems. This enabled a real-time monitoring center with automated reporting. Results: A 37% improvement in ecological efficiency and significant reductions in production times. Example: Seat can monitor and optimize all aspects of manufacturing in real time, addressing issues before they disrupt production, reducing downtime, and improving product quality. Repsol: AI for logistics and sustainability We worked with Repsol to optimize warehouse product management. Efficient systems for product location, transportation, and delivery were established. Results: Leveraging AI, Big Data, and analytics, Repsol automated logistics tasks, reduced costs, increased stock rotation, and promoted sustainability. 
Example: Analyzing millions of variables improves real-time decision-making, optimizing logistics operations and reducing transportation and storage costs, cutting carbon footprint and waste generation. Horse: Predictive OT Intelligence to counter threats In partnership with Horse we implemented our OT Cyber Security platform, Aristeo, at their Valladolid factory. Aristeo uses real industrial hardware to capture cyber threats and provide predictive intelligence on attacks. Results: Horse can monitor its environment in real time and proactively address potential threats. Our Aristeo solution anticipates and responds to threats, both predicting and preventing them. Example: Aristeo handles a wide range of threats, including zero-day vulnerabilities, identifying trends, campaigns, and targeted attacks while helping to anticipate and understand attackers’ tactics, increasing resilience and ensuring operational continuity. Conclusion The Smart Data Path concept is a vital tool for Industry 4.0, offering endless possibilities for enhancing efficiency, reducing costs, and improving product quality. Our success cases illustrate its transformative potential in industrial processes, ensuring a more innovative and competitive future. By integrating next-generation digital technologies such as AI, IoT, Cloud computing, and real-time data analytics with robust Cyber Security capabilities, companies can optimize current operations while discovering new growth and development opportunities. This fosters greater resilience to market challenges and ensures a sustainable competitive advantage in the long term. ■ We at Telefónica Tech strive to drive the digitalization of the industry through secure solutions based on asset connectivity and data analytics, creating value at every step.
December 4, 2024
Cloud
AI & Data
Intelligent Workplace
Evolution of communications: how companies adapt to boost efficiency and productivity
Work modes have changed significantly. Companies are shifting toward hybrid and remote work environments to meet evolving demands and needs, requiring innovative solutions that facilitate effective employee collaboration anytime and anywhere. In this context, convergent technologies such as Unified Communications and Collaboration (UC&C) solutions are crucial. These systems unify voice, data, and video onto a single platform, enhancing efficiency and collaboration among employees. Moreover, their benefits are magnified by the integration of AI systems into these convergent environments. Forty percent of employees now work remotely or in hybrid setups, a trend more pronounced in large enterprises. ✅ These technologies optimize existing tech resources and foster more integrated, cohesive work environments, essential in today's hybrid, remote, or mobile work context. This shift enhances the experience for employees, suppliers, collaborators, and clients alike. Adoption of Convergent Technologies and observed benefits According to the IDC, Microsoft, and Telefónica report, 'Evolution of Communications: Toward Convergent Environments', 79% of surveyed companies (and 87% of large companies) are already using Unified Communications and Collaboration (UC&C) solutions. A substantial proportion of these companies (68%) identify Microsoft as the leading provider of collaboration and UC&C solutions. When it comes to voice-UC&C integration, most companies (67%) prefer implementation through telecommunications providers and technology integrators. Some of the key benefits identified by companies using UC&C solutions include: Improved operational efficiency (64%) in a dispersed and flexible work environment. This is achieved by optimizing technological and human resources and providing effective collaboration tools that enhance teamwork. 
Enhanced employee productivity (57%), enabling companies to be more competitive, respond quickly to market demands, innovate, and position themselves more effectively against competitors. Adopted technologies improve internal communication and facilitate access to business information and resources. Cost reduction (46%) by consolidating multiple communication systems into a single platform, lowering operating and maintenance expenses. Improving employee experience, establishing omnichannel interactions, and adhering to ESG standards are also essential for a comprehensive strategy that aligns talent with business objectives. ✅ Integrating various communication services facilitates smoother and more effective collaboration among employees, making it easier to share information and work as a team. Additionally, it helps maximize the use of existing resources, reducing the need for investments in multiple, separate systems and solutions. AI Integration in Convergent Environments The integration of Artificial Intelligence (AI) systems within these unified communications and collaboration environments further amplifies their benefits, transforming how companies manage interactions and optimize resources: Enhanced meeting experiences: AI enables real-time transcription and translation or provides automatic summaries, helping participants focus on key points and improving collaboration, especially for distributed teams. Generative AI for real-time content creation: This includes notes and summaries generated as digital assistance, as well as efficient information retrieval. AI-driven automation within UC&C speeds up or eliminates repetitive, tedious tasks, freeing employees to focus on more value-added activities. Conclusion The IDC report concludes that communication convergence is an inevitable and beneficial trend for businesses. Companies that adopt these technologies will be better positioned to tackle future challenges and seize digital transformation opportunities. 
Efficient communication solutions reduce unnecessary travel, lowering emissions. For UC&C solution implementations, companies should carefully assess their current communication infrastructures and partner with trusted, experienced technology providers recognized for offering advanced, integrated, and customized solutions.
November 18, 2024
Cyber Security
Tourism sector cyber-resilience: challenges and best practices
Cyber security in the tourism sector faces complex and growing challenges arising from the adoption of digital and connected technologies such as Cloud infrastructure, the Internet of Things (IoT), Artificial Intelligence (AI), and advanced connectivity. These technologies offer significant benefits for the sector, such as cost and environmental impact reduction, greater operational efficiency, implementation of advanced guest and visitor services, and data analysis to improve decision-making, among others. However, the adoption of these technologies, which are essential for digitalization, also increases the attack surface. These technologies become potential attack vectors that offer cybercriminals more options to exploit vulnerabilities if adequate security measures are not implemented. This increases the risks and challenges for tourism companies, which need to implement cyber security measures to protect their operations, data and digital assets, as well as their customers. Cyber resilience enables companies to continue operations even in the event of a cyberattack. Resilience is essential to minimize the impact on operations and customer experience. Main cyber threats in the tourism sector Some of the main cyber threats faced by companies in the travel industry, which can seriously compromise their operations and the security of their customers' data, include: New attackers, including threats from nation-states, criminal enterprises or anti-tourism movement activists, which represent a significant and evolving threat to the tourism sector. Impersonation, which has evolved significantly thanks to the information available on social networks and the advancement of AI, facilitating video and voice deepfake (vishing) attacks. 
This tactic allows attackers to create realistic imitations of people, from executives to public figures, which are used to deceive employees and security systems, gain unauthorized access, and compromise sensitive information. This makes it imperative to incorporate advanced identity verification and multifactor authentication solutions. Social engineering, which remains one of the most effective techniques for compromising a target's security. Tactics such as the aforementioned vishing or CEO fraud represent a complex challenge. Deepfakes have increased the effectiveness of social engineering attacks. Ransomware, which involves encrypting critical data and demanding a ransom for its release. In more advanced techniques, attackers first steal copies of data before encrypting it and subsequently threaten to publish or sell it on the deep web. Ransomware attacks can completely cripple operations. Phishing: This involves sending fraudulent emails to trick employees and obtain confidential information. These common social engineering techniques can lead to obtaining critical credentials and unauthorized access to sensitive systems. IoT exploits: The proliferation of IoT devices in the travel industry, from smart locks to building management systems, has expanded the attack surface. Cybercriminals can exploit vulnerabilities in these devices to gain access to wider networks and systems. DDoS attacks: Distributed denial-of-service (DDoS) attacks can overwhelm a company's servers and networks with 'junk' or malicious traffic, rendering systems inaccessible. This can result in significant service interruptions, affecting customer experience and causing financial losses. Malware (malicious software), which includes viruses, trojans and spyware, can infiltrate a tourism company's systems through emails, software downloads or removable media. Once inside, malware can steal confidential information, damage data or allow remote control of systems. 
Insider threats: employees with access to critical information and systems who, accidentally or intentionally, become a potential threat if they are dishonest, greedy or negligent, and who can cause data leaks, sabotage or provide unauthorized access to external actors. Cyber resilience in the tourism sector must adapt to the evolution of attackers and cyber threats. ⚠️ In the current cyber security arms race, criminals are using Artificial Intelligence (AI) and automation to improve the effectiveness of their attacks. ✅ Defenders, in turn, are leveraging these technologies to detect and respond to threats in real time. This arms race underscores the need to stay current with the latest cyber security knowledge, techniques and innovations, as well as to keep cyber-physical infrastructures and systems up to date with security patches and updates. The cyber-resilience formula for the tourism sector Cyber resilience in the tourism sector is essential to ensure operational continuity in the event of cyberattacks. Companies in the travel industry must adopt their own resilience formula to improve their ability to respond to and recover from these events. The resilience formula for the tourism sector includes: Identify: Use a scheme that combines cyber intelligence and exposure management. It is vital to know the specific threats facing the travel industry and to continuously assess vulnerabilities within the organization. Protect: Implement a Zero Trust scheme on all systems. This approach is based on the principle that no entity, user or system should be considered trustworthy when accessing company data or applications. Identity-based protection is essential to secure critical assets. Detect and respond: Improve incident detection and response capabilities. This includes implementing advanced monitoring, behavioral analysis and automated response tools to mitigate the effects of cyberattacks as quickly as possible. 
Manage: Apply risk management practices and maintain ongoing resilience. Organizations should establish clear procedures for incident management (where communication and notification are key), disaster recovery and business continuity, ensuring that all employees are trained and prepared to act in the event of an incident. Key practices to improve cyber-resilience Implementing key security practices, such as identity and access management, continuous monitoring and penetration testing (pentesting), is essential to maintaining a robust security posture. Key practices to improve cyber resilience include: Develop a sound strategy to align security objectives with corporate business objectives. A well-defined strategy directs security investments and efforts to the areas of highest priority and risk. Governance involves establishing policies, procedures and controls to ensure that security management is performed consistently and efficiently. It includes the definition of roles and responsibilities, compliance with regulations and standards, and continuous monitoring of security activities. Detailed analysis of security data and events enables the identification of trends, patterns, unusual behavior and potential vulnerabilities. It is essential to anticipate threats and proactively improve defenses. Optimizing service involves continually improving processes and practices to ensure they are as effective as possible. It can include upgrading technologies, restructuring processes, and ongoing staff training and awareness. Consulting services enable organizations to obtain expert perspectives and recommendations. Consultants help assess current security strategies, identify gaps and propose solutions tailored to the specific context and needs of each company. Prioritizing use cases is essential to focus resources and efforts on areas of greatest risk or impact. 
The identification and classification of these cases allows for a more efficient allocation of resources and improved responsiveness. Automation uses advanced technologies such as machine learning (ML) and AI to identify threats, classify incidents, respond to security events and mitigate risks. Security automation reduces the time and resources required to detect and respond to threats and minimizes the risk of human error. Automation and optimization are key elements to ensure efficient and effective operations. Optimizing the cyber-resilience formula for the tourism industry Protect your customers Ensure that customers enjoy a secure and pleasant travel, lodging and entertainment experience with adequate security measures commensurate with risk and trained personnel. Facilitate processes such as reservations, check-in and check-out or the use of mobile applications, ensuring secure, intuitive and easy-to-use platforms. Implement fraud protection, payment security and personal data protection measures to ensure customer confidence and satisfaction. Protect your operations Effectively manage logistics, order fulfillment, service provisioning and maintenance with security and efficiency protocols, with a focus on supply chain and third-party interdependencies. Keep online processes such as billing, payments and reservations secure and efficient using up-to-date technology and cyber security best practices. Protect your databases, digital assets and intellectual property information, and safeguard your company's online reputation through cyber security and crisis management strategies. Combat ransomware Design a robust cyber security strategy to prevent ransomware attacks, including regular backups, employee training and awareness, and periodic drills and tests. Have continuity and disaster recovery plans in place to quickly restore data and systems affected by an attack. 
Develop and implement postmortem procedures to prevent future threats and improve system resilience. Anticipate threats from nation-state actors Be aware that the tourism sector is vital to the economy and well-being of Western nations and therefore a potential target for threats originating from nation-state actors. Beyond common cybercriminals, identify threats from nation-states, groups or activists with political or economic objectives. Be prepared to face advanced persistent threat (APT) campaigns that can attack on multiple fronts at the local, regional or national level. ✅ The adoption of these practices will not only better protect organizations in the tourism sector but will also strengthen customer confidence in the safety and integrity of their services, and their overall satisfaction. Security challenges faced by tourism companies As we have seen, companies in the tourism sector face several complex and evolving challenges that include, among others: Adoption of cyber resilience strategies with a proactive and collaborative approach to risk that includes the implementation of zero trust strategies and the use of advanced technologies and Artificial Intelligence to protect both your operations and your customers' data. An increased attack surface due to the adoption of digital technologies such as Cloud platforms, IoT or 5G connectivity, which increase the chances of attackers exploiting vulnerabilities in connected devices and unsecured networks. Budget constraints while increasing costs due to social and corporate pressure, stricter regulations and standards or cyber insurance policies, coupled with a shortage of specialized talent. Cybercrime as-a-Service (CaaS), a model that facilitates the proliferation of attacks such as phishing, vishing or ransomware. It allows malicious actors to buy and sell tools and services to carry out cyberattacks even without having the necessary technical skills. 
Automation and use of Artificial Intelligence (AI), which attackers apply to create more complex threats, increase the effectiveness of their attacks, and enable tactics such as vishing with generative AI. Protection against ransomware attacks, especially those involving the “total ransom” (ransomALL) of digital assets discussed above. As we have seen, these attacks can cripple operations and significantly affect the customer experience and the organization's reputation. Protection of customers' personal data to ensure that attackers do not gain access to sensitive information, such as identity-related or banking information. Management and cost of incidents, which can be high, both financially and in reputational damage. Although the investment in cyber security may seem high, it prevents major losses. Continuous training of employees in security practices and regular updating of the technologies and techniques used are critical to maintaining an effective security posture. Conclusion Tourism companies must implement robust cyber security strategies and continuously adapt to new threats and emerging technologies. Constant training and technology updates are essential to maintain effective security and prevent financial loss and reputational damage. Companies in the travel industry must also adopt cyber-resilience strategies, including a zero-trust approach and the use of advanced technologies and Artificial Intelligence. Adopting these strategies is critical to protecting operations, digital assets and customer data, as well as to anticipating threats and minimizing the impact of potential attacks, thus ensuring business continuity and customer confidence. The industry, by learning from recent incidents and adopting proactive and adaptive strategies, can fortify itself against future threats and protect both its customers and its operations. 
Contributed by Juan Campillo, David Prieto and the Govertis team, part of Telefónica Tech.
October 21, 2024
Connectivity & IoT
Network Operations Center (NOC): Proactive 5G private network management for critical industries
Connectivity has become a foundational element in industrial sectors. At Telefónica Tech we specialize in advanced private 5G connectivity solutions, ensuring efficient, customized network management for critical sectors such as industry, healthcare, and energy through our Network Operations Center (NOC). In this context, Telefónica Tech’s NOC plays a crucial role in the continuous, secure operation of these private networks. It serves as a centralized facility where our teams monitor, manage, and maintain telecommunications and data networks. These centers are vital in ensuring uninterrupted network operation, detecting and responding to incidents and issues before they impact systems and users. ✅ With locations in Spain and Brazil, this key network infrastructure center provides constant support and monitoring, managing private 5G networks through a range of technologies and in multiple languages. Importance of Private Networks in critical sectors Private networks, or non-public networks (NPN), are dedicated telecommunications infrastructures built exclusively for a particular company or organization. Unlike public networks, these are specifically designed to interconnect all smart devices within a company, driving digital transformation. Based on 4G or 5G technologies, these networks offer secure, customizable, and low-latency connections, making them indispensable for various sectors to ensure safe, continuous, and efficient operations across numerous applications, including: Industry: Private networks enable robust, secure interconnections between all components of industrial processes and IoT devices, as well as OT systems for monitoring, controlling, and automating production. Network disruptions can lead to production halts, significant economic losses, and safety issues. 
Healthcare: Private networks are essential for the secure and rapid transmission of medical data, the efficient operation of monitoring systems, and telemedicine or telesurgery applications. A network interruption can delay critical decisions, affecting patient care quality and potentially putting lives at risk. Energy: These networks ensure the supervision and control of critical production and distribution infrastructure, guaranteeing a safe, continuous energy supply. Interruptions could cause energy distribution failures, outages, equipment damage, and significant financial losses. This could have especially serious repercussions for hospitals, factories, and other critical facilities relying on a steady power supply. ✅ To ensure network stability and security, Telefónica Tech’s NOC also adopts a proactive approach to identify and resolve potential issues before they escalate into critical incidents, ensuring the constant operability of these networks. Capabilities of Telefónica Tech’s NOC 24/7 service throughout the year, with response times under 30 minutes depending on the priority level of each incident. We provide multilingual support in Spanish, English, and Portuguese, with expertise in network technologies from various manufacturers. Our key capabilities include incident management and diagnostics, network performance monitoring and initial diagnostics, service restoration management, and preventive maintenance. We adapt our models and SLAs (Service Level Agreements) to meet specific client needs, leveraging Telefónica’s global tools and processes. Security and recovery Security is a top priority at Telefónica Tech’s NOC, which employs core node hardening, encrypted communications, and a privileged access manager. Additionally, it proactively detects threats and has robust business continuity and disaster recovery plans in place. 
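Response-time commitments like those above are typically encoded as per-priority SLA targets against which every incident is measured. A simplified sketch of how such a check might work; the priority levels and all timings below are hypothetical illustrations, not Telefónica Tech's actual SLAs:

```python
# Map incident priority to a hypothetical response-time target and check
# whether a given first response met it. All values are illustrative.
from datetime import datetime, timedelta

# Hypothetical SLA targets per priority (minutes to first response).
SLA_MINUTES = {"critical": 30, "high": 60, "medium": 240, "low": 1440}

def response_deadline(opened_at: datetime, priority: str) -> datetime:
    return opened_at + timedelta(minutes=SLA_MINUTES[priority])

def met_sla(opened_at: datetime, responded_at: datetime, priority: str) -> bool:
    return responded_at <= response_deadline(opened_at, priority)

opened = datetime(2024, 10, 21, 9, 0)
print(met_sla(opened, datetime(2024, 10, 21, 9, 25), "critical"))  # True
print(met_sla(opened, datetime(2024, 10, 21, 9, 45), "critical"))  # False
```

A real NOC would feed such checks from its ticketing system and report compliance per client, per period.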
Quality management and compliance Our NOC ensures SLA compliance and designs, implements, and monitors the operational model. Postmortem reports and KPI analysis are conducted to improve services. Our NOC team undergoes a rigorous selection and training process, with access to global provider training and an internal academy for courses and training sessions. Conclusion Telefónica Tech’s NOC stands out for its focus on security, quality, and efficiency. This underscores its importance in critical sectors such as industry, healthcare, and energy. Its ability to proactively identify and address issues and respond swiftly to incidents ensures the continuity of industrial processes and the secure, efficient transmission of critical data. ✅ With a global service model and multilingual support, at Telefónica Tech’s NOC we adapt to the specific needs of our clients, ensuring a high level of security and compliance.
September 25, 2024
Cyber Security
How to prevent DDoS attacks and safeguard your business from cyber threats
A distributed denial-of-service (DDoS) attack consists of sending a large amount of unsolicited traffic against a network, a server or a website. Its purpose is to saturate the capacity of the attacked system and slow it down, block it or cause it to crash, preventing legitimate users from accessing it. These attacks can affect the availability, performance and security of digital services, causing financial losses, reputational damage and legal problems. For example, an online store website that is the victim of a denial-of-service attack during the sales period will lose sales and money, and its reputation will be damaged. DDoS attacks are a growing threat to businesses of all sizes and industries. It is important for businesses to have a plan in place to protect against these attacks. The motivations for these attacks can be political, economic or ideological, and they have been used to interfere with elections, sabotage business activities, compete unfairly or protest against governments or institutions. DDoS attacks can also be part of a cyberwarfare strategy to weaken or destabilize the enemy. ◾ While a DoS (denial-of-service) attack comes from a single source, a DDoS (distributed denial-of-service) attack involves multiple coordinated sources, making it more difficult to detect and mitigate. Consequences of a Distributed Denial-of-Service Attack (DDoS) The consequences of a distributed denial-of-service (DDoS) attack can be severe and varied. The most common consequences include: Service interruption: The main objective of a DDoS attack is to interrupt the service of a network, server or website, making it inaccessible to legitimate users. This can have a negative impact on business activity and on service delivery. Loss of revenue: If a website or server is down due to a DDoS attack, it can result in a loss of revenue for the company. 
This can have a significant impact, especially for businesses that rely on their online presence to generate revenue, such as an ecommerce store. Reputational damage: A DDoS attack can have a negative impact on a company's reputation. Customers may lose confidence in the company if its website or server is down for an extended period of time, which can result in decreased sales and customer loyalty. Recovery costs: After a DDoS attack the affected company will need to invest significant time and resources in recovering the website or server. This may include hiring IT security experts, implementing additional security measures and repairing any damage caused by the attack. ⚠️ In 2016, a well-known case of DDoS attack occurred when a botnet called Mirai, composed of hundreds of thousands of IoT (Internet of Things) devices, launched a massive attack against a company that provided DNS services to widely used websites. The attack affected millions of users worldwide who were unable to access these services for several hours. The attack also had geopolitical repercussions, as it was suspected to be related to cyberespionage and electoral interference. How to execute a DDoS attack A distributed denial-of-service (DDoS) attack relies on the use of networks of multiple infected or compromised devices, called bots, which act as emitters of malicious traffic to flood a server or website. The attacker infects a large number of devices with malicious software (malware). These devices, such as computers, routers, cell phones, cameras or any other device connected to the Internet, become bots. These bots are remotely controlled by the attacker, known as the botmaster, through a central command-and-control server from which the instructions to carry out the attack are sent. When they receive the order, the bots simultaneously send many requests against their target. The DDoS attack can be carried out in various ways, such as HTTP requests or UDP or ICMP messages, among others. 
The requests sent by the bots are often larger than usual and contain junk information. The attack floods the target with a large amount of traffic that it cannot process or respond to at once. This prevents it from functioning and can bring the system down, making it inaccessible to legitimate users. Cybercriminals also use tactics such as spoofing and amplification to make their attacks more effective. Spoofing involves modifying the source IP address of malicious traffic so that the target cannot identify or block the real senders. Amplification consists of using vulnerable services that respond with more data than they receive, thus multiplying the traffic reaching the target. ⚠️ Some examples of services that can be used for attack amplification are DNS (domain name system), NTP (network time protocol), SNMP (simple network management protocol) or SSDP (simple service discovery protocol). This 'digital echo' allows attackers to generate a much higher volume of traffic than bots could generate on their own. DDoS attacks are very difficult to detect and mitigate, as they disguise themselves among normal traffic and use different techniques to amplify their impact. How to protect yourself from DDoS attacks As we have seen, distributed denial-of-service attacks have the potential to cause damage that threatens business continuity. A DDoS protection service will provide quick detection of an attack and mitigate the attack by clearing the traffic and allowing only lawful traffic to pass through to the destination, minimizing the consequences of the attack and ensuring the continuity of the customer's business. The best way to protect businesses from this type of attack is to have a specialized DDoS protection service, such as Telefónica Tech's DDoS Protection. ✅ Thanks to the collaboration of Telefónica as ISP, the solution is implemented directly on the data network. 
It monitors and filters malicious traffic at the carrier level, ensuring continuity and quality of service. In this way, traffic arrives clean at the client, without tunnels or additional redirection configurations.

Telefónica Tech DDoS Protection

Telefónica Tech has designed DDoS Protection as a comprehensive and robust solution that protects companies from any type of DDoS attack, with features such as:

Combined cloud and on-premises protection, adapting to the specific needs of each business and ensuring the continuity and quality of digital services against all types of DDoS attacks, both volumetric and application-layer.

Separation of malicious and legitimate traffic. Unlike other solutions on the market that discard all traffic (legitimate and malicious) during an attack, our DDoS Protection solution scrubs the traffic, allowing legitimate traffic through.

AI and advanced algorithms for fast and accurate detection of attacks.

Service mode. The DDoS Protection solution is provided as a service, with no investment required from the customer.

A team of experts who offer support, advice, and help to configure and manage the service 24/7, plus a web portal where attack reports and alerts can be consulted in real time.

Non-intrusive deployment. Activated on the customer's dedicated links, it requires no modifications to the customer's equipment.

A more effective response from the operator. Volumetric attacks can saturate any element of the customer's network, so an optimal solution must be integrated into the operator's network. The collaboration of Telefónica as ISP makes the provision of this service more effective.

The backing of Telefónica Tech's capabilities, technology, and experience.
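As a rough illustration of the "scrub the traffic, let the legitimate traffic through" idea, the sketch below drops sources whose request rate in a time window exceeds a threshold while forwarding the rest. This is a deliberately minimal model under assumed names and thresholds; a carrier-grade scrubbing platform uses far richer signals than a raw per-IP count.

```python
from collections import Counter

def scrub(requests: list, max_per_source: int) -> list:
    """Keep requests from sources below the rate threshold; drop the rest.

    `requests` is a list of source IPs, one entry per request seen in a
    time window. Any source exceeding `max_per_source` requests is treated
    as malicious and all of its traffic is discarded.
    """
    per_source = Counter(requests)
    return [ip for ip in requests if per_source[ip] <= max_per_source]

# Three legitimate clients plus one flooding bot (illustrative addresses).
window = ["10.0.0.1", "10.0.0.2", "10.0.0.3"] + ["203.0.113.9"] * 1000
clean = scrub(window, max_per_source=5)
print(clean)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3']
```

The point of the sketch is the asymmetry: the filter removes the flood without touching the legitimate requests, which is exactly what distinguishes traffic scrubbing from simply blackholing everything destined for the victim.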
Whichever specialized DDoS protection service is chosen, it must meet essential requirements such as:

Broad, flexible coverage that adapts to the specific needs of the business and protects infrastructure, applications, and data against any type of DDoS attack, both volumetric and application-layer.

Fast, accurate detection to identify and mitigate attacks as soon as possible, minimizing the impact on the business and the customer experience.

An effective, coordinated response that integrates different solutions and technologies to provide a multi-channel, multi-layered defense that blocks attacks at the network, the perimeter, and the cloud.

An expert team that provides advice, monitoring, and continuous support for any incident.

Cyber Security is key to protecting any business and making it resilient against cyber threats, contributing to its success.

Image: Kjpargeter/ Freepik.
September 19, 2024
Cloud
Connectivity & IoT
Edge Computing and 5G: a symbiosis driving innovation
The symbiosis between Edge Computing and 5G connectivity hinges on two key concepts: proximity and speed. By bringing computing closer to the network edge, Edge Computing reduces the distance data travels for processing and transmission. Coupled with the high-capacity 5G mobile network, this increases the efficiency and performance of wireless communications. These advantages enable efficient and disruptive solutions across various sectors and applications, including:

Smart Agriculture: This uses sensors, drones, and cameras to monitor crop conditions, weather, and soil, optimizing the use of resources such as water, fertilizers, and pesticides. ✅ Edge Computing and 5G facilitate real-time data analysis and automated decision-making to improve productivity and sustainability in agriculture.

Remote Education: This allows access to high-quality educational content from anywhere on any device, thanks to high-definition video and audio transmission, virtual or augmented reality, and real-time interaction between teachers and students. ✅ Edge Computing and 5G enhance the learning experience by reducing latency and data loss and by providing a more immersive and personalized environment.

Digital Health: This enables remote delivery of healthcare services, such as teleconsultations, telemonitoring, teleassistance, or telesurgery. ✅ Edge Computing and 5G ensure the security, reliability, and speed of communications between healthcare professionals and patients. They also ensure access to information relevant for diagnosis and treatment, such as medical images, clinical records, or medical histories.

Industry 4.0: This is characterized by the integration of digital technologies, such as IoT (Internet of Things), Artificial Intelligence, Big Data, and robotics, into production processes.
✅ Edge Computing and 5G connect and manage industrial devices and machines, facilitating remote control, automation, optimization, and prediction of operations, and preventing failures and breakdowns.

Advantages of Edge Computing

Reduced bandwidth usage and latency: By avoiding excessive data transmission to the cloud and back, Edge Computing minimizes bandwidth use and latency.

Improved operational efficiency: It enhances system speed and recovery capacity.

Lower costs: It reduces data storage, transmission, and local server costs.

Security and privacy compliance: Keeping sensitive data within a specific geographical territory, rather than sending it to remote data centers in other countries or regions, helps meet security and privacy requirements.

5G networks provide up to 100 times the speed of 4G networks, offer greater security, and consume up to 90% less energy per unit of data transmitted.

Relationship between Edge Computing and 5G

Edge Computing and 5G are symbiotic: although they are different technologies, they are complementary by nature and combine their individual strengths into a more powerful solution that enables new ways to connect devices. 5G connectivity increases data transfer speeds, while Edge Computing shortens the round trip between the device and the cloud or data center, reducing unnecessary traffic. Together, these technologies improve network performance and create new opportunities, enhancing the digital experience.

Edge Computing and 5G combine their individual strengths to form a powerful solution for the digital transformation of numerous sectors.

Use cases for Edge Computing and 5G

Some examples of projects leveraging the Edge Computing and 5G synergy include:

Precision telesurgery: This allows remote surgical operations with the help of robots controlled by specialist doctors, thanks to the low latency and high reliability of the connection.
Connected cars: These can communicate with other vehicles, road infrastructure, and mobility services, thanks to the network's high speed and density.

Drones: Drones can perform tasks such as surveillance, transportation, rescue, or agriculture, thanks to on-device autonomy and the ability to process data at the edge.

Computer Vision and Edge AI: This allows real-time analysis of images and videos to extract useful information, thanks to computing power and storage at the edge.

Conclusion

Edge Computing and 5G are two technologies that complement and enhance each other, creating a landscape of innovation and digital transformation across various sectors. Edge Computing extends cloud capabilities beyond data centers, bringing computing power closer to the data source. 5G offers next-generation connectivity at unprecedented speed and density, facilitating real-time data exchange. The combination of these technologies opens new possibilities for products, services, and experiences, and for business competitiveness.

Edge Computing and 5G are undoubtedly key to the development of Industry 4.0, characterized by the integration of digital technologies into production processes. These technologies allow numerous industrial devices and machines to be connected and managed, facilitating remote control, automation, optimization, and prediction of operations, and preventing failures and breakdowns.
August 19, 2024
Cloud
What is sovereign cloud (and why it’s important for your business)
You've likely heard of Cloud Computing, or "the cloud": a technology that lets you access computing resources on demand in a simple, secure, and flexible manner, without the need to invest in your own infrastructure or specialized personnel. However, you might not know that there are different types of cloud depending on the level of control, security, and customization required. One type that is gaining relevance is the sovereign cloud.

In 2024, 40% of large companies will move 10% of their workloads to sovereign cloud providers.

What is sovereign cloud?

The sovereign cloud was created to ensure that cloud services comply with the regulations and jurisdiction of a specific territory, region, or country. This is crucial for public administrations and companies that handle sensitive or critical data. The sovereign cloud aims to secure the sovereignty, independence, control, and security of data and of the infrastructure that processes and stores it, avoiding dependence on foreign providers or governments.

The sovereign cloud provides public institutions and strategic companies with greater sovereignty, security, and trust over their data and applications. It respects the regulations and national sovereignty of each country, without being subject to foreign legislation such as the U.S. CLOUD Act, which allows U.S. authorities to access data stored in the cloud.

What type of cloud is sovereign cloud?

Among the various types of cloud based on the level of control, security, and customization required (public, private, hybrid, multicloud), the private cloud is the enabler of the sovereign cloud: the private cloud model uses dedicated infrastructure for a single client, unlike the public cloud, where computing resources are shared among multiple users.
The private cloud offers companies complete control over the design, management, and security of their cloud environment, with advantages such as: Data Protection: Utilizing a dedicated infrastructure that meets data protection standards and ensures data remains within national borders. Resource Optimization: Flexibility to adapt resources to business needs. Customization: Tailoring the private cloud to each client's specific requirements (availability, scalability, security, support) and the desired service type. What are the benefits of sovereign cloud for your business? Public administrations and companies may require a sovereign cloud for several reasons, including: Compliance: Ensuring their data is stored and processed according to local laws, free from foreign governments' jurisdiction or control. Data security: Protecting the confidentiality, integrity, and availability of information, especially when dealing with sensitive or critical data related to national security, defense, health, or the economy. Independence: Ensuring the independence and mobility of their data, avoiding reliance on cloud providers that may impose restrictive or abusive conditions or may suffer service interruptions or cyberattacks. Regulatory compliance: Meeting regulations and standards for data protection, privacy, transparency, sovereignty, and information security. Innovation and development: Promoting local innovation and technological development by supporting national or European cloud providers offering solutions tailored to the specific needs of each country or region. Conclusion Sovereign cloud is an option that offers institutions and companies handling sensitive or strategic data a higher degree of control, security, and confidence over their information, respecting each country's regulations and national sovereignty. 
In this context, the private cloud is the model that enables the sovereign cloud, offering a dedicated infrastructure that adapts to the specific needs of each client. The sovereign cloud is a key factor in the competitiveness, innovation, and technological development of organizations and regions.
August 12, 2024
Cloud
Why Edge Computing is key to strategic sectors
Can you imagine remotely operating on a patient with a robot, driving an autonomous car, or playing online with thousands of people without delay or interruption? These are some of the applications made possible by Edge Computing, a technology that brings the power of Cloud Computing (computing power, storage, environments for running AI applications...) to the edge of the network; in other words, close to where data is generated and consumed. This has several advantages:

Reducing latency, the time information takes to travel from its origin to its destination and back. With data closer, information exchange is faster and more efficient, allowing real-time responses.

Increasing data security and privacy, by preventing data from leaving a specific geographical area and being sent to remote data centers in other countries or regions, where it could be subject to different laws, regulations, or risks.

Optimizing bandwidth consumption, by processing data at the edge and reducing the amount sent to the cloud. This alleviates network congestion and improves performance.

Edge Computing, key to strategic sectors

Edge Computing can benefit many sectors that generate and use large volumes of data, especially those requiring high availability, immediate response, or data residency. Some examples include:

Healthcare, enabling telemedicine, telesurgery, imaging diagnostics, patient monitoring, and medical records management, among other applications that require fluid and secure communication between doctors, patients, and medical devices.

Transport and logistics, improving the mobility, safety, and efficiency of connected and autonomous vehicles, drones, navigation systems, traffic control, and fleet management, among other applications that depend on high-speed, low-latency connectivity.
Energy and utilities, optimizing the operation, maintenance, and safety of electrical grids, generation plants, substations, smart meters, and renewable energy sources, among other applications involving large numbers of sensors and IoT (Internet of Things) devices.

Industry, enhancing automation, robotics, computer vision, Edge AI, quality control, and process monitoring, along with other applications requiring high processing capacity and real-time data analysis.

Edge Computing, 5G, and the Sovereign Cloud

To harness the capabilities of Edge Computing, next-generation connectivity such as fiber optics or 5G is needed. These connections offer high bandwidth, high speed, and minimal latency, allowing data to be transmitted and received almost instantly. Edge Computing and 5G are complementary technologies that enhance each other, enabling new possibilities for interaction between people and machines, in both the physical and virtual worlds.

The sovereign cloud, for its part, refers to a cloud computing infrastructure that respects the data sovereignty of users and countries, ensuring data protection, control, and regulatory compliance. The sovereign cloud is based on collaboration between local and global providers, offering cloud services tailored to the needs of each market and sector. Edge Computing is a key piece in the development of the sovereign cloud, as it brings cloud services closer to the edge of the network, where data is generated and consumed, respecting data residency and security.

Conclusion

Edge Computing brings cloud computing to the edge of the network, where data is generated and consumed. This technology has multiple advantages, such as reducing latency, increasing security, and optimizing bandwidth. Edge Computing is critical for strategic sectors such as healthcare, transportation, energy, and industry, which require a quick and efficient response to their data needs.
It is also related to 5G connectivity and the sovereign cloud, two concepts that improve connectivity and data sovereignty. In short, Edge Computing is a technology that is transforming the way we connect to the internet and opening new opportunities for strategic sectors.
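The latency advantage can be made concrete with a toy round-trip model: total response time is roughly the network round-trip time plus server-side processing time, and moving computation from a distant cloud region to a nearby edge node mainly shrinks the first term. The figures below are illustrative assumptions, not benchmark results.

```python
def response_time_ms(rtt_ms: float, processing_ms: float) -> float:
    """Toy model: one network round trip plus server-side processing."""
    return rtt_ms + processing_ms

# Assumed figures: ~80 ms round trip to a distant cloud region over 4G,
# ~10 ms round trip to a nearby edge node over 5G, same 5 ms of processing.
cloud = response_time_ms(rtt_ms=80, processing_ms=5)
edge = response_time_ms(rtt_ms=10, processing_ms=5)
print(f"cloud: {cloud} ms, edge: {edge} ms")  # cloud: 85 ms, edge: 15 ms
```

Even in this simplified model, the edge deployment responds several times faster for the same workload, which is what makes use cases such as telesurgery or vehicle coordination feasible.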
July 29, 2024
Cyber Security
Out of Office: How to communicate your vacation while protecting your privacy and Cyber Security
Before shutting down your computer and heading out on vacation, there's one detail you shouldn't forget: setting up an "out of office" (OoO) message is a common practice to inform your clients and colleagues that you'll be away for a few days. However, when configuring an OoO message, it's crucial to strike a balance between providing useful information, preserving individual privacy, and maintaining company security.

Since the automatic response is typically sent without distinguishing the sender, whenever your email client allows it (as Outlook does), consider setting up two different messages: a more detailed one for trusted colleagues and clients, and a generic one for any other sender.

What to include (and not) in an OoO message

In a generic message it's important to limit personal information, since the same text will be returned in response to any incoming message, whether it comes from a trusted contact or from a malicious actor. As far as possible:

Avoid including unnecessary personal details such as exact dates, destination, specific activities, or the people you'll be with. Don't include a mobile number either; it's enough to say that you'll be away for a few days.

Refrain from revealing sensitive company information, such as "If you're writing regarding Project X, contact...".

If you mention an alternative contact, make sure it's someone who is aware of your absence and your ongoing tasks, and who is authorized and genuinely able to assist the sender while you're away. And, of course, confirm with that person before listing them as an alternative contact.

How to configure an effective and secure OoO message

When writing an out-of-office message for your corporate email account, keep the text concise. Since you don't know who will write during your absence or why, avoid informal or creative language. At least not too much.
Also remember that the person writing to you is probably not on vacation. A secure out-of-office message could be:

Hello, Thank you for your message. I am currently out of the office but will reply to your email upon my return. Best,

If you feel it's necessary to give the sender a reference to manage their expectations about a response, you can add a non-specific timeframe, such as: "...I will respond to your email in the second half of August."

Not providing an exact return date, besides being more secure, gives you a margin of time upon your return: you can review the emails received during your absence calmly and avoid the same messages arriving again from a specific day onwards. Upon your return, you can dedicate more time and attention to the messages received while you were away. This helps you avoid rushing when responding, and mistakes such as downloading an attachment or clicking on a suspicious link.

Photo by Ethan Robertson / Unsplash.
July 4, 2024
Telefónica Tech
The power of sustainable digitalization in the fight against climate change
Climate change is considered the greatest challenge of our time. Its effects range from desertification and droughts to floods and rising sea levels. Some of its most visible consequences are natural disasters, disruptions in food production, and impacts on energy markets. In this scenario, however, next-generation digital technologies offer solutions to combat climate change and mitigate its consequences: Artificial Intelligence, Cloud, Big Data, Internet of Things (IoT), and ultra-efficient 5G connectivity are some of the innovations enabling sustainable digital transformation. They can help us decarbonize the economy, optimize the use of renewable energy, and protect natural resources.

Sustainable digitization for green transformation

At Telefónica Tech, we are committed to developing digital solutions that protect, optimize, and reuse natural resources with minimal environmental impact. Almost two out of three solutions in our portfolio carry the Eco Smart seal, verified by AENOR. The Eco Smart seal identifies products and services designed to drive green digitization: reduced water and energy consumption, promotion of the circular economy, and lower greenhouse gas emissions. Furthermore, our solutions promote energy efficiency, sustainable agriculture, and efficient transportation.

Next-generation digital technologies can contribute to a 20% reduction in CO2 emissions by 2050. Source: Accenture & World Economic Forum.

The impact of next-generation technologies

The application of next-generation digital technologies has the potential to reduce global emissions by 15% to 35% in the coming years. These technologies enable improved management, control, and real-time action in various sectors, such as industry, energy, and public utilities.
For example, when our digital technologies are applied to:

Water supply: they can optimize distribution and reduce losses, which is particularly relevant in a context of water scarcity due to climate change.

Infrastructure: such as natural gas, to reduce leaks and greenhouse gas emissions; energy distribution, including renewable sources; or public lighting, among others.

The agricultural sector, which is highly exposed to the increasing impact of climate change: they enable smart and precision farming, optimizing the use of resources such as water and chemicals and thus reducing costs.

Workspaces, transforming the way we work: they reduce daily commutes, energy consumption, and the associated CO2 emissions. In addition, digitizing documents and reducing paper usage has a positive impact on the environment.

Towards a sustainable and resilient future in the face of climate change

Sustainable digitalization has emerged as a powerful tool to combat climate change and its consequences. Through the implementation of these solutions, we can:

Protect our natural resources, such as water.

Optimize operations, logistics routes, and industrial and production processes.

Reduce polluting and greenhouse gas emissions.

Generate opportunities for progress for all.

It is therefore essential to continue driving technological innovation and fostering collaboration between the public and private sectors. This will enable a successful transition towards a more sustainable and resilient future in the face of climate change.
June 5, 2024
Telefónica Tech
AI & Data
Smart Steps: understanding mobility
Smart Steps is a platform developed by Telefónica Tech for crowd mobility analysis. It integrates next-generation technologies, such as Artificial Intelligence and Big Data, to convert anonymized, aggregated data from millions of mobile lines into insights about the behavior of groups of people.

✅ These insights help both public administrations and companies to understand in depth the patterns and trends of groups of people and to make informed, data-driven decisions.

To protect the individual privacy of users, all data is anonymized at source. In addition, the data is always handled in large volumes to provide useful information, such as relevant mobility patterns in user group behavior, consumption preferences, leisure trends, and demographics. Smart Steps is a powerful tool capable of accurately analyzing crowd mobility and providing valuable information for decision-making in numerous industries.

Smart Steps applications and uses

Smart Steps is applied in several sectors, such as retail, tourism, advertising and marketing, transportation and logistics, natural resources, and energy, among others. In general, any company seeking to better understand its potential customers and make data-driven decisions can benefit from this platform in order to, for example:

Plan public transport, develop mobility programs, and optimize route planning and management.

Segment users or customers into specific groups according to their characteristics and behaviors. This helps companies personalize their messages and offers, which can increase the effectiveness of their marketing campaigns.

Optimize the use of natural resources, such as water, electricity, or gas distribution, and plan the distribution of utilities based on seasonal demand, for example.

Help companies identify the most strategic locations to open new stores or branches, based on movement and population-density data in different areas.
Better understand their customers' behavior, such as movement patterns, consumption preferences, and leisure activities, among others. This helps them make more informed decisions about marketing, advertising, and sales strategies.

Obtain reports and analysis that allow companies to make faster and more informed decisions, in addition to their potential usefulness in the event of crises, emergencies, or rapid changes in the market.

How Smart Steps works

As we said at the beginning of this post, Smart Steps is an advanced solution that uses next-generation digital technologies so that companies and administrations can obtain detailed insights about the mobility of groups of people based on mobile line data. How Smart Steps works can be summarized in the following steps:

It collects data from millions of mobile lines, including information on location, movement, and other relevant attributes.

The collected data is anonymized to protect the privacy of individuals. It is then categorized and aggregated to obtain an overview of mobility behaviors and patterns.

AI techniques and Big Data analytics are used to process the data and extract relevant insights. These insights provide information about people's mobility, such as trips made, modes of transport used, and socio-demographic profiles.

✅ The insights obtained can be delivered and visualized in a variety of ways, including graphical visualizations, APIs for data consumption by other applications, and customized analysis as needed.

Use case: Mobility insights at MWC

We used Smart Steps during the last edition of MWC to measure the impact of the event on mobility in Barcelona. Thanks to the Smart Steps platform, we anonymously analyzed the origin, destination, time spent, and movements of the congress attendees as a whole.
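The collect-anonymize-aggregate pipeline described above can be sketched in a few lines. This is a simplified illustration only: the zone names, the hashing scheme, and the minimum group size are assumptions for the example, not Smart Steps internals.

```python
import hashlib
from collections import Counter

MIN_GROUP_SIZE = 3  # suppress any aggregate smaller than this (k-anonymity style)

def anonymize(line_id: str) -> str:
    """Replace the mobile line identifier with an irreversible hash at source."""
    return hashlib.sha256(line_id.encode()).hexdigest()[:12]

def aggregate_by_zone(events: list) -> dict:
    """Count distinct anonymized lines per zone, dropping small groups.

    `events` is a list of (line_id, zone) pairs. Only zones with at least
    MIN_GROUP_SIZE distinct lines are reported, so no individual stands out.
    """
    seen = {(anonymize(line), zone) for line, zone in events}
    counts = Counter(zone for _, zone in seen)
    return {zone: n for zone, n in counts.items() if n >= MIN_GROUP_SIZE}

# Hypothetical events: four lines observed at the Fira, one in the Eixample.
events = [
    ("line-001", "Fira"), ("line-002", "Fira"), ("line-003", "Fira"),
    ("line-004", "Fira"), ("line-005", "Eixample"),  # below threshold: suppressed
]
print(aggregate_by_zone(events))  # {'Fira': 4}
```

Note how the single line seen in the Eixample is dropped entirely: reporting only on sufficiently large groups is what lets aggregated mobility insights coexist with individual privacy.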
As a result of these data, we were able to identify mobility and visitor trends, such as the busiest areas, the most common modes of transport, and length of stay, as well as opportunities for improving urban mobility management.

Trends detected during the Mobile World Congress 2024

The insights gained at MWC are useful for assessing visitor flow, understanding mobility patterns, segmenting visitors, analyzing accommodation preferences, and comparing the event with other similar events. They provide useful information for optimizing event planning and management, as well as for making strategic decisions related to the event and its impact on the city's resources and services.

Evolution of visitor flows. The most relevant conclusions, recorded from 19 to 25 February (the week prior to the MWC) and contextualized with data from the same week in 2023, include:

Increased attendance throughout the week at the fairgrounds, with a 25% increase compared to last year.

A 45% increase in the presence of international visitors compared to last year; more than 65% of overnight visitors were international.

A mobility pattern aligned with working hours, with the morning and afternoon stretches being the most frequent at the Fira destination.

Dreta de L'Eixample (home to tourist attractions such as Casa Batlló and La Pedrera, as well as the city's main commercial areas, Plaza Cataluña and Paseo de Gracia) was the most visited area when leaving the Fira, while Poble-sec and La Marina de Port were the neighborhoods most chosen for overnight stays before attending the fair.
Visitors arriving the weekend before the fair tended to stay mostly in the city center, while those arriving on the days of the fair itself opted for accommodation closer to the fairgrounds.

Top international countries of origin during the week before MWC: Germany, the United Kingdom, and China were the countries with the largest number of visitors at the Fira.

There was a 223% increase in trips to the Fira with respect to the previous Monday.

There was a notable 20% increase in the number of visitors compared to last year, with the second day being the busiest and the last day drawing 40% fewer participants.

Travel to the fair during the event took place mainly in the morning, coming from the center, Poble Sec, and other nearby neighborhoods. The busiest times at the fair were in the morning and in the afternoon, with the most frequent trips taking place in the morning and after lunch.

Usefulness of the insights provided by Smart Steps

These and other detailed insights make it possible to analyze different aspects of the event and its visitors, and help to better understand their behavior and needs during the event, including:

Identify the flow of visitors throughout the MWC week, comparing data with previous years. ✅ This is useful for evaluating the success of the event and planning future editions or urban services, such as public transportation.

Information on the origin of visitors, including the percentage of international visitors. ✅ This helps to understand the profile of attendees and adapt marketing, visitor assistance, and promotion strategies.

Knowledge of visitor movements within the city, identifying the most visited neighborhoods and peak times. ✅ This is useful for planning transportation routes, managing logistics, and optimizing the attendee experience.

Identify which neighborhoods visitors most often choose to stay in during the event.
✅ This provides relevant information for the city and the hotel industry, and makes it possible to adapt the accommodation offer to the needs of attendees.

Compare the flow and behavior patterns of MWC visitors with other similar events or previous editions. ✅ This provides a broader perspective and helps to assess the relevance and impact of MWC compared to other events in the same sector.

Transportation used by Fira visitors during MWC 2024.

Conclusion

Smart Steps offers significant benefits in many sectors, such as retail, tourism, advertising and marketing, transportation and logistics, and natural resources. It makes it possible to optimize the use of resources, plan public transport, segment customers, identify strategic locations, and understand customer behavior. Its use during MWC allowed us to better understand the impact of the event on mobility in Barcelona, providing relevant insights for planning and managing the event, for understanding the preferences, trends, and needs of attendees, and for making strategic decisions related to future editions and other events in the sector.

Photo: Martin Adams / Unsplash.
May 23, 2024
Connectivity & IoT
AI & Data
Advanced Factories 2024: trends and use cases for Industry 5.0
After three very busy days we are saying goodbye to Advanced Factories 2024, the professional meeting on innovation, automation, industrial cyber security, robotics, and digitalized industry that has been held this week at Fira de Barcelona. In this edition, Advanced Factories explored advances in AI, Cloud, and Edge Computing, and revealed new trends and use cases in the industry. Our stand at Advanced Factories 2024. At the Telefónica Tech stand we showed the digital technologies that allow us to automate business and industrial processes to produce more and better: more efficiently, productively, securely, and sustainably. Maria Eugenia Borbore (Telefónica) and Darío Cesena (Geprom, part of Telefónica Tech), during the institutional visit to the Telefónica Tech stand. We have taken part in Advanced Factories 2024 at the... Round table: "Digitalization as a key element to ensure productivity and sustainability in the automotive sector" Darío Cesena (Geprom, part of Telefónica Tech) and Diego Galar (European Federation of National Maintenance Societies). Darío Cesena, CEO of Geprom, part of Telefónica Tech, speaking to industry experts on process automation, the use of AI in manufacturing, and data analysis, explained that the availability of data in real time guarantees immediate decision-making in day-to-day production processes. Analyzing real use cases with customers, Darío believes that "particularly in the automotive industry, due to the complexity and competitiveness of the sector, data becomes a fundamental asset to maximize efficiency and ensure the sustainability of automotive factories." Presentation: Migasa use case Javier Arias (Geprom, part of Telefónica Tech) and José Rodríguez Macías (Migasa). 
This was an instructive and pleasant talk between Javier Arias, commercial director at Geprom, part of Telefónica Tech, and José Rodríguez Macías, assistant to the Technical Management at Migasa, in which they explained the process of integrating the MOM (Manufacturing Operations Management) system into the production process of the agri-food group, which produces thousands of tons of olive and vegetable oils every year. They addressed the challenges faced and the lessons learned during the implementation of the system by our colleagues from Geprom, now also "part of the Migasa team", in the words of Rodríguez Macías. Round table: "Industrial organizations oriented to the use of data. Data as a key element in the company's strategy" Marta Serrano (Acciona Energía), Julia Díaz (Repsol) and Juan Manuel Ferrer (Telefónica Tech). Juan Manuel Ferrer, Business Development Manager Industry 4.0 at Telefónica Tech, moderated this round table in which industrial experiences were shared. Thanks to data, these companies have improved their processes and achieved significant savings in money, energy, time, and resources. ✅ Implementing data collection and analysis systems enables significant optimization, allowing companies not only to reduce costs and errors but also to improve their sustainability and operational efficiency. Round table: "Cyber security in smart plants: how to protect our industrial infrastructures" Sergi Gil López, Industrial Cyber Security & GRC consultant at Govertis, part of Telefónica Tech, emphasized the need to protect industrial systems against cyber threats. Sergi Gil López (Govertis, part of Telefónica Tech). We usually find that the IT part is protected, but the OT part is not. It is often forgotten. Sergi explained that "in industrial plants we find uncontrolled remote systems. 
Facility maintenance can also be an entry point for threats." This requires integrated and robust Industrial Cyber Security solutions that include both hardware and software protection, as well as the effective segregation of critical networks and systems, to minimize the chances of unauthorized access and limit the potential damage in the event of a security breach. Functional demo of our technologies for Industry 5.0 Together with our colleagues from Geprom, part of Telefónica Tech, we built the functional demo of a digitalized Industry 5.0 factory that was installed at our stand at Advanced Factories 2024. A KUKA industrial robot is key to the automated manufacturing process in the Smart Industry 5.0 demo. Through this demo we displayed our capabilities and technologies applied to the industrial and productive sectors and explained to visitors how the integration of technologies such as Cyber Security, Cloud, IoT, Artificial Intelligence, Big Data, and Blockchain is transforming the industry, making it more competitive and resilient. The manufacturing process, as an example, produced personalized key rings with Telefónica's logo and the identity of our Centenary, in addition to a QR code that allows visitors to verify the authenticity of the product and track its manufacturing process through Blockchain. For this purpose, several technologies are used, such as industrial and collaborative robots, artificial vision cameras, management and control systems, and a digital control tower based on the Legato platform. Manufacturing process of a key ring commemorating Telefónica's Centenary in the Smart Industry 5.0 demo, supervised through a digital control tower based on the Legato platform. 
The digital control tower is a platform that sits on top of all plant operating systems, designed to integrate the information collected in the different layers (PLC systems, SCADA, MES systems, business systems, external information, etc.) and present it in an agile and visual way for different profiles within an organization. Its main objective is to bring all the information of a factory, regardless of its origin, into a single interface and allow different users to make strategic decisions thanks to the transparency of processes and of the complex interactions between different parameters. ✅ Our Aristeo Cyber Security solution ensured complete protection of industrial systems and critical data in the demo, detecting and neutralizing cyber threats in real time to strengthen the security and confidentiality of processes, systems, and information in the production sector. In this expanded installation, the demo makes use of 5G technologies for connectivity, and of MES (Manufacturing Execution System), WMS (Warehouse Management System), and AMR (Autonomous Mobile Robots) for the management of storage and logistics. Maria Eugenia Borbore (Telefónica) and Darío Cesena (Geprom, part of Telefónica Tech) explaining the Smart Industry 5.0 demo at Advanced Factories 2024. The demo was structured in five different use cases, highlighting applications such as warehouse management, the print area, OT cyber security in robots, visualization in a digital twin, and verification with Blockchain. Each of these use cases demonstrates the tangible benefits of implementing advanced technologies in terms of efficiency, security, productivity, and sustainability. ✅ Telefónica Tech's presence at Advanced Factories 2024 reaffirms our commitment to innovation and technological excellence, showing how we integrate technological solutions to digitalize the industry and prepare it for present and future challenges. 
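As an illustration of the control-tower idea described above, here is a minimal Python sketch that normalizes readings from different plant layers (PLC, MES) into a single unified view. All names and record formats here are assumptions for illustration, not the actual API of the Legato platform.

```python
from dataclasses import dataclass

# Hypothetical normalized record: every plant layer is mapped to this shape.
@dataclass
class Reading:
    source: str   # originating layer: "PLC", "SCADA", "MES", ...
    tag: str      # signal or KPI name
    value: float

def normalize_plc(raw: dict) -> Reading:
    # PLC payloads (assumed format) expose register names and values.
    return Reading(source="PLC", tag=raw["register"], value=raw["val"])

def normalize_mes(raw: dict) -> Reading:
    # MES payloads (assumed format) expose order-level KPIs.
    return Reading(source="MES", tag=raw["kpi"], value=raw["amount"])

def control_tower_view(readings: list[Reading]) -> dict:
    """Merge readings from all layers into one tag -> (source, value) snapshot."""
    return {r.tag: (r.source, r.value) for r in readings}

if __name__ == "__main__":
    snapshot = control_tower_view([
        normalize_plc({"register": "line1.temperature", "val": 71.5}),
        normalize_mes({"kpi": "order42.units_produced", "amount": 380.0}),
    ])
    for tag, (source, value) in snapshot.items():
        print(f"{tag} = {value} (from {source})")
```

The design point is the one the article describes: each layer keeps its own native format, and only a thin normalization step is needed to present everything in a single interface.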
More information and use cases → _________ Connectivity & IoT AI & Data The digitalization of the industry: a practical guide October 4, 2023 — Photos and information provided by Mirian Martínez, Cristina Sebastián, and Clara Álvarez —
April 11, 2024
Telefónica Tech
GSMA's manifesto for Europe's digital transformation
The GSMA report New Rules for a New Era. Connecting Europe to 2030: A Mobile Industry Manifesto for Europe sets out the need for new rules and policies in the European Union to drive connectivity and the mobile industry in the digital age. The purpose: to achieve a connected, competitive, digital, and green Europe by 2030. It also highlights the importance of connectivity for economic growth, the green transition, digital inclusion, and Europe's strategic autonomy. To achieve this, it proposes specific measures in different areas, including: The need to establish new rules and policies in the European Union to boost connectivity and the mobile industry in the digital era. The importance of connectivity for economic growth, the green transition, digital inclusion, and Europe's strategic autonomy. Specific recommendations in areas such as spectrum policy, market regulation, the green transition, and digital empowerment. The role of Open Radio Access Networks (Open RAN) in building more energy-efficient, secure, and customer-centric 5G networks. The focus on technology innovation and the delivery of pan-European digital networks and services of the future. The contribution of the mobile sector to reducing emissions and driving the adoption of clean technologies. The key role of connectivity in bridging the digital divide and ensuring social cohesion by ensuring that "all citizens and businesses have access to digital services such as eGovernment." The need to establish a new regulatory framework that enables long-term investment, recognizes the need for economies of scale, and facilitates the rapid deployment of world-leading connectivity. The GSMA report underlines the importance of the mobile sector in driving the green transition, digital inclusion, and the strengthening of Europe's strategic self-reliance and resilience. 
Here are two examples of our contribution to Industry 4.0 Gestamp The GSMA report includes, among other examples, how Telefónica has implemented a 5G-connected factory use case at Gestamp's industrial plant in Barcelona to test advanced Industry 4.0 technologies. In this case, the industrial machinery is connected with the new 5G technology to process the data it generates in real time thanks to Edge Computing. ✅ This example of industrial digitization and digital twin technologies is one of the key ingredients for Europe's industrial competitiveness. Check out the case study → Redeia The report also covers our collaboration with Redeia. We are working with this company on a DLR (Dynamic Line Rating) system for a more efficient operation of the power grid. This solution optimizes the use of the electricity transmission grid by calculating the transmission capacity of the lines in real time. DLR provides real-time power transmission capacity information for each line using IoT sensors, local weather stations, an advanced cloud platform, and AI and machine learning systems, allowing the system to operate dynamically using the capacity of the transmission grid without compromising its security. ✅ This approach facilitates the integration of renewable energies and contributes to a further reduction of CO2 emissions associated with electricity supply. Learn more → Open Gateway to foster innovation and power pan-European digital networks Open Gateway enables mobile operators to open their networks to developers and cloud service providers through standardized API interfaces. This initiative expands the possibilities and scope of any digital innovation, allowing business users to seamlessly connect to their nearest Edge Cloud and benefit from networks tailored to their latency and quality needs. 
Open Gateway thus establishes itself as a fundamental tool to foster innovation and empower pan-European digital networks and future services, contributing significantly to Europe's positioning in the global technology race. ✅ Download the full report here →
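The Dynamic Line Rating idea from the Redeia example can be illustrated with a simplified sketch: the current a line can safely carry rises as wind cooling increases and ambient temperature falls, so a dynamic rating often exceeds the static worst-case one. The formula and coefficients below are illustrative assumptions only; real DLR systems use detailed conductor thermal models (such as IEEE Std 738) fed by sensor and weather data.

```python
def dynamic_line_rating(static_rating_a: float,
                        ambient_temp_c: float,
                        wind_speed_ms: float) -> float:
    """Toy dynamic rating: scale the static (worst-case) ampacity up when
    conditions are favorable. Coefficients are illustrative, not engineering values."""
    # Assumed worst-case design conditions: 40 degC ambient, negligible wind.
    temp_gain = max(0.0, 40.0 - ambient_temp_c) * 0.01   # +1% per degC below 40
    wind_gain = min(wind_speed_ms, 10.0) * 0.03          # +3% per m/s, capped
    return static_rating_a * (1.0 + temp_gain + wind_gain)

if __name__ == "__main__":
    # A cool, windy day lets the same conductor carry noticeably more current.
    print(round(dynamic_line_rating(600.0, 15.0, 5.0), 1))  # -> 840.0 A
```

Even this toy version shows why DLR helps integrate renewables: windy conditions both raise wind generation and increase the grid capacity available to carry it.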
April 4, 2024
Telefónica Tech
Connectivity & IoT
World Water Day: Smart digitalization boosts water resilience
📅 World Water Day raises awareness about the need to protect and preserve water resources, ensure equitable access to water, and promote its efficient use. Furthermore, it encourages public-private collaboration to achieve responsible water management and combat climate change. __________ Water is one of our most valuable resources and faces unprecedented challenges due to population growth, urbanization, and climate change. In this context, efficient management of this resource has become a global priority, and digitalization is an essential ally in enabling sustainable and efficient water management. Water resilience is the ability of water systems to adapt to variability and challenges, ensuring a sustainable supply for people, economies, and the environment. Sensorization and Smart Metering At Telefónica Tech, we use Smart Metering technology to optimize water management through our Smart Water solution. Utilizing intelligent IoT sensors, NB-IoT connectivity, and Artificial Intelligence and Big Data systems, we provide managing companies with real-time data on consumption, distribution, and water quality. This constant flow of information allows managing companies to detect leaks with millimeter precision, anticipate future demands, and optimize distribution, resulting in a significant reduction in losses and a more efficient use of water. Telefónica Tech's Smart Water has a positive impact in social, economic, and environmental terms: it can reduce leaks by up to 40% and lower the operation and maintenance costs of the network by about 20%. Our Smart Water solution The convergence of IoT sensorization, AI models, and Big Data enables the collection, analysis, and application of data to improve efficiency and decision-making. 
They allow the analysis of large volumes of data to identify consumption patterns and forecast future demands. ✅ This helps optimize water distribution, ensuring adequate supply and avoiding waste. By using IoT sensors and real-time data analysis, AI can identify anomalies in water flow and pressure. ✅ This helps detect and prevent leaks in distribution networks. It enables monitoring and analyzing data related to water quality, reservoir levels, and weather conditions. ✅ This aids in making informed decisions about water resource management, such as optimizing irrigation cycles in agriculture. They facilitate the construction of models and simulations that help better understand hydrological systems and predict their behavior in different scenarios. ✅ This is particularly useful for infrastructure planning and managing extreme events like floods or droughts. Every drop of water is data that allows us to analyze trends, make predictions, and aid in efficient management. Accurate, real-time information allows for a quicker response to critical situations like leaks or contamination, minimizing damages and associated costs. Moreover, it provides consumers with real data about their consumption and personalized recommendations to encourage more conscious and sustainable water use. ✅ Success story: Thanks to NB-IoT connectivity, our project with Emasesa utilizes smart meters to send consumption readings remotely and automatically. Through advanced analysis with our Big Data and AI solutions, the meters transmit consumption data hourly, allowing Emasesa to optimize management, detect anomalies, and enhance service. This contributes to the digital transformation, sustainability, and efficiency of Emasesa's water service. See success story → Digitalization for sustainable water management Implementing these smart solutions in water management also promotes resilience to climate change by enabling cities and communities to adapt faster to changing weather patterns. 
This proactive approach to water management is essential to strengthen future water security, highlighting the importance of digitalization as a fundamental pillar of sustainable water management. ✅ Smart Metering technology and sensorization are not limited to the water cycle. Their application extends to various sectors, from agriculture, where they facilitate irrigation optimization, to industry, where they improve processes and reduce resource consumption. Technology enables us to manage water more effectively, sustainably, and resiliently. With digitalization, we can address the water issues we face today, ensuring access to water for everyone and preserving this vital resource for future generations, in line with the sustainable development goals and adapting to current and future challenges. The application of digital technologies for smart water management contributes to achieving United Nations Sustainable Development Goal (SDG) 6, which calls for efficient use of water resources and universal access to drinking water by 2030. Image by Freepik.
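The leak-detection idea described above, flagging anomalies in flow and pressure readings, can be sketched in a few lines. This is a minimal illustration using a rolling z-score; the window size, threshold, and data are assumptions for the example, not details of the Smart Water product.

```python
from statistics import mean, stdev

def flag_anomalies(flow: list[float], window: int = 5, z_thresh: float = 3.0) -> list[int]:
    """Return indices where a reading deviates strongly from the recent window."""
    anomalies = []
    for i in range(window, len(flow)):
        recent = flow[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # A reading far outside the recent norm (e.g. a sustained night-time
        # flow spike) is a candidate leak and triggers an alert.
        if sigma > 0 and abs(flow[i] - mu) / sigma > z_thresh:
            anomalies.append(i)
    return anomalies

if __name__ == "__main__":
    # Steady consumption, then a sustained jump consistent with a leak.
    readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 25.0, 24.8]
    print(flag_anomalies(readings))  # -> [7]
```

Production systems would combine many signals (pressure, district metering, weather) and use trained models rather than a fixed threshold, but the principle is the same: compare each new reading against the expected pattern and alert on deviations.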
March 22, 2024
Telefónica Tech
Digital and green transition: sustainable technology solutions for energy efficiency
In a society that is increasingly aware of the importance of caring for and protecting the planet, digital transformation is key to improving efficiency in all processes. Within this context, new-generation digital technologies make it possible to digitalize natural resources and improve energy management to optimize their use. They also make it possible to reduce the environmental impact of technology, decarbonize the economy, maximize the use of renewable energies, and protect natural resources. Digital solutions have the potential to reduce emissions by up to 35%. Digital technologies such as IoT (Internet of Things), Artificial Intelligence, Cloud, Big Data, or ultra-efficient 5G connectivity are some of the enabling innovations for a green transition and for sustainable digitization. These technologies enable improved management, control, and real-time performance in various sectors, such as industry, energy, and utilities. When applied to the administration of resources such as water, for example, our digital technologies can improve its management, digitize its entire cycle (from collection to its return to the environment), and reduce losses in distribution networks. Energy efficiency is also fundamental to the digital and green transformation. Sensorization and monitoring of energy allow consumption to be reduced and electricity production to be optimized, also facilitating the incorporation of electricity from renewable sources, which are highly dependent on changing weather conditions. ✅ As part of our commitment to reduce the environmental footprint of our activity, using digital solutions that protect, optimize, and reuse natural resources, more than 50% of our products and services carry the Eco Smart seal verified by Aenor. ♻️ This seal identifies those Telefónica Tech products designed to promote green digitalization by reducing water and energy consumption, fostering the circular economy, and reducing emissions. 
Green and digital transition in key sectors: Water and Energy As part of MWC 2024, we recently participated in an expert session focused on the green and digital transition that brought to the table the importance of technology in the move towards energy efficiency and sustainability. The session included, among others, Belén Díaz-Guerra, from Redeia, and Guillermo Pascual, from Veolia, who shared use cases of these technologies and discussed the challenges and advances in the electricity sector and in water management. Belén, from Redeia, highlighted the integration of renewable energies and the digitalization of the electric grid with IoT and AI, among other technologies, as key elements to improve efficiency and sustainability, as well as to increase the integration capacity of renewable energies by improving the observability, prediction, and control of the grid. This is one of the great challenges surrounding renewable energies: the variability of their production, which depends on weather conditions. However, digitalization and the application of technologies such as IoT and AI help to know production accurately and optimize its management. The green and digital transition represents an opportunity to achieve a more efficient and sustainable society. Guillermo, from Veolia, spoke about the impact of climate change on the water sector. He also highlighted the importance of technology to manage water stress and adapt to new conditions, and mentioned the importance of remote sensing and digitalization in water management to improve efficiency, interaction with citizens, and adaptation to climate change. The conversation also addressed the importance of managing the impact associated with digitization, such as energy consumption and waste generation, and the need to attract digital talent and initiate a "cultural transformation" that will allow us to move towards a more sustainable and digital future. 
Photo by Appolinary Kalashnikova on Unsplash.
March 5, 2024
Telefónica Tech
Telefónica Tech at the MWC 2024, the edition of our centenary
As we approach the end of MWC 2024, this post summarizes the main activities we participated in. During the last four days, we showed our vision of the digital future and how our solutions benefit industry, business, the environment, and society: from 5G and IoT connectivity, which enables the 'low altitude economy' and the Internet of Drones, to Cloud and Edge Computing, through Cyber Security, Artificial Intelligence, and Blockchain. We have covered the most relevant and current topics of the technology sector, their economic, individual, and social impact —as well as their role in the green and digital transitions— with expert presentations, use cases, and real, innovative demonstrations. Do not miss this summary of our presence at the most important event in the telecommunications industry. Talks, presentations, and round tables Beyond Automation: responsible AI in the company, uncovered Telefónica was a pioneer in the use of AI and Big Data technologies, currently enhanced by the take-off of generative AI, to provide better service to customers, generate new revenue, and optimize operations and investments. In the Beyond Automation: responsible AI in the company, uncovered session we addressed the main AI projects we have developed in our company. We explained their impact and showed the AI proposition we are offering both public and private customers, to whom we provide end-to-end capabilities. Elena Gil (Telefónica Tech) in 'Beyond automation: responsible AI in business, unveiled', at MWC 2024. Elena Gil, our AI & Big Data Director, explained: "We support them throughout this journey, which has to be closely linked to their business, and we help them define their best AI strategy: to prioritize use cases, transform the capabilities of their employees, and establish a governance model where data is used in an appropriate way..." 
Telefónica Tech's value proposition in AI is comprehensive, elastic, and agnostic, depending on the needs of each customer. We can also measure its impact and benefits with KPIs. In Elena's opinion, "AI is one of the most promising technologies in the business environment, particularly when combined with other technologies such as Cloud, Cyber Security or IoT. More and more companies are using and integrating it to automate and optimize their processes, and to make better business decisions," she explained. Gabriela Ramos, Assistant Director-General for Social and Human Sciences at UNESCO, also took part in the presentation, emphasizing the importance of harnessing the potential of these technologies to improve business results, always guaranteeing responsibility and human rights through initiatives around responsible governance and the adoption of ethical principles and criteria in the use of AI. "Companies like Telefónica are global players and we need them to help us advocate for ethical AI". Gabriela Ramos, UNESCO. Urban air mobility: secure, autonomous, and cleaner The session Urban air mobility: secure, autonomous, and cleaner featured Alfredo Serret, global IoT director, and Victoria Jing Xiang, EHang COO for Europe and Latam. Alfredo Serret (Telefónica Tech) and Victoria Jing Xiang (EHang) at MWC 2024. EHang is the first company to achieve type certification for this kind of pilotless air vehicle intended for passenger transport. "Connectivity is the invisible thread that unites and makes airspace smart", Alfredo Serret, Telefónica Tech. Victoria and Alfredo shared their experience and knowledge in this field. They talked about the operation of drones intended for the transport of passengers and goods, as well as for health services, maintenance, and protection against fires or emergencies, among other uses. 
They also talked about the challenges, including technological ones, in the use of drones and the critical role that connectivity plays in the field of urban air mobility. Low-altitude economies enabled by the Internet of Drones: autonomous aerial vehicles, connectivity, and digital technologies. "The low altitude economy, below 300 meters, is considered one of the strategic emerging industries in China." Victoria Jing Xiang, EHang. A few weeks ago, we performed the first certified flight of this kind in Europe, at Lleida airport, using our 5G connectivity from Telefónica Tech and drones from EHang. Smart Tourism and Destination Digitalization The Smart Tourism and Destination Digitalization session addressed the great challenges of digitalizing tourism, and tourist destinations in particular, combining the knowledge and experience of two major tourist destinations, Benidorm and the Murcia region (Spain), with the institutional presence of the UNWTO (World Tourism Organization), who were very kind to join our colleagues Diego Lopez, process manager at Telefónica Spain, and Carmen Alonso, Business Development Manager. The speakers shared their first-hand experience in the transformation into Smart Tourism Destinations and the benefits offered by the digitalization of tourist destinations, such as optimizing resources, measuring and sizing tourist carrying capacity, improving efficiency, and enhancing the visitor experience. "Technology is the best vehicle to generate efficiency in promotion, management, and administration in destinations, also for emerging and rural ones." Natalia Bayona, executive director of UN Tourism. They also addressed challenges such as privacy and the ethical use of data, Cyber Security, and the development of common, harmonized standards for the exchange and management of data and technological tools, so that they are accessible to any destination regardless of its size and budget. 
Carmen Alonso (Telefónica Tech) in the session 'Smart Tourism and Destination Digitalization' at MWC 2024. "As we say: technology, processes, and people. The team behind us is the 'core' of Telefónica Tech." Carmen Alonso. Participating in the GSMA's 5G IoT Summit Juan José González Menaya, Head of IoT connectivity and devices, took part in the 5G IoT Summit. Juan José addressed the challenges facing IoT connectivity, such as roaming or security, and the solutions we have developed together with our customers and IoT partners to solve them in the session Creating and Enhancing value for IoT Customers - IoT SIM, AI for Data. Juan José González Menaya (Telefónica Tech, center) at GSMA 5G IoT Summit. He also talked about LPWA technology, which enables use cases that were previously not possible with conventional mobile technology, such as water meters with simple and inexpensive devices that have a 10-year lifespan. With AI applied to Kite Platform, we are able to prevent fraud when IoT SIM cards are used, avoiding security breaches and invoice surcharges. Regarding the application of AI models in the IoT domain, Juan José mentioned how Telefónica Tech uses this technology in different aspects, including: Automatic classification of customer tickets, reducing average resolution time by 19%. Software development, where it is used to generate code and increase efficiency by 20-25%. Our Analytics Plus service from Kite Platform, which automatically groups customer SIM cards according to their behavior and generates alarms in case of anomalous behavior, allowing us to detect security and fraud issues. Our participation in other presentations ◾ Elena Gil also participated in the session "How to optimize the delivery of Big Data & AI projects" at the IBM stand, highlighting the importance of data governance, which she described as "an essential best practice to ensure the success and sustainability of AI projects". 
She further emphasized how "collaborative efforts between IBM and Telefónica Tech have helped overcome barriers during the implementation of AI projects". Elena was also present at the round table "Beyond ChatGPT: The Future of Generative AI in Business", about generative AI and its practical applications in different industries, given its transformative capacity for companies of any size and sector, and added: "the proven value will come when generative AI is integrated into companies by extending human performance". Telefónica Tech is more than ready to support its customers in this exciting journey towards generative AI. Additionally, Elena moderated the roundtable Humanizing Artificial Intelligence: Ethics as a Challenge, focusing on the transformative potential of AI for businesses and society, as well as the challenges posed by its regulation and responsible use. We also participated in the Connected Industries Sports and Entertainment Summit: Creating a world-class fan & guest experience, which explored how technology enables a more immersive and enhanced experience for fans and guests, "while monitoring athlete performance and improving stadium efficiency, providing unique experiences for fans," Elena explained. Carlos Martínez (Telefónica Tech) at the round table 'Diversity 4 Tech: The Use of AI to Drive Inclusivity in the Workplace', MWC 2024. ◾ Carlos Martinez, our Global Director, AI & Data Solutions & Services at Telefónica Tech, participated in the round table "Diversity 4 Tech: The Use of AI to Drive Inclusivity in the Workplace". He focused on how AI technologies can be leveraged to address bias, improve diversity, and create a more inclusive work environment. "Using AI and data analysis, the labor market can be optimized, facilitating a better match between the supply of skills and the demands for jobs." Carlos Martinez, Telefónica Tech. 
◾ Carlos Carazo, our Global Director of Product, Technology & IoT Operations at Telefónica Tech, spoke at the Manufacturing Summit, sharing his extensive experience on how to approach the deployment of a smart factory. Carlos Carazo (Telefónica Tech) at the 'Manufacturing Summit', MWC 2024. Carlos underlined that "optimal connectivity is essential in industry, particularly in the automotive industry. This is the reason why we wanted to introduce our capabilities today at the Manufacturing Summit: those related to Private Networks and Edge Computing, as well as our specific industrial solutions such as AGV, MOM systems, computer vision, digital twin..." In addition, Carlos participated, together with Jorgina Diaz, Director of Business Development at Alisys, in an interview at the AWS stand where they talked about how to enable innovation for Industry customers through 5G-based Edge services. ◾ Darío Cesena, our CEO at Geprom, part of Telefónica Tech, participated in the session Industry 4.0: Digital Transformation Enabled by Private Networks, where the impact of private 5G networks in sectors such as industry and healthcare was discussed. The session showcased how these networks drive smart factories, enhance patient care, and ensure resilience and security in critical infrastructures. Darío Cesena (Geprom, part of Telefónica Tech), MWC 2024. We demonstrated the transformative potential of 5G, including its contribution to operational efficiency and data security, as well as the opportunities it offers in various industries to optimize processes, reduce costs, and maintain competitiveness. 📢 Darío also participated in the joint press conference with our customer Fini Golosinas, where he highlighted the "strategic importance of this industrial digitalization project for both parties." 
◾ Jose Antonio Cascallana, our Industrial Cyber Security Product Manager, participated in the presentation Aristeo: the industrial God of OT bees for Cyber Security, where he pointed out that the lack of cyber security in industrial environments and critical infrastructures "causes damage to people and the environment". 'Aristeo: the industrial God of OT bees for Cyber Security' session with Horse, Basquevolt, and Telefónica Empresas, at MWC 2024. He also explained the purpose and operation of Aristeo, our predictive cyber intelligence platform for threats in industrial environments, critical infrastructures, and defense, and talked with our customers Horse and Basquevolt about the challenges around OT Cyber Security. Both companies are highly automation-oriented in the automotive sector; they also share demanding quality requirements specific to their industry and the use of large amounts of data in all their operations. "Horse is also data-driven when it comes to cyber security. Data is essential, it gives us the ability to make decisions and to know what to do at all times." María Luisa Redondo, Horse. ◾ Daniel Ribaya, our Cloud Products & Services Director, participated in the presentation Towards a new generation of Edge and Cloud. Daniel spoke about the opportunity offered by Edge Computing to transform the Cloud proposition and turn it into a sovereign, distributed, hyperconnected, and sustainable Cloud. Daniel Ribaya (Telefónica Tech) at the Cloud & Edge Computing session, MWC 2024. As Daniel explained, "these are the four attributes for the Cloud of the future that Telefónica Tech materializes with its Cloud Platform service, built on its own platform. It is continuously provisioned in regional data centers and physically closer to the customer, right where the customer needs it. It is also interoperable and federated". 
"Edge Computing combines the best of traditional infrastructure, close and with low latency, and the benefits of the Cloud paradigm: elasticity, low entry barriers, and fostering innovation." Daniel Ribaya, Telefónica Tech. ◾ Alfredo Serret also moderated the session Green and digital transition in key sectors: Water and Energy, on the role of new-generation digital technologies in improving energy efficiency and the transition to a more sustainable future. Belén Díaz-Guerra (Redeia) and Guillermo Pascual (Veolia). In this session Belén Díaz-Guerra and Guillermo Pascual presented real-world use cases implemented by Redeia, in the energy sector, and Veolia, in water management, respectively. "Green and digital transitions are twin transitions: digital solutions can reduce emissions by up to 35%." Ana Belén Pociña, Telefónica. 'Smart Industry: Leading the Change' demo Using the Smart Industry: Leading the Change setup, we proved our technological capability to create a fully automated, digitalized, and secure industrial process. The functional demo automated and protected the printing process in a keychain factory, stamping each piece with Telefónica's centenary logo and a QR code. Our demo 'Smart Industry: Leading the Change' at MWC 2024. It was a demo that offered real, comprehensive solutions for the industrial sector to digitalize and become more competitive, innovative, and sustainable. The demo consisted of two robotic arms and a computer vision camera that controlled the quality of the parts. The camera, using Artificial Intelligence, decided and commanded actions to the collaborative robotic arm. Each element recorded all relevant events in the process through Telefónica Tech's Blockchain service, to offer visitors detailed traceability of the different phases of the keychain's manufacture. The QR code includes a digital certificate issued by TrustOS, our managed Blockchain platform, which certifies the originality and authenticity of the keychain. 
It also makes it possible to show visitors the detailed traceability of the industrial process. ✅ The demo also featured MES and WMS industrial scanning systems to monitor available material and printed parts, as well as to manage inventory and control production in real time. All this data provides valuable information to ensure a more efficient and productive production cycle. In addition to advanced technologies such as 5G, Edge Computing, OT Cyber Security, Big Data, and computer vision, the demo incorporated Aristeo, Telefónica Tech's industrial Cyber Security solution, which protects connected elements from possible cyberattacks. "We proved our capabilities and knowledge to cover any automation and security need in the industrial field thanks to this demo." María Jesús Almazor, COO Telefónica Tech. Aristeo is a Cyber Security solution used to protect connected elements from potential cyberattacks by deploying a network of decoys on the Internet with real industrial hardware that reproduces industrial environments. Its purpose is to confuse attackers and extract all the information about incoming threats, including their strengths, weaknesses, and behavior in each part of the system they pass through. This tool uses a predictive approach, identifying trends and unknown vulnerabilities that are then used to improve and update protection for customers. In the demo, visitors could see how Aristeo protected the automated installation from potential threats for which the security system was not initially prepared. We also participated in the Telefónica Open Gateway demo, which offers an immersive experience to enjoy sporting events such as, in this case, the ACB Basketball Copa del Rey on Movistar Plus+. The demo, powered by Edge Computing technology from our partner AWS and offered by Telefónica Tech, broadcast a match with low latency and immersive cameras distributed in the José Mª Martín Carpena Sports Pavilion, in Málaga. 
Edge Computing technology was also used in the Telefónica Edge Haptic Arena, an immersive haptic sensory technology experience that immersed players in a virtual world where they competed in a highly dynamic head-to-head match. New Transformation Notebooks We have published new Transformation Notebooks to explain why and how digital transformation is key to the progress and sustainability of our society. They are available for free download in PDF format here: Telefónica Open Gateway Progress Sustainability Innovation ◾ You can also access and download the 2023, 2022, and 2021 editions of the Transformation Notebooks. A year of Open Gateway and 100 years of Telefónica During MWC 2023, José María Álvarez-Pallete, Chairman of Telefónica and GSMA, introduced the GSMA Open Gateway. In his keynote speech this year, Álvarez-Pallete highlighted the importance of this initiative and the collaboration between operators, technology companies, and developers to co-create the digital future. On the first anniversary of Open Gateway, the first APIs have been launched in six countries and at least one API has been commercialized in 40 networks, addressing needs such as security and anomaly detection. Within this context, Álvarez-Pallete emphasized the need for a global alliance and fair governance to achieve a sustainable, win-win value chain. He also stressed the importance of adopting adequate and modern regulation for the telco sector, and highlighted the sector's commitment to the responsible use of networks and innovation in sustainable investment models. Álvarez-Pallete also pointed out that the celebration of Telefónica's centenary, which will take place next April, is possible thanks to the role played by telcos at the service of society and their ability to adapt to change. — COLLABORATORS: Irene Jiménez, Ane Urain, Paula Gutiérrez, Ade Justribó, Marta Elvira, Mirian Martínez, Nacho Palou —
February 29, 2024
Cyber Security
SASE is the end-to-end solution for maximizing business security
Working in Cloud environments offers companies many possibilities to innovate and grow, yet it also poses new network and security challenges, with increased complexity around workstations in a context of constantly changing locations, connections, needs, and devices. Companies need security solutions that meet these conditions and provide fast, secure, and uninterrupted access to their digital resources and assets for secure productivity. Designed to secure the network at all levels, Security Service Edge (SSE) is the solution to protect enterprise resources in this changing context, safeguarding users and data from malware and other cyber threats with a comprehensive strategy that combines flexibility, simplification, and consolidation of functions, resulting in a better security posture for the digital business. ✅ Telefónica Tech, together with Palo Alto Networks, is aware of these challenges, and we offer our experience and specialization through the Telefónica Tech Security Edge solution → What are SSE and SASE? SSE is a security service under the Zero Trust paradigm designed to protect companies and employees against threats and data loss in hybrid and telework environments. It offers traditional security controls such as NGFW (Next-Generation Firewall), SWG (Secure Web Gateway), and Secure Remote Access consolidated on a single platform, centrally managed to optimize the operation and management of security. SASE integrates SSE and SD-WAN into a single service that is operated and managed from the service provider's Edge. The SASE service model makes effective protection possible, making it easier for companies to secure their employees, data, and digital assets regardless of their location, devices, or connection. 
It is therefore a transversal and comprehensive form of protection that does not rely on the perimeter of traditional enterprise networks but extends it by incorporating hybrid environments and remote access based on user identity and device context under a Zero Trust principle. The key idea of SASE is to make access to the network and assets of digital enterprises dynamic, flexible, and secure through a converged architecture and comprehensive, centralized management based on a Zero Trust principle. This is crucial, as the capacity of traditional IT infrastructures is overwhelmed by the demand for data and applications hosted in Cloud environments. SSE responds to this situation effectively by moving the point of security from physical locations to a cloud infrastructure, enabling secure and efficient access, in a hybrid architecture, to the resources needed for work, no matter where the user and the information are located. SASE also helps improve network performance by making applications and services accessible and able to run efficiently, even in distributed environments. ✅ SASE, and in particular SSE, is therefore a comprehensive solution that meets the security needs of enterprises, facilitating a smooth transition to the cloud and digitalization. Adapting security strategies is important The adoption of Cloud environments and new work models, whether hybrid or remote, leads to increased exposure to cyber threats that evolve and change rapidly. This makes it necessary for digital companies to adopt an effective security strategy that can adapt to the new challenges of digitalization in an agile and rapid manner, ensuring business continuity. The SSE solution from Telefónica Tech and Palo Alto Networks applies a Zero Trust strategy for access to company resources. 
In this Zero Trust strategy, all identities (person or device), inside or outside the corporate network, are authenticated and authorized before accessing corporate systems or applications, thus minimizing the risk of unauthorized access, lateral movement, or loss of information. This strategy not only protects against external threats, but also limits the internal impact of a potential security incident by verifying that only verified and authorized users can access corporate resources. ✅ SSE enables a consistent, centralized security policy that is applied uniformly across the organization, decreasing complexity and improving security management. ✅ SASE provides a converged architecture that integrates security and cloud connectivity services, enabling comprehensive protection and secure access to enterprise resources. Key benefits of SASE It optimizes the user experience by improving application performance and network traffic efficiency through reduced latency and enhanced security. It provides a comprehensive and unified view of network activity, making it easier to quickly identify and respond to threats. It consolidates various security and network functions into a single service, including ZTNA (Zero Trust Network Access), DLP (Data Loss Prevention), malware protection, CASB (Cloud Access Security Broker), and SWG (Secure Web Gateway), resulting in significant time and resource savings. It easily adapts to changing business needs without requiring major infrastructure modifications, offering a flexible and scalable solution. It strengthens defense against a wide range of threats and protects sensitive data, reducing the risk of cyberattacks. It facilitates compliance with regulations such as GDPR and PCI DSS, ensuring data protection and privacy. It generates valuable information about user behavior and network performance, optimizing decision making. 
It saves costs by reducing the need for additional security hardware and software. It enables optimal centralized connectivity and security for applications located in multiple locations through a distributed architecture. It supports a wide range of devices, ensuring robust security and efficient traffic routing decisions for different applications and users. Challenges in SASE implementation There are many benefits to adopting SASE, but companies must carefully consider the challenges to ensure a successful transition. Its implementation is a significant change that requires detailed planning and strategy, including: SASE can involve traditionally separate IT teams working more closely together, so it is essential to effectively integrate security and networking teams to take advantage of SASE. Considering that SASE is a new and developing framework, there may be differences in vendor offerings, so it is essential to choose a vendor that provides a complete and integrated solution according to the specific needs of the company and offers specialized support. Deployment requires reviewing and revising security and access policies, so companies must ensure that policies are dynamic and flexible, aligned with the Zero Trust strategy and able to respond to evolving cyber threats. It may require changes or upgrades to the existing IT infrastructure, requiring careful assessment of the current infrastructure and planning for any modifications needed to support the SASE solution. ⚠️ Incorporating SASE into the enterprise IT infrastructure is an important step toward more efficient and adaptive security and network management. Addressing these considerations and challenges enables enterprises to ensure a proper implementation that takes full advantage of the benefits of SASE. 
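The Zero Trust access decision at the heart of SSE and SASE can be illustrated with a minimal sketch. Everything here (function names, data structures, and the toy policy) is purely illustrative and is not part of any Telefónica Tech or Palo Alto Networks product:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # identity verified, e.g. via MFA
    device_compliant: bool     # device posture meets corporate policy
    resource: str              # resource the identity wants to reach

def zero_trust_allows(req: AccessRequest, policy: dict, user: str) -> bool:
    """Never trust, always verify: every request is evaluated on identity
    and device context, regardless of network location."""
    return (req.user_authenticated
            and req.device_compliant
            and req.resource in policy.get(user, set()))

# Toy policy: each user is authorized only for specific resources.
policy = {"alice": {"crm", "mail"}}
print(zero_trust_allows(AccessRequest(True, True, "crm"), policy, "alice"))   # True
print(zero_trust_allows(AccessRequest(True, False, "crm"), policy, "alice"))  # False: non-compliant device
```

Real SSE platforms evaluate many more signals (location, time, risk score), but the principle is the same: the decision is made per request, never inferred from being "inside" the network.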
✅ Telefónica Tech and Palo Alto Networks recommend that enterprises consider enlisting the advice of a partner with expertise in SASE, connectivity, and security services to define and adopt a SASE strategy. Conclusion SASE represents a significant change in the way the network is managed and secured in an increasingly cloud-centric environment. As we have seen, it offers important advantages such as greater flexibility, resource optimization, simplification, and an improved security posture, and it presents itself as the ideal solution for companies that want to strengthen their resilience. Cyber Security How to achieve strategic management of Cyber Security investments January 15, 2024
February 20, 2024
AI & Data
Intelligent Workplace
Effective conversations with generative AI models (II)
In the previous post we learned how generative AI models, such as GPT, Copilot or Bard, represent a significant advance in natural language processing and understanding. As we saw, getting accurate and relevant answers from these models depends largely on the way we communicate with them and the prompts we use to explain our needs and guide their response, allowing them to be creative in order to generate quality content. Having some notion of their fundamentals helps to simplify the formulation of questions and allows us to explore and learn about the capabilities of these advanced models. It also helps us better understand how they work: what they can do, what they cannot do, and what it is better not to tell them. Productive conversations with Copilot and ChatGPT Microsoft Copilot (Bing Chat) Asking the right questions influences the quality of the response. Microsoft proposes seven recommendations for building effective instructions when conversing with Copilot, including: Explain the desired outcome and the type and format of the expected response, including examples. Provide the model with context and relevant details, and explain the goal and purpose of the request. Indicate a tone for the text and the intended audience of the response, and request further information or clarification whenever necessary. Maintain a polite and curious interaction to encourage a dynamic and interactive conversation with the model, providing constructive feedback and acknowledging helpful responses. Providing context is key to getting accurate responses. OpenAI's ChatGPT OpenAI proposes six tactics for building better prompts for ChatGPT, including: It recommends placing instructions at the beginning of the prompt and using markers such as ### or """ to distinguish instructions from context. This makes it easier for the model to accurately understand the requested task. 
>> ###Summarize the following text in key points: "Artificial intelligence is transforming…"| It also emphasizes the need to be specific, descriptive, and detailed in the instruction, covering context, expected result, length, format, and style. This helps the model to generate responses that are more aligned with the user's needs. >> Write a 500-word article on…| Provide specific examples of the desired output format. >> Write a poem following this format: Title: The Star and the Moon Stanzas: 3 Verses per stanza: 4 Rhyme: ABAB Theme: The nocturnal nature| It is recommended to start with the Zero-Shot approach, which does not use previous examples, >> Generate a list of tips for improving personal productivity| and, if necessary, move towards the Few-Shot approach, with examples to guide the model >> Classify the sentiment of these reviews: ###Example 1: Review: "I loved this restaurant. The food was delicious and the service exceptional." Sentiment: Positive ###Example 2: Review: "Disappointing experience. I expected more variety on the menu and better taste." Sentiment: Negative Review to classify: "Prices are a bit high, but the quality of the food justifies it. I would go back again." Sentiment: [Your answer] | It advises avoiding ambiguous descriptions and providing clear, concise instructions to increase the accuracy of responses. >> Write an executive summary for a market report. The summary should be exactly 150 words, focusing on the report's main findings and recommendations| It also notes that it is more effective to provide positive and specific instructions on what is expected than to indicate what to avoid. Less effective ❌: >> When writing the article for the blog, don't use technical jargon or complicated terms. Don't make the article too long or include opinions | More effective ✅: >> Please write an 800-word article for our blog about current trends in technology. 
Use clear and accessible language for a general audience | Conclusion All recommendations underline the importance of being clear and concise with instructions. It is also important to structure prompts appropriately to improve interaction with generative AI models. Properly constructing instructions and prompts effectively improves the quality and accuracy of responses. Applying these rules and recommendations helps to simplify question formulation, explore and learn about the capabilities of these advanced models, and better understand both what they can and cannot do. Practicing and mastering prompting not only enriches the user-AI interaction, but also expands its usefulness, opening the door to a more creative and efficient use of generative AI models. ◾ CONTINUING THIS SERIES AI & Data Future Workplace Effective conversations with generative AI models (I) February 1, 2024
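The zero-shot and few-shot tactics described in this post can be sketched as a small prompt-building helper. This is a generic illustration in plain Python; the function name, delimiter choices, and example texts are our own, not part of any OpenAI API:

```python
def build_prompt(instruction, examples=None, query=""):
    """Assemble a prompt using ### markers to separate the instruction,
    labeled examples (few-shot), and the input to classify."""
    parts = ["###Instruction###\n" + instruction]
    for i, (review, sentiment) in enumerate(examples or [], start=1):
        parts.append(f'###Example {i}###\nReview: "{review}"\nSentiment: {sentiment}')
    if query:
        parts.append(f'Review to classify: "{query}"\nSentiment:')
    return "\n\n".join(parts)

# Zero-shot: instruction only. Few-shot: add labeled examples to guide the model.
prompt = build_prompt(
    "Classify the sentiment of these reviews as Positive or Negative.",
    examples=[("I loved this restaurant.", "Positive"),
              ("Disappointing experience.", "Negative")],
    query="Prices are a bit high, but the quality of the food justifies it.",
)
print(prompt)
```

The resulting string would then be sent to the model of your choice; keeping prompt assembly in one helper makes it easy to start zero-shot and add examples only when the answers need guiding.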
February 19, 2024
AI & Data
Intelligent Workplace
Effective conversations with generative AI models (I)
Generative AI models such as GPT, Copilot or Bard represent a significant advance in natural language processing and understanding. However, getting accurate and relevant answers from these models will depend a lot on the way we communicate with them, on the instructions ('prompts', requests, or commands) we use to explain what we need. Constructing the appropriate instruction is a practice known as 'prompt engineering' or 'prompting', which is nothing more than the art of communicating with a generative large language model. This is how "Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4" defines it, a study that also serves as a guide with 26 rules that help optimize the way to converse, ask, or give instructions to AI models. What is prompt engineering or prompting? Prompt engineering consists of designing queries or describing instructions that are appropriate and specific enough for AI models to provide accurate and relevant responses, while maintaining their ability to inform and be creative. The structure and specific content of these prompts, including the language used, are critical to getting the most out of AI models, both in terms of executing tasks (such as analyzing data or generating text, video, or images) and obtaining information. At the same time, the ability of these models to handle queries and return answers using natural language has simplified their adoption and use by the general public. The basic principle of prompting is to create detailed, clear, and concise instructions that guide the response of the AI model while allowing for a creative and informative response. 26 simple rules to build effective prompts In order to help users get better executions and responses, the study authors propose 26 rules that help build better prompts, simplifying and optimizing question formulation. 
In addition, they explain, formulating the right prompts also helps users better understand how an AI model works: what it can and cannot do, its limitations, and why it gives the answers it does. The rules proposed in the study are simple, designed to streamline the process of formulating queries and prompts, and fall into five categories: Structure and clarity: use affirmative directives such as "do" and avoid negative language such as "don't do." Also mention the target audience: indicating whether the query is for a student or a subject matter expert will significantly change the model's response. ✅ Formatting the instruction When formatting an instruction, start with "###Instruction###", followed by "###Example###" or "###Question###" if relevant. Then introduce your content, if applicable. Use one or more line breaks to separate instructions, examples, questions, context, and input data. Specificity and information: Adding phrases such as "Ensure that your answer is unbiased and does not rely on stereotypes" can guide the model to predict or generate more neutral, factual, or data-driven responses, while including examples in the instructions helps the model understand what format or type of response you expect. Interact with the user: prompt the model to ask you questions until it has the additional or more precise details it needs to provide the most appropriate response. ✅ For example, "From now on, I would like you to ask me questions...". Language content and style: The study suggests that it is not necessary to use formalities or polite formulas such as "please" or "thank you". Being direct and concise can produce more focused responses. ✅ Of course, you can use polite formulas with AI models. Sometimes you won't be able to avoid it. 
Not so much for the effect they have on the AI model (which can be influential, according to Microsoft) as for the effect on yourself of having a cordial conversation that enhances the experience and encourages learning. You can also ask the model to mimic the language used by the user, either by providing a sample or through the instruction itself. ✅ You can ask it to review and correct a text to clarify or improve grammar or vocabulary while ensuring that it does not change the naturalness of the text or the writing style. Complex tasks: For technical or multi-step queries, it may be more effective to break the task into simpler, sequential prompts (using the "think step by step" formula) or to have an incremental conversation that goes deeper into the topic. You can then ask the model to generate a single response with all the information contained in that conversation. ✅ Extended prompts also work better if they are structured, e.g. [Assign a role] [Context] [Objective] [Request] These recommendations are general and may vary depending on the model, the domain, and the objective of the task, so it is best to experiment and adapt the instructions and prompts to the tools used and the needs and preferences of each user. ◾ CONTINUING THIS SERIES AI & Data Future Workplace Effective conversations with generative AI models (II) February 19, 2024 * * * Telefónica Tech AI of Things How to create realistic images using Generative AI November 23, 2023
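The structured pattern for extended prompts ([Assign a role] [Context] [Objective] [Request]) can be sketched as a tiny helper. The section labels follow that pattern; the helper itself and its example values are purely illustrative:

```python
def structured_prompt(role, context, objective, request):
    """Join the four sections with blank lines so the model can clearly
    separate role, context, objective, and request."""
    return "\n\n".join([
        f"You are {role}.",
        f"Context: {context}",
        f"Objective: {objective}",
        f"Request: {request}",
    ])

print(structured_prompt(
    "a technical writer",
    "Our blog targets a general, non-specialist audience.",
    "Explain current technology trends clearly.",
    "Write an 800-word article using accessible language.",
))
```

Filling the four slots forces you to state the audience, the goal, and the concrete task, which is exactly what the study's rules on structure and clarity recommend.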
February 1, 2024
Cyber Security
Security and privacy: difference and impact on reputation
📅 The 28th of January is International Privacy Day. Its purpose is to promote good privacy practices and raise awareness about the value of protecting privacy and personal data. In the digital realm, privacy and security are often used interchangeably. However, although both terms are related and equally important, each refers to a different aspect of protecting personal or sensitive information. Privacy shields you from unwanted prying eyes, while security safeguards you against unauthorized access. Maintaining your personal information as both private and secure is key to protecting your online reputation. Understanding the distinctions between privacy and security is essential to safeguarding your information, whether it's personal data or not, as well as your online reputation. Privacy and Security: What They Are and How They Differ What Is Privacy? Privacy revolves around personal information control and management. It encompasses the right to decide what aspects of your life and identity are shared —what, how, and with whom— and what you choose to keep hidden from the public or specific individuals. This control over your information includes keeping personal data, images, photographs, online activity, or messages and communications private to reduce the risks of exposure and its negative consequences. Privacy is pivotal to reputation management. Key features of online privacy include: Data privacy: Protecting sensitive personal information, such as banking data, medical reports, or personal communications, from unauthorized or malicious access. Privacy protection: Employing measures to safeguard online navigation and activity, ensuring that the digital life you wish to keep private remains so. Privacy settings: Understanding privacy settings on social media profiles, email accounts, and other digital platforms limits the amount of personal information visible to the public, including friends and contacts. What Is Security? 
Security refers to the measures taken to protect information, digital assets, and systems or devices from potential threats. These threats include cyberattacks, malicious software, and data breaches. It is a broader concept than privacy and includes proactive measures to protect data, both ordinary and personal or sensitive. Security is therefore fundamental to protecting privacy (without security, there is no privacy), and both are necessary to safeguard one's reputation. ✅ Example: Unauthorized access to someone's mobile photos or messages, made possible because their publicly known date of birth served as the PIN, can damage their reputation if those contents are disclosed. Some key components of security include: Implementing robust cybersecurity measures to defend against cyber threats. Ensuring safe internet usage through protocols and security tools. Protecting the network and data from unauthorized access and security breaches. Safeguarding data through encryption and secure storage. Staying informed about the latest threats and security measures. Applying effective security practices, including secure browsing and two-factor authentication (2FA), also on social media. To protect privacy and security, consider these recommendations: Use strong passwords and two-factor authentication (2FA), as well as password managers such as KeePass. Understand the privacy settings of accounts on services and social networks and learn how to configure them. Review and update app and website permissions. Exercise caution when sharing personal information. Learn about phishing and cyber scam risks. Regularly update the software on your computer, mobile devices, and security tools such as antivirus programs. Limit data sharing, even with acquaintances or trusted apps and websites. Use a VPN and encrypted messaging apps. 
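To illustrate how two-factor authentication (2FA) codes work under the hood, here is a minimal time-based one-time password (TOTP) generator following RFC 6238, using only the Python standard library. This is an educational sketch; for real use, rely on an audited authenticator app or library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a short-lived numeric code from a shared secret and the
    current time, as authenticator apps do (RFC 6238, HMAC-SHA1)."""
    counter = struct.pack(">Q", timestamp // step)   # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59 yields 94287082;
# the usual 6-digit code is its last six digits.
print(totp(b"12345678901234567890", 59))  # 287082
```

Because the code depends on both a secret and the clock, a stolen password alone is not enough to log in, which is why 2FA appears in every recommendation list above.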
Cyber Security Cyber Security is an essential skill in today's digital era, and Cisco offers a free course in it July 17, 2023 How to Protect and Manage Your Online Reputation Online reputation is how others perceive you on the internet, and it largely defines your digital identity. Moreover, in today's world, it also has real-world implications, impacting how you are perceived personally, socially, or professionally. ✅ Examples: A company may consider social media content when making hiring decisions during a selection process. A reputation crisis on the internet, caused by a social media post, can affect a company's financial activity and results. Privacy and security are two fundamental concepts for safeguarding your data and online reputation. Managing your reputation encompasses monitoring opinions about products, services, or publications, as well as overseeing your presence and reputation on the internet and social media. It is an ongoing process that requires time and effort to control your digital footprint, the trail of activity and data left behind when using the internet. Here are some tactics to protect your online reputation: Before posting anything online, including on social media, always consider how it might be perceived by others and how it will affect your reputation. Periodically search for yourself to stay informed about what is said about you online. In addition, monitor your profiles on social media to detect any inappropriate or negative content. If you find something inappropriate, take steps to remove it or request its removal from the respective platform. Be respectful of others, even when interacting with bots. When engaging online, whether through blog comments, forums, or social media, ensure your communication is respectful and constructive. Avoid personal attacks and heated arguments. Remember that everything said online can be tracked and can affect your reputation. 
Further protect your personal information by avoiding sharing sensitive information such as your address, phone number, and the companies or banks you are a customer of. Use strong passwords and do not share them, even with acquaintances. Additionally, exercise caution with phishing or malware-laden emails that attempt to obtain personal information. Always verify the authenticity of websites and sources before providing confidential information. Published 09.19.2023 | Updated 01.24.2024 Cyber Security AI of Things Things you shouldn't share with ChatGPT July 4, 2023
January 24, 2024
AI & Data
Raspberry Pi for Edge AI: Artificial Intelligence at the edge for everyone
Raspberry Pi is a popular computer widely used among developers, students, and fans of computing, robotics, and gadgetry. Among its virtues are its low cost (a basic kit costs about 20 euros), its versatility to run a wide variety of applications and solutions, and a huge community of users who share projects, tutorials, and all kinds of content to get the most out of it.

The most affordable basic configuration of Raspberry Pi is not a powerful computer, although its capacity will depend on the model chosen, its configuration, peripherals and added components, and the modifications made by the user. Even so, any of its models, including the cheapest ones, can run applications with multiple purposes, such as controlling home automation installations or robotic systems, or serving as a web server.

Raspberry Pi 1 Model A+. Photo: RASPBERRY PI

Regardless of their power, however, Raspberry Pi boards are not as efficient as they could be when it comes to running Artificial Intelligence algorithms and models. That is something Sony wants to solve.

Sony integrates its Edge AI Aitrios platform on Raspberry Pi boards

In order to change this, Sony has announced a "strategic investment" to help "drive the development of Edge AI solutions" by integrating its Aitrios platform into Raspberry Pi boards. In this way, Sony enables AI capabilities on this popular computer for everyone. This integration will allow Raspberry Pi users to use the Aitrios platform to run their own Artificial Intelligence solutions, with customized developments tailored to their needs, whether domestic or industrial. Aitrios provides Raspberry Pi with increased capacity to process data and AI models in real time.
By incorporating this Sony technology, Raspberry Pi devices equipped with Aitrios are more efficient at processing and analyzing data locally, in the same place where that data is generated and where it is needed: at the edge of the network, without sending it to a data center or a Cloud platform.

Raspberry Pi boards are increasingly used in industry due to their low cost, versatility, and ability to run a wide variety of applications and solutions. Photo: RASPBERRY PI.

As we saw in a previous article, processing data and running AI models at the edge (Edge AI) is especially useful when an immediate response is required without relying on an internet connection. For example: for the control of drones and self-piloted robots, or for image or voice recognition; or when an IoT project is deployed in an area without network coverage or in an isolated environment, without connectivity.

Sony Aitrios platform for IoT devices

Sony's Aitrios platform enables efficient data processing and execution of AI algorithms and models on IoT devices. Aitrios is a complete hardware and software solution, scalable and flexible, Sony explains. It can be adapted to a wide variety of IoT devices and applications, is available in different architectures, such as SoC processors or peripheral modules, and can be controlled with different operating systems. These features should ease its adoption by the Raspberry Pi user community by simplifying the development and implementation of new projects. In the process, Sony gains for Aitrios a potentially huge user and developer base.

Local machine vision to monitor retailer inventory while protecting customer privacy is an Edge AI use case. Photo: SONY.

On the hardware side, Aitrios uses an ASIC (Application-Specific Integrated Circuit) type processor. As an advantage over CPUs or general-purpose processors, ASIC processors are especially efficient at performing the task for which they are designed.
In this case, the Aitrios processor is specifically designed to run and train ultra-low-latency machine vision and machine learning models, improving the efficiency with which Raspberry Pi computers process and analyze data locally.

Cloud AI of Things Cloud AI vs. Edge AI: know their differences and choose the right approach for your AI project June 14, 2023

Featured photo: Karminski / Unsplash. Published: 05.8.2023 | Updated: 01.4.2024
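The core Edge AI idea described above — analyze data where it is produced and send only compact results upstream — can be sketched in a few lines of Python. This is an illustrative sketch, not Aitrios code: the sensor readings and the alert threshold are invented for the example.

```python
import statistics

# Simulated raw readings from a local sensor attached to an edge device
# (for instance, a temperature probe on a Raspberry Pi). Values are made up.
readings = [21.1, 21.3, 21.2, 35.8, 21.4, 21.2, 36.1, 21.3]

ALERT_THRESHOLD = 30.0  # hypothetical threshold, chosen for illustration


def process_at_edge(samples, threshold):
    """Analyze data locally: return only a compact summary and an alert
    count, instead of streaming every raw sample to a cloud platform."""
    alerts = [s for s in samples if s > threshold]
    return {
        "count": len(samples),
        "mean": round(statistics.mean(samples), 2),
        "alerts": len(alerts),
    }


print(process_at_edge(readings, ALERT_THRESHOLD))
```

Only the small summary dictionary would cross the network; the raw samples never leave the device, which is what reduces latency, bandwidth use, and exposure of data in transit.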
January 4, 2024
Cyber Security
The Hacktivist, an online documentary about Cyber Security pioneer Andrew Huang
Andrew 'Bunnie' Huang unintentionally got his start in the world of cyber security in 2003, after unlocking Microsoft's Xbox. He started just for fun, to make harmless modifications such as changing the color of the game console's power indicator. Later on, as Huang got deeper and deeper into the guts of what was then Microsoft's first video game console, he began to discover more things, including vulnerabilities. He shared what he learned in the book 'Hacking the Xbox: An Introduction to Reverse Engineering'. Huang proved the importance of understanding the vulnerabilities in systems and computer programs, and how malicious actors can discover and exploit them to their own advantage.

'The Hacktivist' is a documentary about Andrew Huang, aka Bunnie

Singularity University's short documentary The Hacktivist, available on YouTube (35 minutes, in English), explores the history, contributions, and philosophy of Andrew "Bunnie" Huang, who became a highly influential hardware hacker for his defense of security in technology and a leading figure in the field of Cyber Security. The documentary begins precisely with that famous confrontation with Microsoft, an episode that determined Huang's lifelong commitment to security, transparency, and ethics in technology.

We have to recognize that technology is dual-use and that bad people can do bad things with it

Over almost two decades, Huang has been challenging conventional industry norms by exposing vulnerabilities in widely used electronic devices. His focus on identifying and documenting vulnerabilities, which is commonplace today, has been essential to improving security in technology, as the documentary relates.
Another notable work by Huang was related to the security of SD memory cards (including microSD, MMC, smartphone memories, and USB drives), in which he identified vulnerabilities in 2013 that could be fixed to make it difficult for malicious actors to carry out man-in-the-middle (MITM) attacks compromising both the data stored in those memories and the systems and devices that use them.

🕵️♀️ Google's Project Zero research team, created in 2014, identifies and discloses vulnerabilities so manufacturers can fix them before malicious actors exploit them.

On right to repair and Cyber Security

The documentary also echoes Huang's advocacy for consumers' "right to repair", an aspect that also has security implications. Huang argues that users should be able to repair and modify their electronic devices instead of being trapped by programmed obsolescence and 'official' or authorized technical services.

"If you can't hack something that belongs to you, you don't own it." —Bunnie.

Right to repair makes it easier for users to ensure that open and/or technically accessible electronic devices can receive updates and patches. In this way, they remain protected against new cyber threats without relying on scheduled updates, or on updates that arrive only once those vulnerabilities have already been exploited for malicious purposes, or even repeatedly exploited due to the lack of updates from the manufacturer.

🏴☠️ While WannaCry affected up-to-date systems as well, a lack of security updates on many Windows computers played a considerable role in the spread of the attack.

The right to repair also encourages a more sustainable approach to technology: by exercising this right, users can extend the useful life of their devices and avoid the need to replace them, which also has a positive impact on the environment.
“Precursor”: entertainment and learning in one device

The documentary also explores Huang's "Precursor" project as a representation of his commitment to security, learning, open technology, and ethics in the technology field.

Precursor

This open-source platform aspired, when announced in 2020, to become a reference for learning Cyber Security on mobile devices. It was conceived as a handheld game console that combined cutting-edge hardware features with a focus on programming and electronics exploration, as well as the ability to play retro video games. The device featured an open and 'hackable' design: users could have full access to the hardware and software, allowing them to experiment with different configurations and operating systems. That is very valuable for anyone who wants to learn about Cyber Security in a practical way.

"Precursor" has an approach similar to a pen-test lab environment, where security professionals can explore vulnerabilities in a secure and controlled setting to develop more robust solutions. Precursor also aims to raise awareness and teach about the importance of mobile device security, allowing users to explore and understand potential mobile device security threats and teaching them how to mitigate them.

Precursor at Crowd Supply.

Although it is currently not widely available, Precursor's underlying philosophy is transparency and ethics. Andrew Huang strongly believes that users should have full control over their devices and systems to explore and understand the technology, thus creating a stronger and more resilient Cyber Security culture.

Andrew Huang's contributions to Cyber Security

To a large extent, Andrew Huang's work in technology and his contributions in the field of cyber security have driven the debate on the importance of understanding and addressing vulnerabilities in technology, as well as on user rights, security, and sustainability.
With this in mind, Huang founded Bunnie Studios, the hardware consulting firm that he currently heads and from which he helps other companies design electronic devices in an ethical and secure manner. This is essential at a time when security and privacy are priority issues that require adopting best security practices as an integral part of any development, digitalization process, or implementation of technology.

Via Fran Ramírez.

Cyber Security The trillion dollar mistake May 29, 2024

Published: 10.3.2023 | Updated: 01.2.2024
January 2, 2024
Telefónica Tech
Cyber Security
Cyber Security: advances, trends, and threats
As we bid farewell to 2023, it's a fitting time to reflect on the importance of Cyber Security and its role in a context of growing digitization of systems and processes. During this year, Cyber Security has been one of the main priorities and trends, and it will continue to be so in 2024 due to the impact of cyberattacks and security breaches on businesses.

In 2023 we have witnessed an evolution in Cyber Security solutions. They are no longer offered as isolated services but integrated into business processes and adapted to different Cloud environments and ‘as a Service’ models. There have also been significant advancements at the intersection of AI and Cyber Security: new AI tools and techniques have been developed for cyberattack detection and prevention, contributing to improved security for businesses and organizations.

The year 2023 has highlighted the importance of cyber resilience, the ability to recover quickly and ensure business continuity in the face of incidents. It has also highlighted the value of cyber intelligence services. Furthermore, in the field of OT (Operational Technology) security, its fundamental role in protecting production processes in sectors such as Industry 4.0 has been recognized. OT Cyber Security has become a key concept to ensure production integrity and continuity.

Looking towards 2024, it is clear that training and the development of technological skills in the field of Cyber Security will be more imperative than ever. The growing threat of cyberattacks requires trained professionals to protect and defend businesses' systems and data. Investing in education and training in Cyber Security is essential to ensure businesses' security in an increasingly digital and complex environment.

In this compilation post, we gather some of the most outstanding articles on advancements and trends in Cyber Security during the year 2023. We also shed light on cyber intelligence, cyber resilience, and OT protection.
With this collection of content, we can better understand the necessity and advantages offered by professional Cyber Security services.

Cyber Security Consequences of a cyber-attack in industrial environments January 17, 2023
Cyber Security Artificial Intelligence, ChatGPT, and Cyber Security February 15, 2023
Cyber Security Connectivity & IoT AI & Data Artificial Intelligence applied to industrial Cyber Security (OT) March 25, 2024
Cyber Security Pay When You Get Infected by Ransomware? Many Shades of Grey May 9, 2023
Cyber Security Future Workplace The importance of access control: is your company protected? May 29, 2023
Cyber Security AI & Data Cyber Security Evolution: AI as a Tool for Attack and Defence June 28, 2023
Cyber Security AI of Things Things you shouldn't share with ChatGPT July 4, 2023
Cyber Security Cyber Security is an essential skill in today's digital era, and Cisco offers a free course in it July 17, 2023
Cyber Security Cloud Key to business digitalisation: the convergence of connectivity and Cyber Security July 25, 2023
Cyber Security AI of Things Cyber Security challenges and solutions for IoT medical devices September 7, 2023
Cyber Security AI of Things Artificial Intelligence risks: injecting prompts into chatbots September 26, 2023
Cyber Security Cyber Security strategies to protect the financial sector October 10, 2023
Cyber Security AI of Things Cyber Security in the age of AI: why phishing attacks are now more dangerous October 9, 2023
Cyber Security AI of Things Advances in deepfakes: a threat to business October 16, 2023
Cyber Security Pentesting and Security Assessment: two sides of the same coin in Cyber Security October 26, 2023
Cyber Security How DRP (Digital Risk Protection) solutions protect your business from cyberthreats November 6, 2023
Cyber Security OSINT: an underused weapon for journalism to combat fake news November 8, 2023
Cyber Security Third-Party Risk: The Hidden Threat to Your Business November 13, 2023
Telefónica Tech Cyber Security AI & Data Report: on the intersection of AI and Cyber Security December 7, 2023
December 28, 2023
Telefónica Tech
AI & Data
AI: 13 essential posts to wrap up a key year for its development
The year 2023 will be remembered as a key year in AI development, with significant advances in machine learning, content generation, and process automation. It is foreseeable that this trend will continue in 2024, and may even accelerate beyond what we can imagine. These technologies are not only changing industries, from healthcare to entertainment and manufacturing, but also opening up new opportunities to creatively and effectively solve complex problems.

◾ As indicators, 2023 will end with a global AI investment of more than 150 billion euros, a growth of 27%, according to IDC data collected by CIO. Business Insider says AI's contribution to global GDP could reach 15 trillion euros by 2030.

AI integration into everyday life is one of this year's most notable aspects: we have witnessed how it improves user interaction with technology and the environment, facilitating laborious and tedious tasks. Furthermore, it has become an indispensable tool for analyzing large volumes of data, for creating content of all kinds, and for obtaining valuable information that allows for better decision-making in business, science, and government. The year 2023 has witnessed significant advancements in AI, redefining our capabilities and how we interact with the digital world.

AI growth also poses ethical and social challenges. In 2023, topics related to Cyber Security, privacy, and data usage have been discussed, making AI development more conscious and responsible. Moreover, it is essential to leverage this wave of innovation to promote inclusion and diversity in technology. This will ensure that AI solutions are fair, accessible, and beneficial for everyone.

Looking toward 2024, it is evident that training and developing technological skills in AI will be more necessary than ever.
To keep pace with these rapid advances, it is essential that we invest in training and education around AI, individually, professionally, and academically, with the aim of closing any existing skill gaps and fostering a diverse and inclusive workforce capable of addressing challenges and opportunities with creativity, innovation, and responsibility.

Below, we have compiled a selection of content that explores some of the technologies and applications of AI during this period, to review and recognize the progress made through 2023 and as a preview of what this technology will bring in 2024.

AI of Things Creative AI in business: how to adapt ChatGPT (and similar) to my customer's needs August 14, 2024
AI of Things Ghosts in the machine: does Artificial Intelligence suffer from hallucinations? February 20, 2023
AI of Things The rise of AI in education: How is it transforming the way we learn? November 27, 2023
Telefónica Tech AI of Things How to create realistic images using Generative AI November 23, 2023
AI of Things Can Artificial Intelligence understand emotions? May 23, 2023
Cyber Security AI of Things Generative AI as part of business strategy and leadership September 20, 2023
AI & Data Advanced AI can change our relationship with technology. How can we mitigate this? October 30, 2023
AI of Things Artificial Intelligence in hospital emergency rooms to improve patient care October 23, 2023
AI & Data Responsibility from design applied to AI July 10, 2023
AI of Things Data Governance: a great ally to put limits to Artificial Intelligence October 31, 2023
AI of Things These free Google courses will get you started with generative AI June 8, 2023
AI of Things How to start programming in Artificial Intelligence: languages, tools and recommendations January 18, 2023
AI of Things A concept popularized by a former Google CEO is more relevant today than ever January 23, 2023
December 27, 2023
Telefónica Tech
Cyber Security
Lady Tech: María Loza ensures privacy and security
Telefónica Tech's #LadyHacker professionals prove every day with their work that progress has no gender.

Who are you and what do you do?

I am María Loza Corera, Lead Legal Advisor at Govertis, part of Telefónica Tech, and responsible for the Emerging Technologies Competency Center. We advise our customers on privacy and security issues, helping them to design and implement projects that comply with all regulatory requirements, especially in the new regulatory frameworks that are emerging, such as the proposed Artificial Intelligence regulation.

How would you describe your career path so far and what are the skills you use at Govertis, part of Telefónica Tech?

I started out as a candidate for the judiciary, a path I left after two years. Thanks to a European research grant, I entered fully into this specialty and, from then on, I have been linked to this world of technology, both from a practical and an academic point of view. As for the skills I use: always grounded in rigor, I try to find practical solutions to problems, using creativity, critical thinking, and empathy, and promoting teamwork. We all have unique qualities, and when they come together in a team, incredible things happen. A big shout out to my teammates at Govertis, the best team ever.

What do you see as the specific challenges women face in IT and how have you overcome those challenges?

Possibly the same as in any other field (equal opportunities, salary, etc.), because women no longer give up their careers, and this change has been hard and difficult to bring about in society and in all sectors, but it is evolving in a very positive way. Regarding how I have overcome these challenges: by working.

She loves photography as one of her great hobbies.

How would you describe the evolution of opportunities for women in technology?

I think technology has revolutionized and is currently revolutionizing the way we live and also the way we work. Every day new opportunities and careers are emerging around technology.
I remember more than twenty years ago, when technology law was not even considered a specialty and there were very few of us dedicated to this field. It is now an essential discipline of great importance in the world in which we live and, above all, the world we are heading towards, since the balance between technological development and respect for our fundamental rights must be guaranteed.

What do you consider to be the greatest achievement or project you have worked on so far and how did your perspective as a woman influence its development?

I hope my greatest achievement is yet to come 😊, but basically, for me, achievements are all those projects in which you contribute from their design to their materialization and implementation, or those complex cases in which you achieve a satisfactory and viable solution for the customer, or the dismissal of a sanctioning procedure or lawsuit. I don't know how my perspective as a woman can influence a project. I just do my best.

How do you balance your professional life with your personal responsibilities, and what advice do you have for women in similar situations?

I admit that this is something I am trying to get better at every day, so I am not the best person to give advice on this point. I try to dedicate quality time to my loved ones, time to take care of myself, and to find the balance between everything.

What would you say to a 20-year-old María?

Rely on your intuition and be true to your principles.
* * *

Telefónica Tech #LadyHacker Tamires Abujamra is innovating at Telefónica Tech Brazil September 14, 2023
Telefónica Tech Meet #LadyHacker Karla Parra, Cybersecurity expert at Telefónica Tech June 20, 2023
Telefónica Tech Meet #LadyHacker Jess Woods, Cloud expert at Telefónica Tech March 8, 2023
Telefónica Tech "To be a hacker in life is to be a passionate, talented person who manages to influence the transformation of society", Carmen Alonso July 28, 2022
Telefónica Tech "We are moving towards genderless professions", María Martínez August 8, 2022
December 12, 2023
Telefónica Tech
Cyber Security
AI & Data
Report: on the intersection of AI and Cyber Security
In the last year, the convergence of Artificial Intelligence (AI) and Cyber Security has captured the attention of professionals, experts, and enthusiasts in technology and Cyber Security. This is precisely the central theme of the BP/30 report on the Approach to Artificial Intelligence and Cybersecurity recently published by the National Cryptologic Center (CCN-CERT), which provides an overview of what the authors call Artificial Intelligence Cyber Security (AICS). This concept focuses on integrating AI into Cyber Security and OT Cyber Security.

◾ CCN is Spain's National Cryptologic Center, the entity responsible for information and communication technologies security within the scope of the General State Administration. Operating under the direction and supervision of the CNI (National Intelligence Center), its objective is to guarantee Cyber Security in Spain.

◾ A CERT (Computer Emergency Response Team) is an organization responsible for coordinating and responding to computer security incidents at a national or regional level.

The CCN-CERT report begins with an introduction defining both AI and Cyber Security and tracing a brief history of their relationship in security. These introductions help readers understand how AI has revolutionized Cyber Security, “two disciplines with origins clearly separated in time that have seen their competencies converge into what currently constitutes a new practical activity.” Although machine learning techniques for intrusion detection emerged in the 1990s, AI began to play a significant role in Cyber Security in the 2010s.

Fundamentals of AI in Cyber Security

A key section of the report focuses on AI fundamentals. Here, concepts such as Machine Learning (ML), Deep Learning (DL), classification algorithms, and generative AI are explained. These concepts help in understanding how AI can be applied in Cyber Security.

✅ Machine learning allows machines to learn from historical data and make decisions through algorithms.
This translates into an increased ability to identify cyber threats and suspicious or malicious behaviors. Given the massive amount of data organizations handle daily, automation through AI has become an indispensable component to improve threat detection efficiency and accuracy. Algorithms like Support Vector Machines (SVM), decision trees, and neural networks are used to identify threat patterns and malicious behaviors in real time.

Applications of AI in Cyber Security

The report highlights several applications of AI in Cyber Security, including:

Detecting attack patterns and suspicious behaviors, which can help identify and respond in real time to cyberattacks.
Automating cyberattack response, which can accelerate recovery and reduce impact.
Predicting cyberattacks, which can help organizations better prepare to face them.
Identifying and authenticating users, which improves system and network security.
Automated vulnerability analysis and pentesting, which can help organizations identify and remediate security vulnerabilities more efficiently.
Defending against automated adversaries, who use AI to launch cyberattacks.

Challenges and limitations of AI in Cyber Security

Despite the benefits AI brings to Cyber Security, the report also addresses challenges and limitations surrounding this technology, including:

Adversarial attacks against AI models used in Cyber Security, intended to manipulate or disable them.
Over-reliance on automated solutions, which can lead to neglect of traditional security measures or create a false sense of security.
False positives and false negatives, which can lead to wasted resources or exposure to threats.
Privacy and ethical issues, such as personal data collection and use.

✅ Attackers may try to deceive AI systems by introducing malicious data to avoid detection or penetrate a system. This may include injecting prompts into chatbots.
This highlights the need to develop robust and secure AI systems to counter such threats.

✅ Cybercriminals can use generative AI to orchestrate very sophisticated phishing attacks, with more convincing, contextual, and personalized messages. It also facilitates hyperrealistic deepfakes, a method of identity impersonation with fraudulent or malicious intent.

CCN-CERT highlights emerging trends in AI and their potential impact on industry and society, which makes it imperative to anticipate how AI will continue to impact Cyber Security. The report also addresses several relevant topics in the field of AI and Cyber Security, including:

Data security: The risk of security failures that expose confidential information is a constant concern. AI plays a key role in identifying and mitigating these risks.
Profiling: AI's ability to profile individuals based on their online behavior raises ethical and privacy issues.
Transparency and decision making: The opacity of AI models can generate mistrust and make it difficult to verify their decisions.
Bias and fairness: AI can be affected by biases inherent in the data used to train it, which could lead to discriminatory decisions.
Surveillance and oversight: While AI can be a powerful tool in Cyber Security, it also raises concerns about potential abuse in user surveillance.
Accountability and responsibility: Dealing with failures or errors in AI-based systems is a complex challenge.
Regulations and ethical guidelines: The need for clear regulatory frameworks becomes increasingly evident as AI integrates more deeply into Cyber Security.

🔵 The CCN-CERT report BP/30 Best Practices Report on Approach to Artificial Intelligence and Cybersecurity can be freely downloaded in English. It is a valuable source of knowledge for those interested in understanding how AI is transforming Cyber Security.
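To make the idea of algorithmic threat classification concrete, here is a deliberately tiny sketch in Python. It is not from the CCN-CERT report: the "events" (failed logins and requests per minute) are synthetic, and a simple nearest-centroid rule stands in for the SVMs, decision trees, and neural networks mentioned above, which work on the same principle of separating benign from malicious patterns in feature space.

```python
import math

# Toy training data: (failed_logins_per_min, requests_per_min) -> label.
# All values are invented for illustration only.
events = [
    ((0, 12), "benign"),
    ((1, 20), "benign"),
    ((0, 8), "benign"),
    ((9, 150), "suspicious"),
    ((12, 300), "suspicious"),
    ((8, 90), "suspicious"),
]


def centroid(points):
    """Mean point of a set of 2-D feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))


# One centroid per class, computed from the labeled examples.
centroids = {
    label: centroid([x for x, y in events if y == label])
    for label in ("benign", "suspicious")
}


def classify(event):
    """Nearest-centroid rule: label a new event by its closest class centroid."""
    return min(centroids, key=lambda lbl: math.dist(event, centroids[lbl]))


print(classify((10, 200)))  # high failure rate plus a burst of requests
```

Real systems use far richer features and models, but the workflow is the same: learn a decision boundary from historical data, then score new activity against it in real time.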
* * * Cyber Security The Hacktivist, an online documentary about Cyber Security pioneer Andrew Huang January 2, 2024 Image by Starline on Freepik.
December 7, 2023
Telefónica Tech
AI & Data
How to create realistic images using Generative AI
Generative AI models for image creation, such as Midjourney, Dall-E, or Bing/Copilot, are increasingly making it possible to produce images and photographs with ever greater realism and technical and aesthetic quality in seconds. And in this creative process, chemical photographic film plays an influential role in the result of digital images produced with generative AI.

In fact, photographic film has a significant and visible influence on today's digital photography. Many digital cameras, filters such as those in Instagram, and settings in programs such as Adobe Lightroom allow us to emulate aesthetic qualities that have their origins in conventional photographic film, including its imperfections and limitations. Over the course of a century and a half, the resulting photographic image, in technical and aesthetic terms, as well as in terms of style and character, has depended largely on the film chosen. Factors such as the film manufacturer, its purpose and technical characteristics, the manufacturing process, or the materials and chemicals used have left characteristic traits in millions of images throughout the history of photography. Traits that can be applied to images created by generative AI.

In generative AI, indicating a type of chemical film helps to simplify the process of generating realistic and more natural-looking images. It also allows you to develop a personal style in image creation, or to have more control over the technical and aesthetic result by taking advantage of the type of scene and the most recognizable characteristics of that film.

The impact of chemical film on generative AI

Generative AI uses text instructions (prompts) to create digital images. By specifying photographic film types in the prompt, it is possible to influence to a greater or lesser extent the style and aesthetics of the generated images, to narrow down the reference models, and to implicitly request a realistic image type.
A prompt that includes the Fujifilm Velvia 400 style, for example, will generate realistic images with vivid colors and sharp contrasts without the need to indicate this in the instruction, since these are characteristic features of photographs taken with that film.

Fujifilm Velvia 400 style

However, the same prompt switched to Kodak Tri-X style will produce a black and white image without being explicitly asked to, resulting in a digital image with the high contrast, fine grain, and sharpness characteristic of that film.

Kodak Tri-X film style

The generative AI can interpret the type or style of film and take it into account when generating the image, simulating its visual style and characteristics.

Simplification of image creation

Mentioning chemical film styles in the generative AI prompt thereby saves having to specify other parameters that are already implicit in the mentioned film style: colors and tones, saturation, grain, dynamic range and contrast, texture, and even the overall quality of the generated image. Although not necessary, describing these or other parameters (such as the type of light or camera angle) does help the model to interpret the type of image and respond to the request more accurately.

Kodak Professional Ektachrome E100 style

Another example would be to indicate in the prompt Kodak Professional Ektachrome E100 style, a film especially appreciated for capturing scenes that require high precision in the reproduction of bright, vivid colors, with great color accuracy and fine details even in low-light situations.

Polaroid style

Also very characteristic is the result of adding Polaroid style to the prompt: the film that, already in the 1970s, made it possible to have photographs on paper within seconds of taking them, without going through the development lab.

Kodak Portra 400 style

Kodak Portra 400 film is particularly appreciated for its versatility, as it can be used in a wide variety of lighting conditions, including low-light situations.
Additionally, it offers an exceptionally wide tonal range, allowing for images with great detail in both highlights and shadows. Kodachrome 64 film Kodachrome 64, by Kodak, was a photographic film known for its color accuracy and low sensitivity (ISO 64), well suited to bright conditions. It was manufactured between 1935 and 2009. In 1984 Steve McCurry used Kodachrome 64 film to take his iconic photo of Sharbat Gula, "the Afghan girl," which made the cover of National Geographic magazine in 1985. This image stands out for the intensity of its colors. In the context of generative AI, Kodachrome 64 would be a good choice for generating a realistic digital portrait with intense natural colors. Other photographic parameters influencing generative AI In addition to the choice of photographic film, other parameters and camera settings from conventional photography can also influence the appearance and final result of images generated with AI: ISO sensitivity: Mentioning a high ISO in the prompt may result in images with more noise or grain, similar to photographs taken in low light or at high ISO. Aperture: Including a wide aperture such as f/2 in the prompt is likely to generate images with a blurred background, which will emphasize the main subject. On the other hand, a small aperture, such as f/11, will result in greater depth of field and a flatter, more uniformly focused image. A wide aperture produces shallow depth of field. Shutter speed: Specifying a high or low shutter speed (or a numeric value) will affect motion capture. This can result in images that freeze motion or that instead 'capture' it, as in a long-exposure photograph. Changing shutter speed. White balance and contrast: These parameters also influence the appearance of the image. A warm white balance will give more yellow tones, while a cool white balance will give more blue tones.
High contrast will produce darker shadows and brighter highlights, while low contrast will soften both shadows and highlights. Saturation and HDR: High saturation will generate more vivid and intense colors, while low saturation will result in duller colors. HDR can result in more detail in highlights and shadows. Dynamic range and selective focus control. In addition to these parameters, the generative AI can also interpret and take into account other details related to the photograph, such as the type of lens (wide angle, telephoto, fisheye, macro), lighting, including the use of flash, selective focus, camera angle or distance in meters, among others. These adjustments provide more information to the model, resulting in greater control over the final result. Beyond digital photography, chemical photographic film maintains its influence in the creation of digital images through generative AI. The choice of a photographic film style in generative AI prompts influences the type and appearance of the final image, while defining key technical and aesthetic aspects, providing greater control and involvement in the final outcome of the generated images. 🎞️ All images in this post are generated using AI (Dall-e + Magnific.ai) to demonstrate possible results, although the same concept can be applied to other generative AI models. OpenAI's Dall-e can also be used through Bing and Copilot from Microsoft, as well as ChatGPT.
November 23, 2023
AI & Data
Key ingredients for today's Smart Cities
The term Smart City used to refer to "cities of the future", but Smart Cities are increasingly becoming the present. Thanks to the use of digital solutions such as IoT (Internet of Things) and Artificial Intelligence, among other technologies, it is now possible to equip cities with sensors. Capturing real-time data from the urban environment —and its infrastructures and services— allows optimising and automating processes and making better management decisions. This makes cities that embrace digitalisation increasingly healthier, more attractive and safer, with more efficient public services and a lower environmental impact. Smart cities are able to grow in a more sustainable way and improve the quality of life of their citizens. We have brought together six key components of today's Smart Cities. They include solutions that enable features today that not so long ago defined the cities of the future: Smart parking Smart mobility solutions make it easier for drivers to find parking spaces in urban areas. This improves traffic flow, reduces emissions, and promotes local commerce and citizen satisfaction. These solutions can also be applied to park-and-ride facilities, airports, hospitals, shopping and leisure centres or natural areas. Smart transport Many public transport systems are already interconnected, enabling coordination between different modes of transport and allowing them to provide real-time information on their location, time of arrival at the stop, journey time and more. Similarly, smart transport facilitates "mobility mixes", journeys that combine different modes of transport according to the context and individual needs: private car, public transport, shared bicycle or scooter… Smart street lighting City lighting accounts for between 40% and 70% of municipalities' electricity bills.
It is possible to reduce costs and emissions, improve energy efficiency and reduce light pollution by integrating IoT devices and connectivity for smart, remote management. Smart street lighting offers new business opportunities and benefits for both cities and their citizens. Mobility management Mobility control systems for vehicles and pedestrians provide timely information on the status of mobility and possible incidents affecting it. These systems calculate and even anticipate the most convenient route for public transport, drivers and pedestrians. They also automatically manage traffic lights and other signals to adapt to different contexts and unforeseen events, optimising mobility and reducing congestion. Efficient waste management Technology is already making it possible to improve the management of municipal waste, improving the quality of the service, saving costs and reducing its environmental impact. Not all districts in the same city generate the same type or amount of waste. Technology makes it possible to obtain real-time information on the status of the containers. It is possible to combine this information with a vehicle fleet management system to plan optimal collection routes depending on collection needs, areas, volumes, or types of waste. Efficient municipal waste management is essential to improve recycling, promote the circular economy and reduce environmental impact. Monitoring environmental parameters Monitoring environmental parameters makes it possible to assess the quality of the urban environment and reduce environmental, light or noise pollution. Thanks to the deployment of IoT sensors, it is possible to monitor these data in real time in order to know the situation accurately and make better decisions.
Monitoring environmental parameters also makes it possible to measure the concentration of CO2, harmful particles and other pollutants, as well as meteorological phenomena that can influence the functioning of the city. 🌆 If you want to know more about the present and future of Smart Cities, visit our new Smart Cities section of the Telefónica Tech website.
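The container fill-level logic described under efficient waste management can be sketched as follows. The threshold, container identifiers and data layout below are hypothetical, chosen only to illustrate how sensor readings feed route planning.

```python
# Illustrative sketch: selecting waste containers for a collection route
# based on real-time fill-level readings (0.0 = empty, 1.0 = full)
# reported by IoT sensors. Thresholds and IDs are invented examples.

def plan_collection(containers: dict[str, float], threshold: float = 0.75) -> list[str]:
    """Return container IDs at or above the threshold, most urgent first,
    so a fleet management system can build an optimal route from them."""
    due = {cid: level for cid, level in containers.items() if level >= threshold}
    return sorted(due, key=due.get, reverse=True)

readings = {"district-A-01": 0.92, "district-A-02": 0.40,
            "district-B-07": 0.78, "district-B-09": 0.75}
print(plan_collection(readings))
# ['district-A-01', 'district-B-07', 'district-B-09']
```

In a real deployment the output list would be fed into the fleet management system described above, which would then order the stops by geography, vehicle capacity and waste type.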
November 16, 2023
Connectivity & IoT
AI & Data
Private Mobile Networks: the key to Industry 4.0
Private Mobile Networks (PMNs) are wireless communication infrastructures similar to conventional mobile networks, but dedicated and exclusive to a specific organization or location: factories, warehouses, urban areas and even geographic areas such as an agricultural facility, as they can be implemented both indoors and outdoors to suit the specific needs of each company. Unlike public mobile networks, which are open to all customers and users of an operator, PMNs are closed and only accessible to authorized users. For this reason, being customized and closed networks, they provide companies with full control over their communications infrastructure, its operation, traffic, or performance. In addition, PMNs offer a high level of security using encryption and authentication protocols to protect data and ensure network integrity. Considering Mobile Private Networks as part of a digital transformation strategy allows companies to take full advantage of the benefits offered in terms of security, performance, and flexibility. Functioning and characteristics of Private Mobile Networks One of the main characteristics of Private Mobile Networks (PMN) is their high data transfer capacity. PMNs operate over wireless communication technologies, such as 4G LTE or 5G, providing a high quality, low latency connection. Key features include: Full enterprise control over the network, including traffic management, security, and configuration. Customization and scalability according to specific business needs, evolution, and growth. High quality connectivity in terms of coverage and capacity, with lower latency compared to public networks. Advantages and benefits of Mobile Private Networks As private networks, they improve security by reducing the risk of external attacks and data leaks. High capacity and low latency facilitate the implementation of critical applications in real time. 
The ability to customize their operation and infrastructure offers greater flexibility to meet specific needs, such as the integration of IoT devices. PMNs operate independently of public networks, ensuring business continuity even when public networks are overloaded or down. 🔒 Security is a fundamental pillar of PMNs. These networks incorporate advanced Cyber Security measures, such as data encryption, strong authentication, and network segmentation, to protect against cyber threats and ensure data integrity. PMNs can be used in all types of industries and processes, including: In manufacturing to connect machines, robots and IoT devices to automate production processes, improve efficiency and reduce costs. In logistics, distribution, and transportation to connect vehicles, warehouses, containers and IoT devices to gain full supply chain visibility, optimize routes and reduce delivery times. In healthcare and medical care to connect patients, doctors, medical devices, and information systems to provide better patient care, reduce costs and increase safety. In the energy sector to connect power plants, power grids and IoT devices in production and distribution infrastructures to provide better service, improve efficiency and reduce costs and emissions. Mobile Private Networks in Industry 4.0 In the Industry 4.0 context, PMNs are critical to enable technologies such as the Internet of Things (IoT), automation and artificial intelligence. Mobile Private Networks provide robust and secure interconnection of all components of industrial processes, leveraging next-generation technologies to optimize processes, reduce costs and maintain companies' competitiveness. These networks, for example, allow the implementation of sensors and IoT (Internet of Things) devices to collect real-time data on machine performance, product quality and energy consumption, among others.
This data can be analyzed and used to make informed decisions and improve process efficiency. In addition, PMNs allow the implementation of technologies such as augmented reality and virtual reality, which can be used to train workers, perform remote maintenance, or simulate processes prior to their implementation. Case studies: Gestamp and Navantia ○ Gestamp, a leading company in the automotive sector, implemented a PMN PoC to automate its manufacturing processes. The use of PMN enabled more efficient communication between its machines and control systems, reducing production times and increasing the quality of the final product. ○ Navantia, a benchmark in the shipbuilding industry, has integrated PMNs to improve its shipbuilding and maintenance processes. The implementation of these networks has facilitated better data management and more effective communication between the various components of its shipyards, significantly improving operational efficiency. Mobile Private Networks as a backup to public networks Public networks offer wide geographical coverage, but are more susceptible to congestion, security issues, and even coverage gaps. For example, an industrial facility may not have good coverage in specific areas where adequate connectivity is needed. On the other hand, Mobile Private Networks, although limited in the geographic scope of their coverage, provide greater security, reliability, and customization, including adequate coverage in precise locations of an industrial facility. Therefore, Mobile Private Networks serve as a backup for public networks in situations where the latter are unstable or overloaded. In the event of public network failures, they maintain the critical operability of business services, thus ensuring business continuity. Telefónica Tech's Mobile Private Network solutions Telefónica Tech offers a wide range of PMN solutions ranging from devices and connectivity to platforms and operations.
Our end-to-end approach meets the customized needs of each enterprise, backed by recognition from analysts such as Gartner. Our Mobile Private Networks are suitable for medium to large enterprises in any industry, optimizing processes, reducing costs and improving competitiveness through tailored configurations that include hybrid private and fully private 5G networks, according to each customer's specific needs. Hybrid private 5G networks: These networks offer greater flexibility and scalability, as they can use existing infrastructure and expand as needed. They also provide public network backup, ensuring service continuity in the event of failures or congestion in the private network. Fully private 5G networks: These offer a higher level of control and security, as they do not rely on public network infrastructure, making them well suited to organizations operating in highly regulated sectors or those requiring confidentiality and data protection. Conclusion Telefónica Tech offers a comprehensive approach to PMNs, providing not only the necessary network infrastructure, but also devices, connectivity, management platforms and data analytics, and ensures continuity of operations through its Network Operations Center (NOC). This means that companies can count on a complete and customized solution for their needs, without having to worry about network management and maintenance. In this way, Telefónica Tech's Private Mobile Network (PMN) solutions allow companies to take advantage of new-generation technologies to optimize processes, reduce costs and maintain their competitiveness.
Telefónica Tech NOC: Private 5G connectivity management The Network Operations Center (NOC) at Telefónica Tech is essential for managing private networks, particularly 5G connectivity in critical sectors such as industry, healthcare, and energy. Located in Spain and Brazil, this center provides constant support and monitoring in multiple languages, ensuring secure and efficient connections. In this way, we guarantee the continuous operation of private networks, detecting and responding to incidents before they impact users. It offers 24/7 service with quick response times and support in Spanish, English, and Portuguese. Additionally, it implements robust security measures and has continuity and disaster recovery plans in place. Telefónica Tech's NOC ensures efficient and secure operation of private networks in critical sectors, with a strong emphasis on security, quality, and efficiency.
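The "closed network" principle behind PMNs — only authorized devices are admitted — can be sketched in a deliberately simplified way. The identifiers and the admission check below are invented for illustration; this is not an actual 3GPP attach procedure.

```python
# Illustrative sketch: a PMN is closed, so only devices whose identifier
# (e.g. the SIM's IMSI) appears on the operator-managed allowlist are
# admitted. IMSIs below are invented examples.

AUTHORIZED_IMSIS = {"214070000000001", "214070000000002"}

def admit(imsi: str) -> bool:
    """Admit a device only if it is enrolled on the private network's allowlist."""
    return imsi in AUTHORIZED_IMSIS

print(admit("214070000000001"))  # True: enrolled factory device
print(admit("214079999999999"))  # False: unknown device is rejected
```

In a real network this check is enforced by the core's subscriber database together with SIM-based authentication and encryption, but the effect is the same: unlike a public network, an unenrolled device simply cannot join.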
November 14, 2023
Telefónica Tech
Cloud
How do sustainable data centers work and what are their benefits?
Data centers are essential infrastructures in the digital society. They store and process vast amounts of information, from emails and financial transactions to multimedia content, social networks, and critical business operations. However, data centers' operations entail significant energy consumption, both to keep them running and to maintain them at the appropriate temperature, below 25°C. A data center from any hyperscale provider (such as AWS, Microsoft Azure, or Google Cloud) consumes as much energy as a small city with 80,000 households. There are 7.2 million data centers worldwide, which collectively consume 1.1% of the global electricity supply, according to the World Economic Forum (WEF). Despite our growing appetite for and dependence on data, this global percentage has remained relatively stable since 2010, even as the number of internet users doubled and internet traffic grew fifteenfold. The key to this stability lies in energy efficiency and sustainable data centers. ✅ In 2018, data centers in the European Union (EU) consumed 76.8 TWh, and consumption is expected to reach 98.52 TWh by 2030, a 28% increase. This rise is significant in both absolute and relative terms: data centers represented 2.7% of the EU's electricity demand in 2018 and are projected to reach 3.21% by 2030 if current trends continue. Edge computing, driven by increased digitalization and the associated need to capture, transfer, and process more data, is also growing. Edge data centers accounted for 2% of data center energy consumption in 2018 and are expected to reach 12% by 2025. Sustainable data centers, also known as Zero-Net-Energy (ZNE) data centers, are designed to be highly energy-efficient and, more importantly, to consume as much energy as they generate or even less, making them a net source of clean energy. This is made possible through a combination of advanced technologies, sophisticated management systems, and often renewable energy sources.
ZNE data centers drive digital societies while minimizing environmental impact. "Although there is no magic solution, the next decade is expected to bring substantial advances towards data centers with a net-zero energy balance as emerging technologies are combined and integrated in innovative ways," says the World Economic Forum. The goal: minimizing data center environmental impact A ZNE data center aims to operate with net-zero energy consumption. This means that over a year, the total energy consumed by the data center is equal to the renewable energy it generates or acquires. This is a crucial approach to ensuring that digital infrastructure is sustainable. The solution lies in the rapid development of ZNE data centers, a concept that has become feasible by addressing: Heat recycling cooling systems One of the main challenges in data centers is managing the excess heat generated by servers. To address this issue, liquid cooling systems use water or dielectric refrigerants to efficiently dissipate heat. Importantly, excess heat is not wasted: it is repurposed for applications such as space heating, water heating, and industrial processes. Real-time efficiency maximization through AI Artificial intelligence (AI) plays a fundamental role in the quest for net-zero-energy data centers. AI algorithms analyze and optimize energy consumption in real time, ensuring maximum efficiency without compromising performance. ✅ In 2016, AI-driven energy management achieved up to a 40% reduction in the energy used for cooling in Google's data centers. Modular and demand-based infrastructure Net-zero-energy data centers are becoming modular and demand-based. "Cloud and Edge Computing systems allow data processing and storage to be distributed across multiple devices, systems, and even locations", WEF explains. Modularity allows for easy expansion and adaptation, enabling businesses to adjust their infrastructure according to changing demands without wasting resources.
They can deploy, expand, or relocate easily to optimize energy usage according to their evolving needs. ✅ Edge Computing enables data processing to occur closer to its source, reducing the need to transfer large amounts of data over long distances. This not only improves latency and speed but also reduces the load on central data centers, decreasing their energy consumption. Moreover, data centers' geographical distribution can more effectively harness local renewable energy sources. Innovative software and hardware Innovations in software and hardware are key to ZNE data centers. Novel computing architectures, such as System on Chip (SoC), have proven to be efficient alternatives. Additionally, "computing proportional to energy" is an optimization strategy ensuring that servers and computers consume energy in proportion to the work they perform. Renewable energy and intelligent storage One of the pillars of ZNE data centers is the adoption of renewable energy. Solar and wind energy are the most significant renewable sources, enabling data centers to draw energy from clean and sustainable supplies. However, the true innovation lies in intelligent energy storage. Advanced batteries and storage systems allow data centers to accumulate excess energy when it is available and use it when needed, ensuring continuous and sustainable operation. Conclusion The idea behind ZNE data centers is simple yet effective: data centers that consume as much energy as they generate or less, making them a net source of clean energy. This concept is increasingly close to becoming a common reality thanks to technological advancements and changes in companies' environmental stances. ZNE data centers have significant environmental benefits but can also be a competitive advantage for businesses, despite the inherent challenges, from initial costs to regulations. Ultimately, the path to ZNE data centers requires technological innovation and a shift in mindset toward sustainability.
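The yearly net-zero balance that defines a ZNE data center can be expressed as a small calculation: consumption minus renewable energy generated or acquired over the year. The figures below are hypothetical, chosen only to illustrate the definition.

```python
# Illustrative sketch of the ZNE definition: over a year, a ZNE data center
# generates or acquires at least as much renewable energy as it consumes.
# All figures are hypothetical.

def net_energy_balance(consumed_mwh: float, renewable_mwh: float) -> float:
    """Positive result: net consumer; zero or negative: net-zero or net producer."""
    return consumed_mwh - renewable_mwh

def is_zne(consumed_mwh: float, renewable_mwh: float) -> bool:
    """True if the annual balance meets the Zero-Net-Energy criterion."""
    return net_energy_balance(consumed_mwh, renewable_mwh) <= 0

print(is_zne(consumed_mwh=120_000, renewable_mwh=125_000))  # True
print(is_zne(consumed_mwh=120_000, renewable_mwh=90_000))   # False
```

The balance is assessed over a full year rather than instantaneously, which is why intelligent storage matters: surpluses accumulated in sunny or windy periods offset consumption at other times.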
November 2, 2023
AI & Data
Advanced AI can change our relationship with technology. How can we mitigate this?
OpenAI recently announced the addition of new 'see, hear and speak' capabilities to ChatGPT. This means that the model becomes a multimodal AI: it is no longer only capable of processing and generating text, but can also interact with other types of data, 'see' images, 'listen' to audio, and speak. With this evolution, ChatGPT is moving towards a more capable and versatile system, which increases its potential applications through image recognition, audio recognition, and audio and image generation using Dall-E, also recently incorporated. The best use of Artificial Intelligence is the one we make when we are aware of its potential and also of its limitations. The 'Eliza effect' and its modern manifestation This evolution of ChatGPT also has an impact on how people perceive and relate to this technology, which can provoke or intensify the Eliza effect. The Eliza effect refers to the phenomenon of attributing human capabilities to a machine, even when we know we are interacting with a computer. ◾ Example: typing 'please' in ChatGPT before a query or saying 'thank you' when Aura warns that it is going to rain would be mild cases of the Eliza effect. This phenomenon was documented following the Eliza computer program developed by Joseph Weizenbaum at MIT in 1966. Eliza, the program that gives its name to the effect, was designed to parody a therapist conversing with the user, using preconfigured questions intended to maintain a basic dialogue between the user and the computer. Advanced AI models foster the Eliza effect, as their capabilities make them appear more human-like. Many people who interacted with Eliza at the time came to believe that the program somehow understood their problems and emotions, even though the program actually just followed a set of predefined rules (IF, THEN, PRINT…) to generate responses and continue the conversation.
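A toy responder in the spirit of Eliza's predefined IF/THEN rules might look like the sketch below. The rules are invented examples, not Weizenbaum's original script.

```python
# Illustrative sketch of a rule-based responder in the spirit of Eliza:
# fixed pattern/response pairs, no understanding. Rules are invented.

RULES = [
    ("hello", "Hello, how are you?"),
    ("i feel", "Why do you feel that way?"),
    ("my mother", "Tell me more about your family."),
]

def eliza_reply(message: str) -> str:
    """Return the response of the first matching rule, or a generic fallback."""
    text = message.lower()
    for pattern, response in RULES:
        if pattern in text:
            return response
    return "Please, go on."  # fallback keeps the basic dialogue going

print(eliza_reply("Hello"))               # Hello, how are you?
print(eliza_reply("I feel lonely"))       # Why do you feel that way?
print(eliza_reply("The weather is bad"))  # Please, go on.
```

Nothing here models meaning or emotion; the program only matches substrings. That a user could nonetheless confide in such a system is precisely the Eliza effect.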
So even though Eliza was simply a text-based conversational program with simple rules, users often interacted with it as if it were a human therapist, confiding in it intimate thoughts and emotions. As Weizenbaum wrote at the time, Eliza "induced powerful delusional thinking in quite normal people". 👩💻 Simple conversational programs such as Eliza work from a predefined set of rules and patterns programmed by the developers. ○ These rules dictate how the system should respond to different user inputs. For example, respond 'Hello, how are you' if the user says 'Hello'. ○ This limits the complexity and range of conversations and can leave the program unable to coherently continue the conversation when it does not find an applicable rule. In contrast, advanced AI models such as ChatGPT are not governed by fixed rules. ○ Instead, they learn from large amounts of data during their training, allowing them to generate responses based on patterns and connections identified within that data. ○ This gives them the ability to generate more complex responses in a more flexible and coherent manner, derived from the vast information they were trained on. "Her": an extreme case of the Eliza effect and a reflection on human-AI interaction Spike Jonze's film 'Her' (2013) presents a scenario in which a man falls in love with an advanced Artificial Intelligence digital assistant, known as Samantha. This relationship transcends the traditional boundaries of human-machine interaction and addresses issues such as love, relationships, and loneliness. The story shows how a person can attribute human qualities to a machine, even when fully aware of its artificial nature. In "Her", the Eliza effect is magnified precisely because of Samantha's advanced capabilities. Not only is she an effective assistant, but she also demonstrates emotions, learns, and has the ability to form personal connections with the protagonist, who comes to consider her a life companion.
It would be an extreme case of the Eliza effect. Recognizing and understanding the Eliza phenomenon is critical to the proper development of AI-based technologies, and "Her" serves as a preview of the implications that advanced AI can have on human-machine interaction. The film depicts an AI so realistic that it challenges our current concept of what a 'normal' relationship is: it shows how the lines between humans and machines can blur when AI systems advance to a point where they can replicate or even surpass certain human capabilities. ⚠ Risks and threats ○ The Eliza effect can be exploited to manipulate users, making them trust an AI model more and share personal or sensitive information, believing that they are interacting with an empathetic, sympathetic, and honest entity. ○ Obtaining such information can violate user privacy, and the collected data can be used for malicious purposes, such as selling information to third parties, blackmail, or social engineering. ○ Also, the predisposition to view AI as 'human' can be used to influence users' decisions, guiding them towards actions or choices that benefit self-interested or malicious actors, all under the veil of genuine, well-intentioned interaction. ○ Furthermore, overestimating the capability of advanced AI that 'sees' and 'hears' can mask potential vulnerabilities to attacks that exploit these sensory channels. For example, images that hide malicious prompts that the AI will 'unintentionally' execute when analyzing the image. Measures to mitigate the Eliza effect As was the case in the movie 'Her', the Eliza effect raises important questions. For example, whether it is ethical for an AI to induce emotions in a human being, especially if those emotions can be misleading or harmful to the person. This can happen with virtual assistants such as Aura or Siri, and most particularly in the case of children, who might come to form emotional attachments to interactive toys that make use of basic AI models.
The user experience of an AI plays a very influential role in human perception, and therefore in the Eliza effect. It is essential to take care of the user experience and adopt a responsible approach to AI design to mitigate the Eliza effect, helping to ensure that: The AI model is accessible and easy to use, but without giving rise to misunderstandings about its artificial nature. The user is clear about how it works and what the capabilities and limitations of that AI are. A proper overall design of AI-based technologies promotes a more informed, aware, and safe interaction with this technology. To achieve this, it is necessary to consider aspects such as: Transparency in its design and operation, so that users are aware that they are interacting with an AI and not with a human being. Transparency aligns user expectations with reality. Set clear boundaries about AI capabilities and limitations to help users understand at all times that they are interacting with a computer program. Provide consistent and realistic responses that help users keep in mind at all times that they are interacting with a machine. If an AI provides responses that are too human-like or emotional, it will encourage the Eliza effect. Educate users about the capabilities and limitations of AI and provide information about how it works, what kind of data it uses to generate responses, and how those responses should be interpreted. User-centric interfaces minimize the chances of users attributing cognitive or emotional capabilities to AI. For example, an intentionally artificial or neutral tone of voice will help to mitigate this phenomenon. Regularly reviewing and evaluating AI models makes it possible to detect whether they foster the Eliza effect. Ongoing evaluation makes it possible to intervene and make changes to the model or interface if necessary. Ensure protection of user privacy to mitigate the security consequences of the Eliza effect.
The user will have confidence that even if the effect occurs, their personal information will not be collected or misused.
October 30, 2023
Cyber Security
Pentesting and Security Assessment: two sides of the same coin in Cyber Security
The constant identification and mitigation of vulnerabilities is essential in the field of cyber security to protect the systems, applications, and digital assets of companies. In this context, Pentesting and Security Assessment are two tactics that differ in their approach, methodology and results, but complement each other within a complete Cyber Security strategy. What is pentesting? The term 'pentesting' ('pen test') refers to controlled and authorized 'penetration testing' on an organization's systems and applications to identify and exploit vulnerabilities and security breaches. This practice is distinguished by its active nature and offensive orientation. Its objective is to practically evaluate the security of systems, networks or applications in search of weaknesses by applying attack scenarios involving multiple steps and vectors. Therefore, when performing penetration testing or pentesting, different techniques and tools are used, just as in real attacks, to get as deep as possible into a company's infrastructure. This allows the resilience and effectiveness of its defenses and layers of protection to be evaluated from a Cyber Security point of view. The role of the pentester The professionals who carry out these tests, known as 'pentesters', assume the role of attackers. Their purpose is to compromise security systems to discover vulnerabilities and security gaps. This involves thinking and acting like a real cybercriminal, to expose potential flaws in the system. Penetration testing is generally conducted in stages, which include gathering information, identifying vulnerabilities, exploiting them, and finally reporting. This approach allows vulnerabilities in systems to be proactively identified and addressed before malicious attackers and cybercriminals can exploit them. These penetration tests reflect how advanced targeted attacks operate in the real world. 
These tests therefore provide a concrete view of the impact a real attack could have by exploiting misconfigurations or vulnerabilities in systems.

✅ Companies such as Tesla and Microsoft run reward programs to incentivize the search for vulnerabilities: Tesla for its vehicles, with rewards between $100 and $100,000, and Microsoft for its Azure platform, with up to $60,000. The goal is to detect and resolve vulnerabilities before they can be maliciously exploited.

What is a Security Assessment?

The main purpose of a Security Assessment is to identify, categorize, and manage vulnerabilities that may affect an organization, covering technical, operational, and human aspects. These vulnerabilities can range from insecure system configurations to bad programming practices and missing security patches or updates. Its scope extends from a thorough review of policies and procedures to deficiencies in employee security training or awareness, among other aspects.

Pentesting and Security Assessment: why both are important

Security Assessment and pentesting are two fundamental security tactics, each with a different focus, purpose, and outcome:

Pentesting (penetration testing) is designed to test whether a mature defense can prevent an attacker from achieving one or more specific objectives. It targets the entire organization or relevant vulnerabilities from previous assessments. The test result details what was tested, how it was tested (including exploitation paths), and the results, along with recommendations for improving the security posture.

A Security Assessment is designed to find as many defects as possible in order to produce a prioritized list of items to remediate. It focuses on specific assets such as infrastructure, software, or employees.
The assessment result provides a prioritized list of all vulnerabilities and risks found on the tested assets, along with recommendations for improving the security posture.

Despite their differences, both tactics are essential within a comprehensive Cyber Security strategy:

• Pentesting focuses on identifying specific, real-world flaws that could be imminently exploited.
• Security Assessment offers a panoramic view and a long-term analysis of security postures and policies, allowing for more robust strategic planning.

✅ Example: a hospital could employ pentesting to test the resilience of its network against external attacks, such as ransomware. At the same time, a comprehensive Security Assessment would review its medical-records access policies, the level of Cyber Security training of its staff, and the effectiveness of its incident response procedures. This dual strategy would ensure that the hospital is prepared both for immediate threats and for risks inherent to its organization.

Telefónica Tech's Security Assessment and Pentesting Service

Telefónica Tech offers a comprehensive Security Assessment and pentesting service for organizations of different sizes and sectors (SMEs and large enterprises) that provides an independent and objective assessment of multiple aspects of security:

• Technological infrastructure, to identify potential vulnerabilities and uncontrolled access.
• Software, to identify security gaps in web applications, apps, and APIs, with actionable recommendations to improve security.
• Employee training, to identify risks associated with social engineering attacks and assess employee preparedness for security incidents.

✅ Example: a company in the financial sector can use our service to perform comprehensive pentesting of its infrastructure and applications.
This would not only help the company identify and fix vulnerabilities, but also facilitate compliance with regulations such as GDPR or PCI DSS.

Pentesting and Security Assessment Methodology

Our methodology consists of four main phases:

• Planning: defines scope, limits, and success criteria, and reviews past threats and vulnerabilities.
• Testing: performs security testing, considering different scenarios and standards.
• Findings: delivers a detailed technical report with recommendations.
• Vulnerability management: clients prioritize and mitigate the vulnerabilities found.

This comprehensive approach enables organizations to take effective preventive and corrective actions. Our methodology provides clear and concise guidance on how to protect customer information against real attacks.

Analysts specialized in Pentesting and Security Assessment

We have a global team of analysts highly specialized in Cyber Security testing. The team stands out for its expertise across a wide range of sectors, from banking to healthcare and transportation, and for its constant training to stay current on the latest trends in risk and compliance. Our team also holds industry certifications that support our capabilities and expertise. This solid foundation allows us to adapt effectively to each client's specific environment, applying proven methodologies to deliver effective solutions.

Our team of analysts ensures a Cyber Security strategy prepared to address both immediate threats and long-term risks. Our Pentesting and Security Assessment service is comprehensive and adapts to the needs of various types of organizations, offering an unbiased security assessment for any company seeking to strengthen its security posture with preventive and corrective measures to protect its critical assets and data.

Image from Freepik.

Cyber Security Cybercrime, a constant threat to all types of companies March 29, 2023
October 26, 2023
Cyber Security
AI & Data
Cyber Security in the age of AI: why phishing attacks are now more dangerous
One of the most persistent attack methods used by malicious actors is phishing: emails designed to trick the recipient into revealing confidential information or performing actions that allow their data to be stolen. Phishing is one of the most successful attack methods, and in the age of Artificial Intelligence (AI) cybercriminals have found a tool that lets them send more convincing, contextual, and personalized messages to increase their chances of success. According to the Phishing Threat Trends Report 2023, they are succeeding.

✅ A classic example of phishing is an email that impersonates a banking institution and asks the recipient to type in their online banking username and password. When the victim follows the directions in the email and types in their bank credentials, they are actually handing them over to the cybercriminals.

Artificial Intelligence in phishing and spear-phishing attacks

The report, published by Egress (under subscription), reveals that AI-generated phishing emails are on the rise and becoming more effective: harder to identify and to differentiate from legitimate communications. So much so that in 71% of cases AI detectors cannot distinguish between an email written by a bot and one written by a person, according to Infosecurity Magazine.

Using generative AI models, attackers manage to create more convincing texts, very similar to legitimate emails from companies (as in the bank example), entities, non-profit organizations, or government agencies. With the help of AI, cybercriminals also find it easier to personalize emails with specific information about potential victims. This information can be obtained from open sources, such as social networks, or by compiling and analyzing large databases available on the Deep Web black market. The purpose remains the same: to steal data or install malware on the victim's computer.
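A minimal sketch of the kind of mismatch that gives away messages like the bank example: comparing the domain in the From: header with the hosts of the links in the body. All domains below are hypothetical, and real mail filters combine many more signals (SPF, DKIM, reputation) than this single check.

```python
import re
from email.utils import parseaddr

def extract_sender_domain(from_header):
    """Return the domain of the actual address in a From: header."""
    _, addr = parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def suspicious_links(body, sender_domain):
    """Flag link hosts that do not belong to the sender's domain."""
    hosts = re.findall(r"https?://([^/\s]+)", body)
    return [h for h in hosts
            if not h.lower().rstrip(".").endswith(sender_domain)]

sender = extract_sender_domain('"My Bank" <alerts@mybank.example>')
body = "Verify your account at https://mybank.example.attacker.test/login now"
print(sender)                        # mybank.example
print(suspicious_links(body, sender))  # the lookalike host is flagged
```

The lookalike host ends in `attacker.test`, not in the sender's domain, so it is flagged even though it starts with the bank's name, which is exactly the trick these emails rely on.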
This means that phishing emails are becoming increasingly personalized, adding information about the victim such as name, job title, and place of work, which makes the message more believable. Data about the victim's personal interests, preferences, or lifestyle is also used to construct a unique bait targeting a specific individual or company. In this way cybercriminals use AI to achieve an even more effective deception, which adds to the danger of the attack by increasing the risk that users will take the bait.

AI automation comes to phishing attacks

AI is not only used to create phishing emails, but also to automate attacks. Attackers can use AI bots to send phishing emails more efficiently and at large scale. These bots can automatically identify potential targets and, in many cases, evade conventional security defenses. AI enables, for example:

• Sending thousands of personalized, topical phishing emails in 'real time': for instance, soliciting fake aid or donations for seemingly legitimate humanitarian organizations after a natural disaster, or mimicking the security notifications of an entity that has itself been the victim of a phishing campaign, making it even more difficult for recipients to distinguish the hoax from reality.

🎣 The Tax Agency sent a message in May 2023 alerting users to a phishing campaign. The cybercriminals took advantage of this message to start a new phishing campaign that spoofed the Tax Agency's alert and asked users for personal data to verify that they were not affected by the initial campaign.

• Detecting responses from victims and continuing the conversation through chatbots, with automatic responses that maintain simulated conversations with those who respond or show interest, and extracting additional information, for instance when an out-of-office message provides more details than necessary.
• Collecting data, as discussed above, from social networks, public websites, and large leaked databases to build detailed profiles of potential victims, which allows attackers to design personalized phishing emails with a higher chance of success.

• Bypassing conventional security defenses, such as spam filters and antivirus, by obfuscating the actual content and intent of the email, or by constantly changing its content and technique to avoid automatic detection, making automatic protection against phishing attacks more difficult.

How to protect against AI-generated phishing

Given the increasing sophistication of AI-generated phishing attacks, it is imperative to implement protective measures. Strategies that can help include:

• Stay informed and up to date on phishing campaigns and developments, and learn how to identify them, also in the age of AI. Companies should provide their employees with adequate Cyber Security training.

• Invest in proven and recognized protection tools and advanced security solutions that use AI to detect and prevent phishing attacks beyond content analysis, identifying suspicious behavior patterns and alerting about potential threats.

• Verify the origin (From: field) of an email before clicking on links or providing confidential information, carefully checking the sender's address and the URLs included in the message.

• Always be wary of urgent emails that demand immediate action or threaten negative consequences (such as the blocking of your bank account or an incident with your tax return), and take a moment to analyze the message and assess its authenticity.

• Keep computers, devices, mobiles, and software up to date and patched to reduce the risk of attackers exploiting known vulnerabilities.

* * *

According to Egress, 30% of email is 'graymail': unsolicited but unnecessary bulk messages that are rarely opened, such as notifications or newsletters.

📫 Spam and graymail: what is the difference?
The main difference between spam and graymail is intent:

○ Spam, or junk mail, has malicious intent: intrusive and unsolicited messages sent to promote products, services, or scams without the recipient's consent.

○ The intent of graymail is legitimate: solicited messages that in many cases are unnecessary and often ignored. Notifications, newsletters, or promotions from companies you have interacted with at some point, for example when shopping online, are considered graymail.

Graymail can flood the inbox and become a problem due to its sheer volume, which makes it difficult to keep incoming mail under control.

Image from Natanaelginting on Freepik.

Cyber Security How Clean Email Business protects SMEs from email cyber-attacks September 13, 2023
October 9, 2023
AI & Data
Smart Tourism Destinations: Resource optimization and experience personalization
The tourism industry plays a vital role in the economy: according to the United Nations, tourism generates one in ten jobs worldwide. In this context, the World Tourism Organization (UNWTO), which celebrates World Tourism Day every September 27, recalls the need to innovate in tourism. This means embracing digital transformation in order to increase knowledge about tourists and improve decision-making.

Smart Tourism Destinations, also known as Intelligent Tourism Destinations (ITD), are places that use digitalization to optimize tourism management and give tourists a better experience through personalized, accessible, and sustainable services and products.

Key technologies for smart tourist destinations

Smart destinations implement next-generation digital technologies such as IoT, Artificial Intelligence, Big Data, and data analytics to collect real-time information, automate processes, and improve the efficiency of tourism operations in an increasingly competitive and global market. One of the most important aspects of ITD is their ability to improve the tourism sector for both tourists and destinations:

• They offer enriched and personalized experiences for tourists through digital services such as interactive guides, mobile applications, and online reservations, making the tourist experience more attractive and convenient.

• They provide in-depth knowledge for tourist destinations, covering resource management, sustainability, and the promotion of local culture and businesses, encouraging the participation of residents and achieving a more sustainable and attractive environment.

The importance of data analytics in smart tourism

The key to seizing this opportunity lies in the data economy. This concept refers to the use of technologies that enable the generation, capture, storage, and analysis of large volumes of information. Data thus become valuable assets that drive informed decision-making and improve tourism management, resources, and processes.
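As a toy illustration of the data economy at work, the sketch below counts anonymized visits per zone to estimate tourist flows. The records and zone names are invented for the example; real mobile-network data would arrive already aggregated and at far larger scale.

```python
from collections import Counter
from datetime import date

# Hypothetical aggregated, anonymized visit records (no personal identifiers):
# each entry is (day, zone of the destination).
visits = [
    (date(2023, 8, 1), "old town"), (date(2023, 8, 1), "old town"),
    (date(2023, 8, 1), "beach"),    (date(2023, 8, 2), "beach"),
    (date(2023, 8, 2), "beach"),    (date(2023, 8, 2), "museum"),
]

# Tourist flow per zone: how many visits each point of interest received.
flow_by_zone = Counter(zone for _, zone in visits)
print(flow_by_zone.most_common(1))  # the busiest zone and its visit count
```

Even this trivial aggregation shows the principle: individual movements are never needed, only counts per zone and period, which is what lets destinations plan services without compromising privacy.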
Data sources in smart destination management

In the travel industry, data generation and capture occur throughout the entire process, from the moment tourists begin planning their trip until they return home. This includes:

• Destination and flight searches and hotel reservations.
• Online and offline access to tourist information points.
• Apps used to search for places of interest and restaurants or to plan routes and itineraries.
• Aggregated and anonymized data (which does not identify individuals) from cell phone networks.

These data reveal variables such as tourist flows, the length of stays, and the time spent in different places and points of the destination. They also reveal preferences and behaviors, which makes it possible to offer personalized experiences tailored to tourists' needs and demands. Applying machine learning algorithms and Artificial Intelligence to the captured data allows companies in the sector and tourism organizations to:

• Extract insights to understand travelers and destination demand.
• Anticipate trends and needs of both tourists and residents.
• Improve the accuracy of tourism statistics without compromising privacy.
• Develop tourism effectively, optimizing its economic, natural, and human resources.

The knowledge and insights extracted also provide valuable information for promotion and marketing strategies, for adapting public services, and for promoting culture or developing new local businesses.

Image from Wayhomestudio on Freepik

Technological components for successful intelligent tourist destinations (ITD)

Smart destinations share many of the ingredients of smart cities, including:

• The Internet of Things (IoT), which enables the capture of data through connected sensors to get a clearer picture of the needs of tourists and destinations, allowing for more efficient management of resources and informed decision-making.
In addition, tourists can receive real-time information and recommendations based on their location.

• Cloud platforms allow applications to be managed and large volumes of data to be stored and processed in an efficient, scalable, and secure manner; they are key to the data economy and to data spaces applied to tourism.

• Data spaces enable the secure, organized, and transparent exchange of data between public and private tourism stakeholders, driving innovation and quality of services.

• Big Data and Artificial Intelligence. As we have seen, a smart destination generates huge amounts of data from numerous sources, from thousands of comments and reviews on the internet to the flow of tourists. Big Data makes it possible to manage and analyze all this data, applying AI models that help detect trends and optimize local resources.

• Blockchain helps improve security and transparency in tourism transactions. Blockchain-based smart contracts can automate processes such as identity verification, reducing the need for intermediaries and minimizing fraud risks. In addition, Blockchain technology makes it possible to offer incentives and rewards to tourists and citizens who use city services.

Digitalization and sustainability in the tourism sector: Telefónica Tech

Telefónica Tech has the technology, knowledge, and experience to promote the digitalization and sustainability of tourism, both for companies and SMEs in the sector and for public entities and administrations, in order to build smart tourism destinations:

• Helping numerous companies in the sector advance in the digitization process, so they can design more personalized and efficient experiences and manage resources better.

• Ensuring tourism businesses' continuity and protecting the data economy with advanced Cyber Security solutions.
• Providing technological solutions to drive smart tourism, with sector-specific technologies and tools for tourism data analysis such as Smart Steps.

• Reducing our own and our customers' environmental footprint with the Eco Smart seal, verified by AENOR, to decarbonize the economy, promote renewable energy, and develop a green, inclusive, and fair digital economy.

We also cooperate with the World Tourism Organization (UNWTO) to promote sustainable, safe, and competitive tourism development and to boost the sector's digitalization.

Conclusion

The transformation of tourist cities into smart destinations is a necessary step towards more competitive and sustainable tourism. With the new generation of digital technologies, such as IoT, Cloud, Big Data, and Artificial Intelligence, tourists receive more value, new businesses are created, and the environment, natural resources, and cultural heritage can be managed more efficiently and sustainably, enhancing residents' involvement, participation, and quality of life.

Image from Wayhomestudio on Freepik.

AI & Data You are no longer just a company of products or services, you are a data company April 26, 2023
September 27, 2023
Telefónica Tech
#LadyHacker Tamires Abujamra is innovating at Telefónica Tech Brazil
With their effort and knowledge, Telefónica Tech's #LadyHackers demonstrate that progress has no gender.

* * *

Who are you and what do you do for a living?

I'm Tamires Abujamra, an occasional player of board games (laughs). I am also the mum of Nico, my dog and lifelong friend. I am a publicist by training, but since the beginning of my career I have worked in product, with technology. I am currently responsible for the Cross team of Telefónica Tech IoT in Brazil, and I am in charge of Mobility, Retail, and Energy Efficiency products.

How would you describe your career so far, and what are the skills you use at Telefónica Tech?

My professional career started at a telecommunications company in 2008, and to this day I still work in technology, where dynamism and change are constant. The skills I use the most are creativity, especially to face challenges in the best possible way, and the search for innovation. The power of thinking about solutions in different ways, combined with the search for the fresh, can open up new paths and make the journey easier.

What do you consider to be the specific challenges women face in the IT sector, and how have you overcome these challenges throughout your career?

There are many challenges, from matching salaries to proving that we are competent professionals. I have always been very clear about my professional goal and I always try to trust my talent and potential, but in the end it is inevitable to question ourselves. I have had exceptional leaders throughout my career who have been important and empowering pillars in this process, in addition to the entire structure of Telefónica Tech Brazil, which has been doing very solid work in this area over the years, generating empowerment and valuing diversity.

Tamires Abujamra, Telefónica Tech Brasil.

What makes Telefónica Tech a great place to work for women, and how does the company promote gender diversity and inclusion?
There are targets for women in leadership positions and incentives to hire as diverse a team as possible. Diversity goes hand in hand with social equality and consequently generates a much wider repertoire. There are more and more women in leadership positions and this trend is growing; we are encouraged to seek our positions and we are valued.

What kind of initiatives or schemes do you think are needed to encourage women's participation and success in the technology field?

The first step is to accept that there is no gender-related difference in capacity or level of knowledge. What really determines an individual's potential and capacity is his or her background, commitment, information, and experience. The second step is that women must seek to occupy their place, and the company and employees in general must act as facilitating agents for this. Information and re-education are essential elements in this transformation. It is a cultural process that requires transparency on all sides, and communication is the key to this change.

There is no gender-related difference in capacity or level of knowledge.

Tamires Abujamra & Nico, Telefónica Tech Brasil.

What does the #LadyHacker initiative mean to you?

It is another essential step to recognize women's value and break paradigms, to give a voice to women who, from their position, have the opportunity to share ideas and experiences. #LadyHacker is the kind of action we need to engage and bridge the gap between genders and other paradigms.

How do you think your experience and perspective as a woman have influenced your approach and technology projects?

My experience helped me to be perseverant and to always look for new solutions, even when working on something new, like the innovation area of Telefónica Tech Brazil.

Telefónica Tech Meet #LadyHacker Karla Parra, Cybersecurity expert at Telefónica Tech June 20, 2023

Telefónica Tech Meet #LadyHacker Jess Woods, Cloud expert at Telefónica Tech March 8, 2023
September 14, 2023
Connectivity & IoT
How global IoT satellite connectivity works
July 28, 2023, marked a milestone in telecommunications. Sateliot, a pioneer in operating a constellation of 5G IoT satellites, together with Telefónica Tech and Telefónica Global Solutions, successfully established a 5G roaming connection from space for the first time, enabling global NB-IoT connectivity.

The test, supervised by the European Space Agency (ESA), successfully demonstrated the satellite extension of our NB-IoT connectivity using the GSMA roaming standard. Because it relies on a standard protocol (3GPP), IoT devices connect seamlessly and securely to both terrestrial and non-terrestrial networks.

The architecture designed to enable this connection integrates Sateliot's satellite network with the network nodes of Kite, Telefónica Tech's IoT connectivity platform. In the test, we used a standard SIM card installed in an IoT device. The SIM card, managed through our Telefónica Tech Kite platform, successfully and seamlessly connected to Sateliot's network. This validated the authentication of a conventional roaming connection through their LEO (Low Earth Orbit) satellite network.

Convergence between Terrestrial and Non-Terrestrial Mobile Networks

This test used 'Store & Forward' technology and a two-step authentication method adapted to enable a conventional roaming connection between a terrestrial mobile network and a non-terrestrial LEO network. LEO satellites orbit at low altitudes, typically 500 to 600 kilometers. Thanks to 'Store & Forward' technology, Sateliot's 5G network stores data collected from IoT devices while the satellites are out of position to connect to a ground station, and transmits the data when they enter the coverage area.

Our collaboration with Sateliot not only represents a significant advancement in global NB-IoT connectivity; it also holds significant potential for the IoT device industry.
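Conceptually, 'Store & Forward' can be sketched as a buffer that holds readings while no satellite is overhead and flushes the backlog on the next pass. The Python model below is purely illustrative: the class, its names, and its behavior are assumptions for explanation, not Sateliot's or Kite's actual interfaces.

```python
from collections import deque

class StoreAndForwardUplink:
    """Toy model of 'Store & Forward': buffer sensor readings while there is
    no satellite coverage, deliver the whole backlog when coverage returns."""

    def __init__(self):
        self.buffer = deque()     # readings awaiting a satellite pass
        self.delivered = []       # readings that reached the ground station

    def send(self, reading, in_coverage):
        self.buffer.append(reading)
        if in_coverage:
            self.flush()

    def flush(self):
        # Transmit the backlog in the order it was collected.
        while self.buffer:
            self.delivered.append(self.buffer.popleft())

uplink = StoreAndForwardUplink()
uplink.send({"t": 1, "temp": 21.5}, in_coverage=False)  # buffered
uplink.send({"t": 2, "temp": 21.7}, in_coverage=False)  # buffered
uplink.send({"t": 3, "temp": 21.9}, in_coverage=True)   # pass overhead: flush
print(len(uplink.delivered), len(uplink.buffer))        # 3 0
```

The point of the design is that the device never loses data between passes: readings taken outside a coverage window simply arrive later, in order, which suits the low-rate, delay-tolerant traffic typical of NB-IoT sensors.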
'Store & Forward' technology and the ability for IoT devices to transmit data through a roaming interface using the two-step authentication method will pave the way for commercial deployment in 2024.

Connectivity & IoT 5 free online courses to learn IoT (Internet of Things) August 3, 2023

Global IoT connectivity applications and benefits

The results obtained offer numerous opportunities to enhance the quality of life of millions of people worldwide, especially those living in areas without terrestrial network access. They also offer the potential to increase operational efficiency for businesses, foster innovation, and improve how we interact with our environment. Some examples include livestock and agriculture monitoring, efficient management of natural resources such as water or energy, applications in land or maritime logistics operations, infrastructure maintenance, combating poaching, preserving natural spaces and the environment, and detecting and responding to emergencies anywhere on the planet, among others.

NB-IoT satellite connectivity enables maritime and on-the-move IoT coverage. Image: Yaroslav Danylchenko / Freepik.

Starting in 2024, Telefónica will become the first operator to provide global NB-IoT connectivity through hybrid IoT connectivity, combining terrestrial and satellite NB-IoT networks. This connectivity works with conventional, affordable IoT devices.

Towards a more connected and efficient world

Global IoT connectivity has become essential for a growing range of sectors, from agriculture to logistics to renewable energies. Our collaboration with Sateliot represents a significant step forward in creating a global communication network capable of addressing upcoming challenges and needs.
Thanks to the convergence between terrestrial and non-terrestrial networks, we can offer NB-IoT connectivity in remote locations without terrestrial network coverage, backup coverage in areas with existing wireless coverage, and maritime coverage for IoT use cases on the move. This further expands the potential applications of IoT technology to build a more connected, sustainable, and efficient future.

Connectivity & IoT Learn about our IoT Partners Program, a business opportunity to grow in the global market September 25, 2023
September 12, 2023
AI & Data
Machine unlearning: AI models that learn to forget
Unlearning in Artificial Intelligence (AI), or 'machine unlearning', is the process of updating or modifying the acquired knowledge of an AI model based on new information or on changes in the scope of application or circumstances. It can also be motivated by technical considerations or regulatory requirements, for example. Unlearning refers to adjusting or eliminating existing data, rules, or connections in an AI model. It is an "emergent subfield of machine learning that aims to remove the influence of a specific subset of training examples from a trained model, ideally while preserving intact valid data and learning," as Google explains. Machine unlearning tackles challenges such as data bias and adaptation to changing scenarios or regulations.

Why machine unlearning matters in AI

Machine unlearning is fundamental because AI models often encounter changing, biased, contradictory, or excessive data. Maintaining connections acquired from such data can lead to incorrect or biased decisions. As explained in SemiWiki, "adaptability is a cornerstone of intelligence, both human and artificial. Just as humans learn to navigate new situations and respond to changing environments, AI systems strive to exhibit a similar capacity."

The article adds that one of the advantages of adaptability through machine unlearning is the mitigation of a phenomenon known as 'catastrophic forgetting'. When AI models are trained with updated data inconsistent with the original training data, there is a risk of "forgetting" (overwriting or losing) valuable training. Proper and controlled unlearning helps address catastrophic forgetting by carefully eliminating obsolete or incorrect information while preserving previously acquired knowledge. Machine unlearning therefore enables AI models to be more flexible and adaptable.
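The idea of removing a training example's influence can be made concrete with a deliberately simple model. In the sketch below the 'model' is just the mean of its training data, a statistic decomposable enough that exact unlearning is a one-line update equivalent to retraining without the removed point; real neural models are precisely where this equivalence breaks down and sophisticated techniques are needed.

```python
class RunningMeanModel:
    """Toy 'model' whose only learned parameter is the mean of its training
    data. Because the statistic decomposes over examples, one example's
    influence can be removed exactly, without retraining from scratch."""

    def __init__(self):
        self.n = 0
        self.total = 0.0

    def learn(self, x):
        self.n += 1
        self.total += x

    def unlearn(self, x):
        # Remove x's influence: identical to retraining on the data minus x.
        self.n -= 1
        self.total -= x

    @property
    def prediction(self):
        return self.total / self.n if self.n else 0.0

model = RunningMeanModel()
for x in [10.0, 20.0, 90.0]:    # 90.0 plays the part of a biased/outdated point
    model.learn(x)
print(model.prediction)          # 40.0
model.unlearn(90.0)
print(model.prediction)          # 15.0, as if 90.0 had never been seen
```

In a deep network the learned parameters do not decompose per example like this, which is why "just forget that data" is hard and why the field studies approximations and certified deletion schemes.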
It also allows them to make more precise and fair decisions over time, because unlearning enables:

Correcting biases and errors

Biases and errors in training data can lead to biased or ineffective AI models. Machine unlearning allows these biases to be identified and corrected by eliminating or modifying the connections that lead to unfair or incorrect decisions. Example: a personnel selection model with acquired gender or age biases can correct them through unlearning.

Adapting to changes

AI models, like humans, must adapt to new data, regulations, or circumstances. Machine unlearning enables models to update their knowledge as they encounter new or changing information. Example: a news recommendation system can adjust to users' preferences as their interests change.

Improving accuracy and generalization

Machine unlearning can enhance models' ability to generalize by removing obsolete or noisy information that could negatively affect their performance. Example: a machine translation model that has learned incorrect grammar rules can unlearn those rules to produce accurate translations.

Benefits of machine unlearning

Machine unlearning has been used to enhance AI models' accuracy and fairness in applications such as personnel selection, fraud detection, and medical diagnosis. Its benefits include:

• Improving performance by reducing outdated data and model components that slow down the system.
• Protecting sensitive information and individual privacy.
• Making AI systems more efficient at processing new information.
• Enhancing AI's ability to process and analyze complex data.
• Correcting biases acquired during training.
• Enabling more precise predictions and recommendations tailored to changes in user habits and preferences.

AI of Things Ghosts in the machine: does Artificial Intelligence suffer from hallucinations? February 20, 2023

Challenges in AI unlearning

Machine unlearning in AI faces significant challenges.
Some experts even doubt its effectiveness: "To use a human analogy, once an A.I. has 'seen' something, there is no easy way to tell the model to 'forget' what it saw. And deleting the model entirely is also surprisingly difficult", says Fortune. Google also points out that "fully erasing the influence of the data requested to be deleted is challenging since, aside from simply deleting it from databases where it's stored, it also requires erasing the influence of that data on other artifacts such as trained machine learning models". In the face of this difficulty, Google launched the first Machine Unlearning Challenge this summer.

Machine unlearning, like other approaches such as retraining or regularization, faces challenges that include:

• Obtaining suitable, quality, relevant datasets, which can be difficult when data is scarce or hard to collect.
• Modifying AI models' complex internal representations, including identifying which parts of the model need to be modified and how to do it properly.
• Avoiding improper or unbalanced application, which can lead to unwanted results such as removing useful information or not removing enough of the biased information.
• Understanding the changes and their consequences well, to avoid opacity and ensure accountability, including for decisions the AI made before unlearning.
• Implementing techniques to measure accuracy, fairness, and other relevant aspects of the updated model, and ensuring the improvements are effective.

Even resetting an AI model to completely erase previous knowledge and enable new learning from scratch is a complex process, and training or retraining an AI model can be very expensive depending on its size and complexity. "GPT-4 was probably trained using trillions of words of text and thousands of computer processors in a process that cost more than 100 million euros," according to Wired.
Conclusion

Machine unlearning plays a fundamental role in improving the accuracy, fairness, privacy, and adaptability of AI models. It enables models to update and modify their previous knowledge based on new information or changes in the environment. As AI influences more aspects of our lives, machine unlearning therefore becomes an essential tool to ensure that AI models are fair, accurate, and adaptable. However, finding a balance between effective learning and unlearning in AI is still a major challenge.

Image: Freepik.
August 31, 2023
AI & Data
AI of Things for efficient and sustainable water management
The European Drought Observatory (EDO) monitors, analyses, forecasts and measures the impact of droughts and their consequences on the economy, agriculture and other areas. In its latest report, of June 2023, EDO highlights the "serious drought" affecting "broad areas of Europe" amid "warmer than average" summers and less rainfall, especially in the north and centre of the continent. EDO recommends "close monitoring" of the evolution of the drought and the adoption of adequate water use plans in order to anticipate and mitigate its effects on water resources:

The summer of 2023 currently has a high risk of being critical with respect to water resources

According to the latest map of the Combined Drought Indicator, 14% of the EU territory is in Warning conditions and 10% is in Alert conditions.

Image: European Union / EDO

Technology for the digital transformation of water resources

Water is a natural resource that is as valuable as it is scarce, and it requires smart management of its integral cycle through the digital transformation of water management companies. Telefónica Tech's Smart Water solution applies technology to give water management companies greater control over water use, distribution and recovery in order to:

Improve supply management to minimise leakage and adapt infrastructure investment plans. Guarantee the correct recovery, treatment and use of the used water. Raise awareness and empower consumers by giving them greater control over, and information about, their water consumption and how to reduce it.

All this is possible thanks to the intelligent sensorisation of water distribution networks. Telefónica Tech works with specialised partners such as Contazara and Idrica to implement IoT and data analytics technologies with Big Data and Artificial Intelligence aimed at the integrated management of the water cycle.
In this way, management companies have much more metrological information and in-depth knowledge of what is happening in their network, from monitoring water quality at different points to receiving detailed data on the volumes distributed or variations in flow rates.

Advantages of more efficient water management

For instance, detecting anomalies with respect to average measurements, or differences between the volumes distributed and the volumes consumed, makes it possible to identify fraudulent consumption, leaks or water losses at the time they occur, both in the network and at the household level. Digitalisation of the complete water cycle saves millions of m3 of water each year and reduces operation and maintenance costs by 20%, according to Idrica.

Data from the sensorisation of water infrastructures also allows water managers to improve their processes, such as prioritising interventions according to the criticality of the detected incident, and to make accurate predictions of demand. Making strategic decisions based on data even makes it possible to anticipate seasonal or one-off demands, such as the influx of tourists or the holding of large events. This leads to more efficient water management and savings in operating and energy costs, while the environment is protected and an adequate supply of water, in terms of both quantity and quality, is guaranteed.

How the Smart Water solution helps Canal de Isabel II improve its water distribution service

Canal de Isabel II (CYII) manages more than a dozen reservoirs in six river basins to supply water to more than 6 million people. CYII is using Telefónica Tech AI of Things products and services to address its digital transformation, with the objectives of optimising its sourcing processes and operations, providing a better service to its customers, and protecting this resource from inefficient consumption.
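The network-level balance check described above, comparing the volume injected into a district with the sum of metered household consumption, can be sketched as follows. The 5% tolerance and all figures are illustrative assumptions, not values from any real deployment:

```python
# Flag a possible leak when the volume distributed into a district exceeds
# the sum of metered household readings by more than a tolerance.
def detect_leak(distributed_m3, household_readings_m3, tolerance=0.05):
    consumed = sum(household_readings_m3)
    loss = distributed_m3 - consumed
    loss_ratio = loss / distributed_m3
    return {"loss_m3": round(loss, 2),
            "loss_ratio": round(loss_ratio, 3),
            "alert": loss_ratio > tolerance}

report = detect_leak(1000.0, [240.0, 310.0, 280.0, 90.0])  # 920 m3 metered
print(report)  # -> {'loss_m3': 80.0, 'loss_ratio': 0.08, 'alert': True}
```

A real system would of course account for metering error, legitimate unmetered uses and time-of-day effects before raising an alert.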
In this case, as part of the Smart Water solution, Telefónica Tech, together with Contazara, is working on the deployment of smart meters that allow remote reading of water meters. Thanks to these connected IoT meters, the company receives a consumption reading every hour, or according to the customer's needs, instead of working with estimated consumption or with readings taken by hand every two months.

Benefits of the Smart Water solution for consumers

This amount of data provides Canal de Isabel II with greater knowledge of what is happening in its network and of the consumption habits of its customers. For the consumer, this has benefits such as:

Knowing their consumption and comparing it between periods and with the average consumption of households with a similar profile. Paying for actual consumption rather than estimates, which reduces complaints and billing incidents. Receiving personalised savings plans and recommendations to reduce water consumption, and therefore the bill. Detecting anomalous consumption (too low or too high), for example in second homes or in the homes of dependent persons.

Remote reading of water meters has an additional environmental benefit: it reduces fuel consumption and pollutant emissions from travelling to check, maintain and read meters.

Technology is key to efficient and sustainable water management

Remote meter reading, however, is only one part of Telefónica Tech's Smart Water solution, which allows water managers to undertake a complete digital transformation of the entire water cycle. This allows them to address the smart management of a resource that is highly sensitive to climate change, weather variations and increased demand. Today more than ever, it is not only the health and well-being of consumers that depend on the proper and sustainable management of the complete water cycle, from its collection, treatment and distribution to its recovery, purification and reuse or return to watercourses.
The economy and sectors such as industry, livestock, agriculture and tourism, and of course the environment, also depend on it.

Photo: Gabor Koszegi / Unsplash.
August 21, 2023
Telefónica Tech
#LadyHacker: Female talent driving progress
Progress knows no gender, and #LadyHacker is a global initiative that demonstrates and champions it. This movement highlights women's essential role in the technology sector. It also aims to inspire and empower girls and young women to explore their potential in STEM careers: Science, Technology, Engineering, and Mathematics.

Diversity fuels innovation, and innovation leads to success. STEM disciplines form the foundation of innovation and progress. From creating the algorithms powering Artificial Intelligence to designing technological solutions that tackle global issues, enhance health and well-being, or reduce our environmental footprint, STEM careers drive the transformation of our society. The push for education and careers in STEM not only accelerates economic development but also fosters a fairer, more inclusive, and sustainable society.

At Telefónica Tech, women like Carmen, Elena, Jess, Dagmara, María, Karla, and many more #LadyHackers prove every day that gender doesn't define individuals' capabilities in technical and scientific fields. Keep reading to meet them.

Telefónica Tech #LadyHacker Tamires Abujamra is innovating at Telefónica Tech Brazil September 14, 2023
Telefónica Tech Meet #LadyHacker Karla Parra, Cybersecurity expert at Telefónica Tech June 20, 2023
Telefónica Tech Meet #LadyHacker Jess Woods, Cloud expert at Telefónica Tech March 8, 2023
Telefónica Tech «We are moving towards genderless professions», María Martínez August 8, 2022
Telefónica Tech “To be a hacker in life is to be a passionate, talented person who manages to influence the transformation of society", Carmen Alonso July 28, 2022

Image: Freepik.
August 17, 2023
Connectivity & IoT
5 free online courses to learn IoT (Internet of Things)
The Internet of Things (IoT) is one of the new-generation digital technologies with significant impact across multiple sectors, from industry to agriculture, health, and energy. IoT creates new business opportunities and is changing the way companies, industries, administrations, and people interact with the world.

An IoT device is one that can capture or generate data (for example, a temperature sensor) and has internet connectivity to transfer that information. In this way, IoT allows information to be captured, processed, and exchanged efficiently and effectively. Among other applications, IoT makes it possible to monitor and optimize everything from traffic to resources such as energy and water, or even people's health, and it is enabling the development of smart cities, autonomous vehicles, precision agriculture, and smart industry, among many other possibilities.

Today, knowing the principles of IoT technology is essential for both students and professionals in the technology sector. But how do you get started in IoT? Where should you begin? In this post, we compile a selection of free online courses that you can follow at your own pace to learn about this technology. Knowing IoT technology not only opens up new professional opportunities, but also helps you understand how technology works and how it influences people's lives and the environment.

Introduction to the Internet of Things IoT (Curtin University): This free course (with an optional certification) explores the IoT concept and the physical devices ('things') that make the Internet of Things possible, including how components communicate with each other, how to extract value from the data they generate, and some considerations related to IoT Cybersecurity and privacy. Requires 1.5 months, dedicating 2-3 hours per week.
Introduction to the Internet of Things and Embedded Systems: This 11-hour course, offered by the University of California and available on Coursera, is part of the specialization 'An Introduction to Programming the Internet of Things (IoT)', which consists of a total of 6 courses of increasing level. This first course explains the role of IoT, what the most common devices are, and trends for the future. It also deals with the components of the devices, both software and hardware, and their interface with the physical world. Finally, it explains the key components of interconnection.

Introduction to IoT (Cisco Networking Academy): This free 20-hour course (with optional certification) addresses how digital transformation is creating economic opportunities and how IoT is changing the way companies operate and manage their processes and systems, without forgetting the Cybersecurity considerations that must be addressed when implementing IoT solutions.

AWS IoT: Developing and Deploying an Internet of Things: Free (optional certification), 4 weeks. This Amazon (AWS) course ranges from general content, such as "What is the Internet of Things and how does it work?", to more specific content about AWS services. To take this course, it is recommended to have at least one year of software development experience and basic knowledge of AWS services and the AWS console.

Introduction to Azure IoT: This course is the first module of the Microsoft Certified AI Edge Engineer program and provides an introduction to the different services that can be configured in Azure to design large-scale IoT solutions. All modules and courses are available for free on Microsoft Learn, excluding the certification.

Conclusion

It's worth noting that, with some exceptions, most of these courses are introductory.
However, they provide knowledge of this technology, its components and its practical applications. In addition, they cover topics such as network architecture, sensors, embedded systems, communication protocols, security and privacy measures, data analysis tools, cloud platforms, and an overview of Artificial Intelligence algorithms.

One advantage is that they are free (except for certification, which is usually optional and requires a fee) and open to anyone interested. They are also self-paced, so they can be completed on your own schedule, in just a few days or weeks.

The use of IoT will continue to grow in the coming years with the adoption of an increasing number of IoT devices in all types of sectors and industries, in homes, businesses and cities. These courses also complement other related technologies, such as Cloud and Edge Computing, 5G and NB-IoT connectivity, Big Data and data analysis, Artificial Intelligence and machine learning models, industrial automation technologies, and smart city infrastructures. Any of these courses is an excellent option to take your first steps in IoT.
August 3, 2023
AI & Data
What are phygital shops and how are they transforming shopping?
Digitalisation has led to the creation of smart spaces where the physical and the digital converge. These smart spaces can be public places with different purposes, from shops and shopping centres to hotels or football stadiums. In any case, the aim of smart spaces is to be more efficient and sustainable, and to offer the public a more interactive and personalised experience in order to adapt to a hyper-connected customer whose demands and preferences are constantly changing.

Smart retail: physical and digital merging in retailing

In this sense, one of the main trends is smart retail: the digitalisation of physical retail spaces by incorporating digital technologies. This is achieved with the sensorisation of the physical environment thanks to IoT (Internet of Things) and the incorporation of connectivity, smart and interactive screens, virtual and augmented reality... By applying data analytics and Artificial Intelligence to the data captured by sensors, it is possible to offer customers a better shopping experience when they visit physical spaces. Companies and brands must take advantage of digital technologies to strengthen their business, generate new shopping experiences and be more competitive.

Interactive, immersive, and efficient phygital shops

The interactivity enabled by phygital shops provides users with real-time information and personalised offers and recommendations, and a level of engagement that enhances the experience, reduces lost sales opportunities, and increases loyalty. For example, augmented reality allows users to view products, access detailed descriptions and even 'try on' items virtually. This translates into more responsive consumers, longer dwell time in physical and online shops, and increased conversion rates.
Image: gpointstudio / Freepik

Dynamic marketing platforms also make it possible to tailor on-screen messages to the context and profile of the customer, and they can incorporate data from e-commerce such as product ratings or purchase history. In this way, the experience becomes omnichannel and more personal.

The omnichannel shopping experience

The omnichannel shopping experience removes the barriers between the physical and digital environments to create a unified experience that flows between the digital and physical channels used by the shopper, by incorporating digital technologies into physical spaces. This makes it possible to offer:

Personalised offers and interactions according to the user's profile. The option to buy online and pick up in shop, or vice versa. Connection of physical stock with e-commerce. Services such as smart lockers to speed up deliveries and returns. A unified experience between the online and offline worlds.

Brands can thus deliver seamless experiences, satisfy consumers who no longer distinguish between physical and digital channels, and increase conversion rates.

Internal improvements and process optimisation in smart retail

Beyond improving the customer experience, digitalisation also optimises internal business processes. Technologies such as RFID, smart shelves and smart labels allow, for example:

Real-time tracking of inventory and stock levels to identify consumption trends and enhance stock management. Product traceability to optimise logistics and the supply chain. Optimised allocation of human resources based on foot traffic and sales data.

By implementing technology solutions in inventory management and logistics, brands gain profitability, reduce operating expenses and boost customer satisfaction.
The power of data in phygital shops

In the data economy, the ability to identify consumer preferences and behaviours provides a competitive advantage for retail spaces by enabling them, among other things, to:

Quantify the traffic of potential customers and their conversion rates. Understand the paths taken within the shop and the interest shown in a product. Dynamically personalise content according to the shopper's offline and online profile. Identify which sections or areas of the shop generate more (or less) interest among visitors. Optimise opening hours and staffing based on context and occupancy data.

This enables brands to make decisions that help them optimise their strategies, spaces, and resources to achieve more sales and reduce costs, while responding to users' interests and providing a better shopping experience.
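As a minimal illustration of the first capability mentioned above, quantifying footfall and conversion rates per zone of a shop, the calculation could be sketched like this (zone names and counts are invented for the example):

```python
def conversion_by_zone(footfall, sales):
    """footfall / sales: dicts mapping zone -> visitor count / ticket count."""
    return {zone: round(sales.get(zone, 0) / visitors, 2)
            for zone, visitors in footfall.items() if visitors}

# Illustrative counts from footfall sensors and the point of sale
footfall = {"entrance": 400, "electronics": 120, "fitting_rooms": 60}
sales = {"electronics": 18, "fitting_rooms": 27}
rates = conversion_by_zone(footfall, sales)
print(rates)  # -> {'entrance': 0.0, 'electronics': 0.15, 'fitting_rooms': 0.45}
```

In a real deployment these inputs would come from footfall sensors or video analytics and the point-of-sale system, aggregated over comparable time windows.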
July 19, 2023
Cyber Security
Cyber Security is an essential skill in today's digital era, and Cisco offers a free course to acquire it
In an increasingly digitized world, acquiring basic Cybersecurity knowledge is crucial to protect ourselves from growing cyber threats. We all need to understand how to identify risks, adopt secure practices, and safeguard our privacy. This applies not only in professional settings but also in everyday situations: for example, when charging our phones in public places, using online banking, or scanning a restaurant menu with a QR code. A foundational Cybersecurity course allows us to learn best practices for protecting our personal and financial information. Basic Cybersecurity knowledge is therefore indispensable for anyone using technology, because as our lives become more digitalized, cyber threats increase.

Introduction to Cybersecurity Course, free and in Spanish

In this regard, Cisco Networking Academy offers a wide range of courses related to Cybersecurity, designed to provide the necessary technical knowledge and skills in areas such as cloud security, network security, and endpoint security. One of these courses is Cisco's Introduction to Cybersecurity, available for free on the Skills for All platform and suitable for individuals with any level of prior knowledge. This self-paced course, offered in multiple languages, requires approximately 6 hours of dedication. To access and complete the course, all we need is a computer or mobile device and an internet connection.

The introductory course consists of 5 modules and 7 practical exams. It equips us with the knowledge required to protect our digital lives and to understand security risks and how to mitigate them, as well as ethical and legal aspects. Through this course, we can strengthen the security of our communications, our devices, and our privacy. The Introduction to Cybersecurity course is just the beginning of a series of courses that form the Cybersecurity Learning Pathway.
This learning path includes simulations and practical exams at different levels and on different topics, preparing you for the Certiport Information Technology (IT) Specialist Cybersecurity certification, specifically targeted at those starting out in Cyber Security.

The Key to Cybersecurity: Education + Awareness

Any Cyber Security strategy includes awareness and education as fundamental tactics to strengthen digital security. However, it is important to note the difference between "awareness" and "education" in the field of Cybersecurity. Awareness aims to cultivate a security mindset, while education focuses on technical knowledge and skills. The combination is essential: the right mindset, plus the knowledge that protects us against cyber threats. While awareness stems from education and knowledge, "awareness alone can induce fear, paralysis, or indifference," as our Cybersecurity expert Sergio de los Santos wrote recently.

__

Cisco is part of the Telefónica Tech partner ecosystem, the network of alliances that allows us to develop the most advanced solutions on the market for our clients.

Image from Freepik.
July 17, 2023
AI & Data
Responsibility from design applied to AI
Artificial Intelligence has the potential to transform the way we interact with technology and to redefine the boundaries of what is possible. AI has demonstrated its ability to generate cross-cutting benefits in many areas, from improving human wellbeing and health to driving environmental sustainability. Its application in the operations of organisations, companies and industrial sectors enables new business models. It also changes the way we research and innovate, improves the efficiency and sustainability of production processes and supply chains, and redefines our capabilities and ways of working.

However, along with the opportunities it offers, AI also brings ethical challenges and responsibilities that must be proactively addressed in order to responsibly harness its potential and benefits.

The European Parliament says Artificial Intelligence can boost European productivity by 11-37% by 2035

In this sense, the responsibility by design approach applied to Artificial Intelligence is a strategy that ensures that AI systems and models are developed and used in an ethical, transparent, and accountable way from their conception.

What is the 'responsibility by design' approach to AI?

The principle of responsibility by design is a methodology applied in different sectors and industries, from scientific research to urban planning and the design of everyday products and services. In the field of Artificial Intelligence, responsibility by design refers to the integration of ethical and responsible considerations from the early stages of design, in order to anticipate and address potential ethical, legal and social issues that may arise with the use of AI. This implies that AI designers and developers must take into account aspects such as transparency, fairness, privacy, security and social impact, both of the models and algorithms and of the data used, in order to build a trustworthy Artificial Intelligence.
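Some of these considerations can be checked quantitatively during development. As a sketch, fairness of results can be probed with a simple metric such as the demographic parity gap, the difference in favourable-outcome rates between groups; the data and the 0.1 threshold below are illustrative assumptions, not a normative rule:

```python
def demographic_parity_gap(decisions):
    """decisions: list of (group, outcome) pairs, outcome 1 = favourable."""
    counts = {}
    for group, outcome in decisions:
        total, positive = counts.get(group, (0, 0))
        counts[group] = (total + 1, positive + outcome)
    rates = {g: positive / total for g, (total, positive) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Illustrative decisions from a hypothetical screening model
sample = [("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
          ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0)]
gap = demographic_parity_gap(sample)
print(round(gap, 2), "-> review the model" if gap > 0.1 else "-> ok")
```

A single metric like this is only one input to the self-assessment; parity gaps must be interpreted alongside context, base rates and the other considerations listed below.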
Responsibility by design is applied not only from the beginning of the process but throughout the entire development lifecycle, to assess possible impacts and consequences across different evolutions, scenarios, cultures, and contexts.

AI responsible by design

A responsibility by design approach based on existing methodologies (e.g., for privacy and security) raises some considerations to be taken into account from the initial conceptualisation and development phases, and throughout the entire model lifecycle, to ensure the ethical and responsible development of AI projects, including:

Having multidisciplinary and diverse teams (including experts in ethics, legislators, industry, civil society...) to incorporate different perspectives and knowledge, avoiding bias and discrimination in the results. Self-assessing, throughout development and operation, the possible undesirable effects the system may have on users, society and the environment. Defining security measures for the handling of information and personal data, to ensure user privacy and regulatory compliance. Understanding at all times, and throughout the development, learning and deployment process, how the system works and how it makes decisions, which makes it possible to explain those decisions and helps to detect biases. Constantly monitoring how it works throughout its lifecycle, to identify potential impacts as it evolves and is applied in different domains and contexts.

Our Artificial Intelligence principles

In this regard, at Telefónica Tech we adopt the ethical principles for AI that the Telefónica Group defined in 2018 and updated in 2024. They form part of a broader methodology focused on responsibility by design and commit us to develop and use Artificial Intelligence that is:

Fair, so that the models do not generate results with discriminatory or unfair biases or impacts.
Transparent and explainable, disclosing the data we use to train the models and their purpose, ensuring that their decisions can be understood. People-centred, respectful of Human Rights and aligned with the UN Sustainable Development Goals. Respectful by design of people's privacy and their data, and of information security. Truthful in its logic and in the data used, including by our suppliers.

We also require our suppliers to have similar AI principles or to adopt our own, with the aim of building, together, AI models that harness the potential and benefits offered by AI while protecting human rights, democracies and the rule of law.

Featured image: This is Engineering RAEng / Unsplash.
July 10, 2023
Cyber Security
AI & Data
Things you shouldn't share with ChatGPT
According to a survey conducted by Business Insider, 43% of workers use Artificial Intelligence tools like ChatGPT or Bing to assist them in their professional tasks, as these tools provide information and function as advanced search engines. However, as we increasingly rely on these tools, experts emphasize the need to protect our privacy and the sensitive information of companies.

In this regard, it is important to note that while an Artificial Intelligence model provides answers, it also generally uses the question and the information contained in the query to learn and improve its responses. Although, in general, the information obtained is used in an aggregated and anonymous manner (without identifying who provided it), it can be used later to better understand and respond to new queries, including those made by other users.

Data you should not share with ChatGPT (and similar models)

Therefore, with Artificial Intelligence models like ChatGPT, you should not share any information that can be considered personal, private, secret, or confidential. For example:

Data that can identify a specific person, reveal personal information, or facilitate identity theft: full name, date of birth, postal address, ID number, phone number, email address... Confidential data, such as highly specific programming code from your own developments, information about unreleased products, or your company's trade secrets, to prevent them from being exposed through data leaks or unauthorized access to your account, and to avoid their appearance in future responses to other users. Banking data or information that, if leaked, could be used for fraud, identity theft, or social engineering attacks.

Additionally, it is not advisable to share medical data, such as test results or diagnoses, or any other health-related information that could be recorded in your conversation history and linked to your user account in the AI model's app or website.
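As a practical illustration of this advice, a query can be scrubbed of obvious personal data before it is sent to an AI service. This is only a sketch: the regular expressions below are simplistic placeholders, and reliable PII detection requires far more care than a few patterns:

```python
import re

# Placeholder patterns for obvious personal data; order matters (emails
# are replaced before phone-like digit runs can match inside them).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
    (re.compile(r"\b\d{8}[A-Z]\b"), "[ID]"),
]

def redact(prompt):
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Write to ana.perez@example.com or call +34 600 123 456."))
# -> Write to [EMAIL] or call [PHONE].
```

Redacting locally before the text leaves your device complements, but does not replace, the other precautions listed below.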
Some measures for safer queries

When sharing information with Artificial Intelligence models like ChatGPT, it is advisable to take certain measures to protect your privacy, your data, and sensitive or confidential information:

Include in your query only public and limited information, avoiding personal data that identifies an individual. If necessary, use fictional or generic information, and avoid providing details that could link you to other people, companies, or places. Although it is a growing trend, it is generally unwise to share overly specific information about your vacation dates and detailed travel itinerary. For free Artificial Intelligence models like ChatGPT, consider having two accounts, one of them anonymous and not associated with your identity. When in doubt, it is always safer to use your mobile data connection instead of an open or public Wi-Fi network, such as those in cafés or hotels. Do not share login credentials, and refrain from asking ChatGPT to generate a password that meets the specific length and character requirements of a particular service. Avoid sharing medical data, banking information, or any other information that is, or should be, protected.

It is also important to familiarize yourself with the terms of use and privacy policies of Artificial Intelligence models (they are quite good at summarizing them, so take advantage of that) to understand how your account data and the information you provide to the model are used, and whether they are shared with third parties, such as other companies, and if so, how and for what purpose.

⚠ Some AI services warn about the possibility of 'human review' of queries, for quality reasons or because the product is experimental, and ask users not to enter personal data or confidential information.

Artificial Intelligence models like ChatGPT can be extremely useful for a wide range of tasks, from preparing Hollandaise sauce to drafting an email in another language.
However, they come with security and privacy risks similar to those of other platforms, such as social media. It is therefore equally important to take measures so you can make the most of their benefits in the safest possible manner.

Photo: frimufilms / Freepik.
July 4, 2023
Telefónica Tech
AI & Data
Trending Techies: four presentations by experts in data and responsible AI
A new Trending Techies event was held yesterday: a face-to-face meeting organised by Telefónica Tech to build community and conversation between professionals, students, companies and the public interested in new-generation digital technologies. The event, hosted by Manu de Luna, data scientist at Telefónica Tech, featured four professionals and experts who spoke and discussed with attendees the responsible use of data in Artificial Intelligence models.

AI systems transparency

The first presentation was given by Cristina Contero, from Aphaia, with her talk "Mirror, mirror, what's behind this AI". Cristina spoke about the different categories of Artificial Intelligence and how all of them are already subject to regulation, even before any AI-specific regulation applies, through laws such as the General Data Protection Regulation (GDPR). In this sense, Artificial Intelligence (or 'automated decision-making processes', for the purposes of the GDPR) is intrinsically related to data, whether because the data used to train AI models includes personal information or because automating decisions affects the fundamental rights of individuals. "If our AI uses personal data today, it already has to meet a number of transparency requirements," she explained. AI cannot succeed without widespread adoption, which is why developers themselves are seeking to build trust in these systems.

AI-guided digital workers

In his presentation "Digital Workers guided by AI: a challenge with many benefits", Marty Mallavibarrena, Senior Solutions Consultant at SS&C Blue Prism, explained how RPA (Robotic Process Automation) platforms work. These platforms allow software (non-physical) robots to be created and put to work on repetitive tasks, such as reading ID card data when someone registers for a service or files a claim with an insurance company.
These RPA platforms are increasingly applying Artificial Intelligence techniques to make these "digital coworkers" more effective and efficient and to meet market demands. In his presentation, Marty walked through some use cases and thought-provoking examples, and discussed the ethical and technical challenges of applying AI responsibly in this area, among them how to control biases in data and models at source, and data compliance.

Keys to harnessing AI tools in journalism

Teresa Mondría, product manager at Igeneris, shared a talk on how to use new technologies to innovate in journalism. In this case, the use of AI as a journalistic tool affects both those who produce information and those who consume it. Teresa explained, with specific examples, real applications of AI in the news production process, such as translation, verification (debunking hoaxes, checking data...) and the research and analysis of large volumes of documentation. "Tasks where AI brings differential value and can be very useful." It is also applied to news tagging, social listening, and content recommendation. Teresa shared her keys for journalists incorporating Artificial Intelligence into the production of information:

◾ Respond to a clear goal.
◾ Treat the AI as a source (and sources are verified).
◾ Understand how and by whom the data is processed.
◾ Give assurances to journalists and readers.
◾ Be transparent.

Unbiased data: preventing bias

'The challenge of impartiality in AI in investigative journalism and data' was the fourth and final presentation of the meeting, delivered by Antonio Delgado, co-founder of Datadista. He highlighted how AI helps to process and analyse data, especially large volumes of information: what used to be slow clustering processes are now tasks done in seconds thanks to AI. However, he stressed the importance of data fairness when training and using AI models.
"When it comes to data journalism, tools like ChatGPT bring significant ethical and practical challenges around fairness." Language models such as GPT have been trained on data extracted from the internet and can therefore learn and reproduce the biases in that data. By contrast, "when using an AI model trained with our own data and information sources, we can perform tasks such as searching and summarising information in large volumes of documents efficiently, analysing large datasets to discover patterns and trends, and prototyping content quickly." Even in this case, however, "we must be aware and critical of the potential biases that we ourselves can introduce into our training data, to ensure the impartiality and accuracy of our work." Antonio shared three keys to avoid introducing biases into our models:

◾ Algorithms must be transparent and understandable.
◾ Apply the ethical codes of journalism.
◾ Verify the facts and data obtained (treat AI as a source).

If you would like to participate in or attend the next Telefónica Tech Trending Techies, join the Cyber Security Techies or Data Techies meetup communities.
June 22, 2023
Telefónica Tech
Meet #LadyHacker Karla Parra, Cybersecurity expert at Telefónica Tech
Our series of interviews with #WomenHackers from Telefónica Tech continues. Its purpose is to get to know a little better the experts who, with their knowledge and skills, make us more capable and innovative.

* * *

Who are you and what do you do?

I am Karla Parra: systems engineer, runner, passionate about technology, cyber security, and digital transformation. I have more than 20 years of professional experience leading operations management, sales, product development, and pre-sales for cyber security services, IT services, IT and information security, technological continuity, and business continuity. I currently lead the Cybersecurity and Cloud Provisioning team at Telefónica Tech Hispam & USA.

What is your specialisation and how did you come to do what you do now?

My specialisation is in cyber security and management. However, my beginnings were not in technology: I studied forestry engineering, but life led me to the world of technology, drawing me to analysis and research, which is why I followed the path of cyber security.

Was it clear to you at an early age that you wanted to work in the world of technology? Was it something vocational?

When I was a child, it was not clear to me that I was going to dedicate myself to the world of technology. I dreamt of other professions: doctor, veterinarian, park ranger… The things that happen during our lives, and the fact that I am curious, allowed me to discover an exciting career that few people were betting on.

How would you describe your career so far and what are the skills you use at Telefónica Tech?

Karla Parra, Head of Cyber Security & Cloud Provision Perú

I would describe my professional career as challenging, passionate, and fun. I enjoy what I do, and I never stop learning, because technology is constantly evolving, especially in the field of security. The skills I rely on are respect, humility, and commitment.
I believe these are essential skills in security, where professional ethics are your trademark and allow you to earn the trust of your customers, your team, and the company. That trust is crucial to achieving your goals.

What do women bring to STEAM professions?

Every individual brings something to the world of technology. In the case of women, I feel that we contribute our analytical approach and attention to detail. There are female profiles in the Cybersecurity sector, and we are seeing more and more participation. In fact, according to IT User, women held 10% of positions in 2013, 20% in 2019, and 25% in 2022, figures that translate into a 150% increase of women in Cyber Security over the last decade. The digital transformation we are experiencing today is a valuable opportunity to balance the participation of women in the technological world, thanks to greater flexibility, hybrid work, and labour demand, among other factors.

What makes Telefónica Tech a great place to work for women and how does the company promote gender diversity and inclusion?

Telefónica Tech allows me to develop professionally and gives me flexibility at work so that I can take on the different roles we live as women. We currently have 30% female participation in the operational areas in Peru. In this way, emphasis is placed on equal and fair participation between genders by promoting women's participation. In this regard, we have recently held two events that have given us the opportunity to promote diversity and inclusion. The first was Leading the Tech World, where I had the opportunity to share the conversation with two great professionals from the world of technology: Elena Gil, Global Director of Product Commercial Operations; and Carolina Navarrete, Director of B2B Marketing for HISPAM.
The objective of the session was to share the experiences and challenges we face as women in the technological world. The second was HackaCyber, a hacking party full of challenges designed to reveal the different strengths of the students, which allowed new talents to be identified. This experience speaks volumes about the importance of maintaining these spaces to discover and encourage the development of talent, and thus increase the presence and involvement of women in the technological world.

Could you explain to us what the Lady Hacker initiative means to you?

Karla Parra, part of the Telefónica Tech #LadyHacker initiative

It means inspiring and motivating more and more women to show their talent in the world of technology and cyber security. It means contributing my professional experience to build a safer world.

What advice would you give to other women who want to pursue a STEAM career?

The advice I would give them is to be confident. To take on challenges, to be self-learners and, above all, to enjoy what they do. In the ever-changing world of technology, our strategy must be to stay one step ahead.
June 20, 2023
Telefónica Tech
Our participation in DES 2023, the leading event on trends and digitalization
The Digital Enterprise Show 2023 (DES) has come to an end after one of the most exciting and intense weeks of the year. As we say goodbye to Málaga, we also say goodbye to the headquarters of the Telefónica Tech Laboratory and Innovation Area, which has made the capital of the Costa del Sol a benchmark in cyber security and technological innovation. DES celebrated its seventh edition this year, consolidating itself as an international meeting of reference on the trends and technologies enabling the digitization and digital transformation of companies of all sizes and sectors. Telefónica Tech was a global sponsor at DES 2023, sharing space, knowledge, products, and conversations with tens of thousands of experts, managers, and professionals from around the world and from multiple sectors, including banking, health, energy, tourism, commerce, mobility, and industry.

The Telefónica Tech booth at DES 2023

At DES 2023 we provided attendees with a networking area and product exhibition, as well as five experiential areas dedicated to our technologies (IoT, Big Data, Artificial Intelligence, Blockchain, Cybersecurity, and Cloud) that attracted a good number of professionals. On the opening day our stand also hosted representatives from different institutions, such as Francisco de la Torre, the mayor of Málaga; Juanma Moreno, the president of the Junta de Andalucía; and Francisco Salado, the president of the Málaga Provincial Council. They were accompanied by María Jesús Almazor, CEO of Telefónica Tech, and Joaquín Segovia, director of Telefónica's southern territory. Visitors enjoyed the demos and the value proposition we presented, including:

The ThinX Lab

At DES 2023 we brought a little bit of The ThinX, our open lab for AI of Things, to the Telefónica Tech stand.
In our pioneering space The ThinX, clients, partners, and organizations can test their IoT projects in real conditions, reducing costs and speeding up development and deployment. Visitors to our lab area were able to see some of the devices that have passed through our laboratory, including connected V16 beacons, Geotab tracking devices for fleet management, smart lighting solutions, and components of our Smart Water solution.

Digital Operations Center (DOC)

An area of the Telefónica Tech booth at DES 2023 was dedicated to our Digital Operations Center (DOC), which provides comprehensive, global services for the monitoring and operation of our clients' Cybersecurity and Cloud services. Physically located in Madrid, Spain, and Bogotá, Colombia, the DOC, in coordination with our SOC, brings together thousands of cybersecurity and cyberintelligence experts.

Demo: Quality Control with Industrial IoT

This small demo combines advanced technologies such as our private 5G networks, Edge Computing solutions, vision algorithms, and Artificial Intelligence to automate a real-time industrial monitoring and classification task.

Audiovisual Portfolio

Accompanied by our experts and through audiovisual means, customers could also learn firsthand about our portfolio of solutions and capabilities, as well as our success stories.

Telefónica Tech eShop

Because well-being and leisure time also matter, our visitors, clients, and friends explored and acquired some of the products from our eShop corner.

Our participation in talks and presentations

At DES 2023, we took part in various talks and presentations where our Telefónica Tech experts shared their knowledge, experiences, and projects:

◾ Daniel Ribaya, Director of Cloud Products and Services, gave a talk titled ‘Discover Edge Computing With Telefónica Tech’.
In this talk, he discussed the role of Edge Computing technology and the benefits of bringing Cloud capabilities to where they are needed, including reducing response times (latency) and ensuring data security. Telefónica Tech in Spain is at the forefront of implementing both Proof of Concepts (PoCs) and real customer deployments of Edge Computing, 5G connectivity, and the Internet of Things (IoT).

◾ Juan Campillo, Director of Product Marketing for Cybersecurity, participated in the ‘Inspiration Theatre’ with his talk ‘Guardians of the Digital Frontier’. "I would like to bring some optimism for those of us who find ourselves in the 'valley of despair'," he said, recalling that a formula for cyber resilience in critical infrastructure is risk = probability x impact.

◾ Alexis Hostos took the stage with his speech ‘The Silent Revolution: Technology and Innovation in Future SMEs’, highlighting the importance of innovation and digitization to ensure SMEs’ sustainability and continuity: the question we should ask ourselves is not ‘what future our SMEs can have,’ but ‘what future we want them to have.’

◾ María Jesús Almazor, CEO of Cybersecurity and Cloud at Telefónica Tech, participated in the event ‘Meet the Disruptors: A Conversation with Game Changers and Visionaries.’ There is no single order of importance among technologies: the greatest power of digital transformation is achieved by implementing the basic technologies correctly and building more comprehensive solutions on top of them.

◾ Alfredo Serret, our Global Managing Director of Business Development, and Manu Marin from Livall participated in the talk ‘Accelerating Towards a New Ecosystem: Mobility, Cities, and Data.’

◾ José Luis Núñez, our Blockchain Lead, was present at the event ‘The Web3 Super App that Connects City, Businesses, and People through the New European Digital Identity (EU ID Wallet)’ at the Smart Cities & Urban Mobility Forum.
Blockchain acts as an online digital notary for information exchanged between individuals, businesses, and services that until now was not public or verifiable. During the second day of DES 2023, we continued to share our knowledge and experience in different presentations, talks, and panel discussions, including:

◾ Maria Muñoz Ferrer, Business Development Director, and José Luis Domínguez, Sales Director, spoke about digital services and the use of enabling technologies such as Big Data, Cybersecurity, Cloud, and Artificial Intelligence in the presentation ‘Towards a Happy Citizen’. The application of next-generation digital technologies creates a new paradigm that responds to the digital transformation needs and new challenges faced by key sectors of public management (health, employment, and tourism), all aimed at providing a better citizen experience and improving service management.

◾ Carlos Martínez Miguel, Global Director of IoT, Big Data, and Artificial Intelligence Solutions and Services, was present with his talk ‘Embracing the Power of Big Data and AI’, in which he discussed how organizations are leveraging the business transformation opportunities enabled by Big Data and Artificial Intelligence. Artificial Intelligence brings significant opportunities but also significant challenges, and all stakeholders must work together to manage them.

◾ Peter Moorhead, Head of Security Pre-Sales at Telefónica Tech UK&I, participated in the panel ‘Challenges, Issues and Opportunities of Managing Global Health Locally. Big Data and Privacy’, where he shared insights on data privacy in healthcare. One of the main challenges is knowing where healthcare data is stored and how it is used, with the goal of guaranteeing patient privacy.
We couldn’t miss our appointment with the audience on the last day of DES 2023 either:

◾ Andrés Escribano, Director of New Business and Industry 4.0, spoke in ‘Revolutionizing Industries: Exploring the Power of Industry 4.0, IoT, and Digital Technologies’ about the technologies that are transforming industry towards a more efficient, productive, and sustainable model. We can now measure the real impact that digitization and its associated technologies have on companies’ results and KPIs.

◾ Alberto Sempere, Director of Product and Innovation, participated in the CEO & Leadership Summit with his presentation ‘Zero Trust Everywhere, all at once’. He highlighted that the current level of disruption is forcing an overhaul in every company while opening up innovation paths for differentiation, and that applying a Zero Trust mindset helps us increase our resilience and digital sovereignty.

◾ Esther Cardenal, Senior Product Manager, participated in the Retail & Logistics Forum, in the session ‘The Wellness Center of the Future’, where she discussed the implementation of our video analytics and Artificial Intelligence solution in GO fit centers. Manuel Estébanez, CEO of GO fit, talked about the challenge of implementing this technology to extract knowledge from data while respecting user privacy. Technology plays an essential role in the design of sports and wellness centers.
June 15, 2023
Cloud
AI & Data
Cloud AI vs. Edge AI: know their differences and choose the right approach for your AI project
As we discussed in a previous article, Edge AI and Cloud AI are two different approaches to implementing Artificial Intelligence developments or machine learning models. In a nutshell, Cloud AI stores and processes data on Cloud platforms or servers, where the AI algorithms and models are executed, while Edge AI captures or receives data and runs AI algorithms and models on local devices such as wearables, IoT devices, or Edge Computing servers. Since each approach has its advantages and disadvantages, the choice between the two models will depend on the project's needs and requirements, context, environment, and even location.

Edge AI advantages

In Edge AI: Artificial Intelligence outside Cloud, we saw that even in environments with poor coverage or no connectivity, Edge AI can make decisions, execute instructions, or provide real-time insights or responses. This is especially crucial for critical IoT solutions in industries like manufacturing (IIoT), healthcare (IoMT), or mobility. This is possible because Edge devices can capture, process, and analyze data locally, close to where the data is generated or needed (for example, in factories, offices, hospitals, or farms), without sending large amounts of information to remote servers or cloud platforms. Edge AI can even function without relying on a permanent broadband or low-latency connection. Additionally, Edge AI offers further advantages:

◾ By processing and storing data on the device, the risk of interception or storage by third parties is reduced, as is exposure during transmission or storage. This often provides greater control over the data and makes it easier to comply with local regulations regarding data protection, residency, or privacy.

◾ Operational cost savings, as only the most relevant or already-processed data is sent to the cloud, which typically requires less computing power and storage space there.
Edge AI devices can also be configured and programmed according to the project's needs at a relatively low cost, allowing the solution to be customized and made more efficient.

Edge AI disadvantages

Edge AI devices have limited data processing and storage capacity. Some tactics, such as algorithm optimization or the use of Application-Specific Integrated Circuits (ASICs) designed for Artificial Intelligence, can improve their performance, but their capacity is still limited compared to the virtually unlimited resources of cloud platforms. This can limit the complexity of the machine learning models and Artificial Intelligence algorithms that can be executed on Edge AI devices. It is particularly relevant for devices like wearables or IoT sensors that operate on battery power, since executing algorithms consumes significant energy and reduces autonomy. Additionally, it is worth considering that:

◾ Incorporating Artificial Intelligence or machine learning capabilities into specifically designed devices can result in higher hardware costs than equivalent devices or IoT sensors without this capability.

◾ For the same reason, these devices often require more maintenance, both at the hardware and software levels, to ensure they remain updated and operational.

◾ Although storing and processing data locally has advantages, Edge AI devices are also vulnerable to security or privacy breaches and attacks, exposing their data to potential attackers. Encryption or obfuscation can help protect Edge AI data.

Cloud AI advantages

Scalability, capacity, and accessibility are some of the main attributes and advantages of the Cloud. Cloud AI is capable of processing and storing large, even massive, volumes of data, adapting to needs and demand. Cloud AI also typically relies on well-known platforms with abundant training and informational resources.
This streamlines the deployment of Artificial Intelligence models through intuitive interfaces and training and deployment tools that simplify implementation. For this reason, implementing Cloud-based Artificial Intelligence solutions can be simpler and cheaper than implementing Edge AI solutions on proprietary hardware. In addition:

◾ The cloud allows companies to pay only for the resources they need (pay-as-you-go), saving costs on infrastructure, maintenance, and personnel.

◾ It is accessible from anywhere and at any time, as long as there is an internet connection. This enables remote, real-time access to processed data and facilitates integration with other existing applications and business processes.

◾ Automatic hardware and software updates provided by Cloud providers ensure that Cloud AI solutions run on up-to-date platforms with the latest security patches.

Cloud AI disadvantages

◾ Availability and latency (the time it takes for data to be sent and received between the device and the cloud) can be problematic for applications that require real-time response.

◾ Data processing and machine learning consume substantial resources, increasing operational costs.

◾ Sending data and information to Cloud platforms can pose risks of regulatory non-compliance and even data exposure, especially when sensitive data is transferred to third-party Cloud platforms across borders.

◾ Cloud AI applications require a robust, capable internet connection, which can be a hurdle in environments or areas with limited connectivity, low bandwidth, or no connection or coverage.

Differences in Security: Risks and Challenges

As we have seen, both approaches entail risks and challenges regarding data security. In the case of Edge AI, it is worth noting that:

◾ Edge devices, being located in accessible physical environments, may be more prone to physical attacks, including theft, breakage, or tampering.
◾ Edge devices are also vulnerable to interception attacks during data transmission or while data is stored on the device.

◾ The diversity and sheer number of Edge devices and proprietary developments can make security updates challenging, leaving devices vulnerable to threats.

◾ The hardware limitations of Edge devices can also hinder sophisticated security measures, like end-to-end encryption or biometric identification.

As for the risks and security challenges of Cloud AI:

◾ Cloud service providers must implement appropriate security measures to prevent data theft, leaks, or security breaches.

◾ In some cases, sensitive data may be subject to specific regulations regarding data location and residency; if cloud providers fail to comply with these regulations, there is a risk of regulatory non-compliance.

◾ When using third-party cloud services, data protection and privacy policies depend on the Cloud provider.

◾ The time required to transmit data to and from the cloud (latency) can create delays, which can be problematic for applications requiring real-time responses. Additionally, a lack of connectivity or interruptions can affect the availability of and access to cloud services.

When is Edge AI more suitable? When is Cloud AI better?

The choice between Edge AI and Cloud AI depends on the specific use case and on factors such as connectivity availability, scalability, or data sensitivity. Edge AI is highly suitable for situations where real-time response is required or network connectivity is limited. For example:

◾ In industrial environments, Edge AI allows sensor data to be processed in real time, without latency, ensuring no interference with the production process.

◾ In healthcare, Edge AI can process patient information without medical data leaving hospital premises, providing an immediate response that is vital in critical situations.

◾ Self-driving robots, autonomous vehicles, drones, or AGVs (Automatic Guided Vehicles).
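For illustration, the Edge-first pattern behind these examples can be sketched in a few lines of Python. All names, the stub models, and the confidence threshold below are illustrative assumptions, not a real product API: the device always runs its small local model first, and defers to a larger cloud model only when it is both connected and unsure.

```python
# Sketch of an Edge-first inference pattern (illustrative, not a real API):
# run the on-device model first; fall back to the cloud only when the
# device is connected AND the local model is not confident enough.

CONFIDENCE_THRESHOLD = 0.80  # below this, defer to the larger cloud model

def local_inference(sample):
    """Stand-in for a small on-device model (e.g. a quantized classifier)."""
    # Toy rule: 'vibration' readings are easy; anything else is uncertain.
    if sample["sensor"] == "vibration":
        return {"label": "normal", "confidence": 0.97}
    return {"label": "unknown", "confidence": 0.40}

def cloud_inference(sample):
    """Stand-in for a request to a cloud-hosted model."""
    return {"label": "anomaly", "confidence": 0.99, "source": "cloud"}

def classify(sample, connected):
    """Edge-first routing: the device model always answers first."""
    result = local_inference(sample)
    result["source"] = "edge"
    # Defer to the cloud only when it is both possible and worthwhile.
    if connected and result["confidence"] < CONFIDENCE_THRESHOLD:
        return cloud_inference(sample)
    return result
```

Note how this captures the availability property described above: an offline device still answers locally (possibly with lower confidence) instead of failing, while only uncertain cases consume cloud bandwidth and compute.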
On the other hand, Cloud AI is more suitable in situations where scalability is essential to handle large volumes of data. For example:

◾ In logistics or e-commerce companies, Cloud AI enables the efficient processing and analysis of vast amounts of information, including user data, customer data, and transactions.

◾ Banking risk analysis and fraud detection require processing large amounts of data to identify complex patterns and detect suspicious or fraudulent operations.

◾ In services like automatic translation or voice recognition, Cloud AI can perform large-scale language analysis and understanding, improving response accuracy and quality.

In short, some applications will require immediate response and less dependence on network connectivity, while others will require extensive processing and data analysis capabilities.

Ethical considerations: a common need for both approaches

Both Edge AI and Cloud AI require careful consideration of data security and privacy. In the case of Edge AI, it is imperative to ensure that data is adequately protected on Edge devices and that there are no risks of privacy breaches or unauthorized access. In the case of Cloud AI, it is necessary to verify that Cloud service providers have appropriate security policies and measures in place to protect data and comply with data protection regulations. In both cases, Artificial Intelligence developments and models should avoid biases and discrimination, which can arise because the data used to train the models is biased or because the algorithms themselves unintentionally introduce biases. To mitigate them, it is necessary to conduct testing and evaluations that identify and address potential biases, ensuring that AI systems are fair and unbiased. Furthermore, implementing Artificial Intelligence, whether on Edge AI devices or in Cloud AI, requires careful consideration of responsibility and accountability.
It is critical to clearly define who is responsible for the outcomes and functioning of AI systems, and to establish mechanisms to address any issues or consequences that may arise. Lastly, it is always essential to understand and be able to explain how AI models work, how they make decisions, and how they generate results; ensuring the transparency and explainability of the algorithms and AI models used is therefore essential, particularly in critical domains such as healthcare. These ethical considerations should be addressed comprehensively and carefully throughout the entire development and implementation process, regardless of whether the model is deployed using an Edge AI or a Cloud AI approach. The goal is to ensure that AI models are fair, transparent, explainable, responsible, and respectful of privacy and individual rights.

Featured photo: DilokaStudio on Freepik.
June 14, 2023
AI & Data
These free Google courses will get you started with generative AI
Google Cloud has recently launched several new free courses focused on Generative Artificial Intelligence. Generative AI is a branch of Artificial Intelligence (AI) that focuses on content creation, enabling a "more interactive way of interacting with information" and data. Some of the best-known examples of generative AI include ChatGPT, Midjourney, and Bing, which demonstrate the potential of this technology for automatic content generation. In the past few weeks, Google Cloud has released up to ten free online courses related to Generative Artificial Intelligence. The introductory courses are suitable for any level of technical knowledge and can be completed in minutes; others, though fewer, are intended for intermediate levels. The courses are available and accessible to anyone interested in learning, and after each video there is a short quiz to validate what has been learned.

Introduction to Generative and Responsible Artificial Intelligence

The Generative Artificial Intelligence learning path designed by Google Cloud includes a general introduction to the field, as well as image generation and large language models (LLMs). It also addresses more specific topics, such as Google Cloud's Vertex AI platform for building, training, and deploying machine learning models, and Generative AI Studio, a tool for prototyping and testing generative models. Among the courses on offer, Introduction to Responsible AI focuses on the responsible development of Artificial Intelligence. This microcourse addresses the importance of data privacy and ethical considerations in all phases and processes of model development, and explains why we need ethical Artificial Intelligence systems to achieve trusted AI.
Google Cloud's course on Responsible Artificial Intelligence emphasizes the importance of incorporating ethical considerations into the data and throughout the development stages of AI models. While there are numerous options for getting started in Artificial Intelligence, Google Cloud's offering is an excellent starting point for understanding the foundations of generative Artificial Intelligence. This knowledge is useful for adopting tools capable of improving our work processes and making us more productive and efficient, and it also serves as a stepping stone to continue learning about Artificial Intelligence.

The 7 principles of Google for responsible AI

The course also explains Google's 7 principles to ensure that Artificial Intelligence models:

◾ Are socially beneficial.
◾ Do not create or reinforce biases or prejudices.
◾ Are built and tested for safety.
◾ Are accountable to people.
◾ Incorporate privacy protection principles.
◾ Uphold high standards of scientific excellence.
◾ Are made available for uses that align with these principles.

Google Cloud is part of the Telefónica Tech partner ecosystem, the network of alliances that allows us to develop the most advanced solutions on the market for our clients.

Featured image from Freepik.
June 8, 2023
Cyber Security
Typosquatting: how to detect and protect yourself
Typosquatting is a type of cyberattack that involves creating a domain name very similar to that of a well-known, legitimate website with the intention of deceiving users. This type of misleading domain name can also be used in e-mail addresses to make them appear legitimate to the naked eye when they are in fact fake. Typosquatting is a combination of the words "typo" and "squatting": the attacker exploits common typos, misspellings, or typing errors to carry out the deception. These errors may omit or change the order of letters, or substitute visually similar characters; anything goes to create a misleading domain name. For example:

◾ Changing an "l" for an "i", an "o" for a "0" (zero), or using "rn" instead of an "m".
◾ Registering the same domain name with a different extension, such as ".co" instead of ".com".
◾ Using a similar-looking domain name, such as goggle.com instead of google.com.

In addition, "in this type of cyberattack it is very common to use alternative spellings or words with double spellings, as well as special characters such as hyphens", explains Susana Alwasity, Threat Intelligence Team Lead at Telefónica Tech. For example, if the legitimate domain is bankXonline.com, a typosquat could be bankX-online.com. In either case, the attacker's purpose is to trick users into visiting the fake site believing they are on the legitimate website. From such a website (which may look the same as, or even be a duplicate of, the original site) attackers can distribute apps and malware or steal information such as login credentials, bank card numbers, or personal information.

How is typosquatting used in cyberattacks?

Typosquatting is a popular technique used by cybercriminals to launch different attacks, including:

◾ Phishing attacks: attackers can create a fake login page that looks like the page of a legitimate website.
When users type their login credentials into the fake page, the attacker gets hold of them to use for malicious purposes. Malware distribution: attackers can create a fake website that prompts users to download a file or software. When users download and install that file, they are actually, unknowingly, installing malware on their computer or device. Ad fraud: attackers can create a fake website that generates advertising revenue by tricking users into clicking on ads. The attacker earns money for each click, even if the ads are irrelevant or harmful. "In addition to the theft of personal information, which can lead to phishing attacks, typosquatting can also be aimed at redirecting the domain to another destination, or at blackmail and reputational attacks against companies or individuals," explains Susana. The case of the Icelandic national police An example of a phishing attack based on typosquatting took place in Iceland in 2018. Cyber attackers used a domain name similar to the official domain of the Icelandic national police (Lögreglan, in Icelandic) to deceive citizens. The attackers registered a domain that replaced the "l" with a capital "i" (logregIan.is instead of logreglan.is), making it appear at first glance to be the legitimate domain of the country's police. But it was not: in the email, the "i" in the URL was capitalised ("I") to make it look like an "l". At first glance, and because our brains sometimes read what they expect rather than what is written, many recipients did not detect the deception. Converting the text to small caps reveals the trick: what looks like a lowercase "L" is actually a capital "i". The attackers then used that domain to create a fake website that looked exactly like the legitimate one, and sent phishing emails asking recipients to access that URL and enter personal information.
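Tricks like the "rn"/"m" and "l"/"I" swaps can also be enumerated programmatically, which is how defenders often monitor for lookalike registrations of their own domains. A minimal sketch (the substitution table and function name here are illustrative, not a real monitoring tool):

```python
# Generate common typosquatting variants of a domain name.
# The substitution table covers a few visually confusable pairs.
SUBSTITUTIONS = {"m": ["rn"], "o": ["0"], "l": ["i", "1"], "i": ["l"]}

def typo_variants(domain: str) -> set[str]:
    name, _, tld = domain.rpartition(".")
    variants = set()
    # Character substitutions (one position at a time)
    for pos, ch in enumerate(name):
        for sub in SUBSTITUTIONS.get(ch, []):
            variants.add(name[:pos] + sub + name[pos + 1:] + "." + tld)
    # Character omissions
    for pos in range(len(name)):
        variants.add(name[:pos] + name[pos + 1:] + "." + tld)
    # Alternative TLD extensions
    for alt_tld in ("co", "net", "org"):
        if alt_tld != tld:
            variants.add(name + "." + alt_tld)
    return variants

print("g0ogle.com" in typo_variants("google.com"))  # True
```

A real defensive service would check registrations of each variant daily; the point of the sketch is only that the deceptions described above follow mechanical, enumerable patterns.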
The full story and analysis of the attack can be found at Police Phishing Attack Targets Bank Credentials. How to detect and protect yourself from typosquatting? Often, as in the case of Lögreglan, it is not easy to detect and protect yourself from typosquatting. But following these recommendations from our experts at Telefónica Tech helps to reduce the risk: Look at the URL before accessing a website, or at the sender's address when it is an email: look for spelling mistakes or other suspicious elements in both the domain name and its TLD extension (.com, .es, .co...). Copy and paste the URLs of links received by email instead of clicking on them: this prevents the malicious destination URL from being hidden under a linked or anchor text. If in doubt about what is actually written in a URL or email address, copy and paste it into a word processor such as Word: changing the font helps to detect visual deception, because some fonts reveal the differences between characters better than others. It also helps to convert the text to capital letters. Word also detects the language, which is very useful when typosquatting uses letters of the Cyrillic alphabet: although they look the same, the Cyrillic letters 'А' and 'Ј' are different characters from the Latin 'A' and 'J'. Use a password manager: password managers automatically enter access or login credentials only on legitimate sites, where the credential was generated. Install antivirus software: it is useful for detecting and blocking malicious websites that could be used in typosquatting attacks. Don't trust the little padlock 🔒 in the browser bar: at first glance it reveals neither the authenticity of the certificate, nor who it belongs to, nor its intentions. Enable two-factor authentication: it adds an extra layer of security, making it harder for attackers to steal login credentials on websites or apps.
Enable Google Passkeys: this is a Google identification or login method that combines the benefits of password managers and two-factor authentication in the same process. It is important to check every little detail in a URL and to be suspicious, no matter how small the error may seem. It is advisable to invest time in prevention, as the consequences can be harmful and irremediable. -Susana Alwasity, Telefónica Tech. How does Google Passkeys protect against typosquatting? Google Passkeys generates a unique credential for each website the user visits. When the user returns to that website, Google Passkeys automatically fills in the login credentials, such as username and password. With Google Passkeys, there is no need to remember or type the password on apps and websites, minimising the risk of the user mistakenly typing their login credentials on a fake website. Continuing with the example above, if a user trying to visit google.com mistakenly ends up at goggle.com, Google Passkeys will not work because the domain is different, so it will not automatically enter the login credentials. If this happens, it is best to leave the website without trying to enter the credentials by hand. Conclusion Typosquatting is a cyber threat that can result in identity theft, account theft, financial and reputational loss, or ransomware, among other possible consequences. Therefore, it is always important to pay attention to URLs and email addresses that may be suspicious or of unknown origin, in order to minimise the risk and protect yourself from this type of cyberattack. Featured image: Freepik.
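The font-change and capital-letters tricks recommended above can also be automated: Unicode records the script of every character, so a mixed-script or disguised domain can be flagged in a few lines. A sketch using Python's standard unicodedata module (the function name is illustrative):

```python
import unicodedata

def suspicious_chars(domain: str) -> list[str]:
    """Return characters that are not Latin letters, digits, dots or hyphens."""
    flagged = []
    for ch in domain:
        if ch in ".-" or ch.isdigit():
            continue
        # unicodedata.name() includes the script, e.g. "CYRILLIC SMALL LETTER A"
        if not unicodedata.name(ch).startswith("LATIN"):
            flagged.append(ch)
    return flagged

# The Cyrillic "\u0430" renders exactly like a Latin "a"
print(suspicious_chars("b\u0430nkX-online.com"))  # flags the Cyrillic character
print(suspicious_chars("bankX-online.com"))       # []

# The capital-letters trick from the Lögreglan case, in code:
print("logregIan.is".upper())  # LOGREGIAN.IS, not LOGREGLAN.IS
```

The uppercase conversion exposes the hidden capital "i" exactly as the word-processor trick does, because "I" survives the conversion while "l" becomes "L".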
June 7, 2023
AI & Data
What is AI-winter and how to avoid it
Throughout its history, Artificial Intelligence (AI) has experienced ups and downs: periods when it received more attention and resources, and periods of disillusionment and stagnation. These periods of waning interest in Artificial Intelligence are known as AI-winters. Two AI winters have been identified in the past 50 years, and we now face the challenge of avoiding a third one. Consequences of an AI-winter During AI-winters, disillusionment with Artificial Intelligence translates into widespread disinterest. This disinterest leads to a reduction in attention and funding for research and development. In these periods, which last years, there are few significant advancements, and the few that do occur are disappointing. Even after a moment of high enthusiasm, in an AI winter Artificial Intelligence fades from conversations and media coverage. Currently, Artificial Intelligence is experiencing a "Cambrian explosion" that some describe as hype or even a bubble. In any case, we are in a period of high expectations, the complete opposite of an AI winter. And precisely because it is a known pattern, the inevitable question arises: are we on the verge of a third AI winter? AI-winters throughout history Experts have identified two AI-winters. Both occurred after notable advancements and moments of industry, media, and public excitement: First AI-winter, late 1970s and early 1980s: during this period, expectations for Artificial Intelligence were also high. However, the advancements did not live up to the exaggerated promises made by science fiction. As a result, there was a significant decrease in funding and interest in research and development of Artificial Intelligence. Second AI-winter, late 1980s and early 1990s: after the first AI winter, interest in Artificial Intelligence resurfaced. Once again, expectations exceeded achievements.
The lack of significant progress and the gap between expectations and reality led to another period of disillusionment and disinterest. In 1996, IBM's supercomputer Deep Blue won a game against the reigning world chess champion, Garry Kasparov, for the first time; it went on to win a full match against him in 1997. Factors that could lead to a 'Third AI Winter' Some skeptics do not rule out the possibility of a third AI winter occurring after the current period of high expectations. This is mainly because new variables come into play this time, including those related to privacy, security, and ethics. It is also because Artificial Intelligence is not only finding applicability in businesses this time: it is also proving useful for the general public, as in the case of digital assistants or Generative Artificial Intelligence, two forms of AI accessible to end users. The widespread applicability of Artificial Intelligence should mitigate the occurrence of the next AI winter, but there are factors that could contribute to it. The first one is fear. Fear: as Artificial Intelligence becomes more advanced and capable, fears arise, and news spreads about its impact on society: fear of uncontrolled AI, job loss, invasion of privacy, lack of transparency... and the ever-present dystopian scenario. Mistrust and skepticism regarding technology can arise from these fears. There are more factors that could lead to a third AI winter: Restrictive legislation around Artificial Intelligence: if excessively restrictive or ill-conceived regulations are implemented, they could hinder research and development of Artificial Intelligence, limiting innovation and progress. Scarcity of high-quality data: Artificial Intelligence relies on large amounts of data, which are used to train and "teach" algorithms.
If there is a lack of relevant and high-quality data in certain domains, or if the data does not consider demographic and social differences, it could hinder the development of reliable Artificial Intelligence models. Technical limitations: limited computational power, poor energy efficiency, lack of scalability of algorithms, or technical phenomena such as hallucinations in Artificial Intelligence could slow down its progress. Keys to avoid AI-winter Balanced legislation: regulations should address legitimate concerns around Artificial Intelligence (including those related to privacy, security, and non-discrimination) without hindering its development and potential benefits. Collaboration between lawmakers, AI experts, and the industry is essential to achieving this balance. Support education and technological advancements: investing in research and development to drive significant technological advancements requires fostering academic research and collaboration between industry and institutions. It is also critical to educate children and young students. Promote trustworthy Artificial Intelligence: it is essential to address concerns regarding privacy, security, and social impact. Ethics, transparency, and explainability, along with responsibility and proper governance, are indispensable principles to avoid an AI-winter. Although predicting the next AI winter is difficult, it is necessary to learn from the past and take measures to maintain sustainable progress in Artificial Intelligence. Only in this way can we avoid a third AI winter and harness the full potential of the progress offered by this technology. Featured image generated with Bing.
June 6, 2023
Cyber Security
How language puts business Cybersecurity at risk
Cybersecurity is crucial for businesses and organizations of all sizes and sectors. Cyberattacks can have severe or even fatal consequences for businesses, such as data loss, operational disruptions, or regulatory non-compliance. They can also directly impact revenue, damage reputation, or undermine the trust of employees, clients, and suppliers. For these reasons, executives must understand the significance of Cyber Security for companies and take appropriate measures to protect their digital assets. And the only way to achieve this is by speaking the same language. However, that is not always the case. The language problem in Cybersecurity According to a recent study, 44% of executives surveyed do not prioritize Cybersecurity in their companies because of the confusing language used in this field. Specifically, because "the language used is confusing and hinders threat understanding," as reported by Europa Press. This is despite the fact that 45% of executives from large companies in Spain know cyber threats are the "greatest danger" their company can face. One-third of the surveyed executives stated that they do not understand the meaning of 'malware' (malicious software), and almost another third do not comprehend the term 'ransomware' (data hijacking). Cybersecurity is a complex and technical subject, but the language barrier is "universal in other professions," says Sergio de los Santos, Head of Innovation and Laboratory at Telefónica Tech. "If we aim to be precise, we risk being too technical and distant. If we are too simplistic, we may trivialize the problem. Finding a middle ground is challenging but possible," he explains. How to overcome the language barrier To overcome the language barrier and professional jargon in cybersecurity, it is crucial to communicate effectively.
This applies not only to executives but also to the general public, including young people. It means: Using clear, accessible, and honest language. Avoiding technical terms and professional jargon. Explaining concepts simply. Using examples and real-life cases to illustrate the risks and the importance of protecting company systems and data. Furthermore, to enhance understanding and raise awareness about the importance of Cybersecurity, experts can conduct training courses and workshops targeted at executives. These sessions can provide: Clear and practical risk information. Security recommendations and best practices. Real demonstrations of malicious actions and attacks. Explanation of the measures companies take to protect themselves from cyberattacks. Training on the solutions, tools, and resources available to companies. Cyber Security experts can also collaborate with companies' communication departments to create communication materials about cybersecurity that are clear and accessible. These materials can include texts, infographics, explanatory videos, and other resources that help executives and employees better understand risks and security measures. Beyond language: the importance of Cyber Security for businesses Cybersecurity is an essential element for protecting the data, information, processes, and operations of companies against malicious attacks. It also encompasses technological and computer systems, mobile devices, and communication networks. Although 100% security is never possible, the most effective way to ensure maximum protection is through professional Cyber Security services. It is also critical to apply security best practices and provide training and awareness to employees to ensure the highest level of protection.
Above all, executives must understand the significance of cybersecurity beyond language, as it is relevant to every aspect of the company, including business continuity. Cybersecurity must be a strategic and corporate priority. Featured photo: Pressfoto on Freepik.
June 1, 2023
Cyber Security
How to use Passkey, Google's password substitute
Google’s Passkey offers users the possibility of using an access key to identify themselves on websites or apps without typing their username and password. Google explains that passkeys replace traditional (and burdensome) passwords. It is a new mechanism that complements Google’s “Skip password when possible” function, such as accepting a notification from another device. Passkeys are an alternative to username-and-password credential systems. The end of passwords is a desire and a necessity that many users have been demanding for years; not only for convenience but, above all, for security. Passwords are "primitive" and have too many flaws. What is a passkey? In this context, when we talk about a passkey, we mean a digital credential that identifies us to a system (such as a web page, service, or app) using a PIN or biometric identification. That is, instead of authenticating ourselves by typing something we know (username and password), we identify ourselves with something we are or have: Drawing an unlock pattern on our device. Typing a PIN. Using our fingerprint. Identifying ourselves by our face. The result is similar to when the operating system or web browser saves passwords, but with an additional layer of security that verifies the identity of the person trying to use those saved credentials. How does Google Passkey work? Unlike username and password-based credentials, passkeys use asymmetric or public-key cryptography. In other words, the credential is made up of two mathematically related keys: The public key, which is stored on the website or in the app. The private key, which is stored on the user’s device. Since the website or app only stores the public key, a password theft or leak does not expose the user’s credentials. An attacker with only the public key cannot do anything with it: neither identify the user nor access the user’s account.
This is one of the reasons passkeys are more secure than passwords. Even with the device that stores the private key (the other “half” of the credential), an attacker will still have to circumvent one of the identification systems we mentioned: PIN, unlock pattern, fingerprint, or facial recognition. Improved phishing protection Passkeys keep a record that links each private key to the app or website that holds the corresponding public key, unique in each case. So a passkey is not activated when accessing a malicious site, because it can only be used on the website or app where it was generated. Passkeys are based on the FIDO (Fast Identity Online) specification of the FIDO Alliance, which applies more robust authentication factors than passwords, and they work on Android, Chrome, Windows, Edge, macOS, iOS, Safari, etc. The purpose of the standard is to promote a common mechanism that saves us from remembering or managing passwords. Above all, it aims to strengthen security by better protecting users. This includes, for example, preventing attacks designed to steal passwords and access credentials, such as phishing attacks. Goodbye to two-factor authentication (2FA) Passkey adds an additional benefit for users: passkeys make two-factor authentication (2FA) through mechanisms such as SMS codes, security calls or email links unnecessary. Two-factor authentication is now an essential mechanism to strengthen security and protect against credential theft or unauthorized access. However, it is also cumbersome, because it requires taking a call or waiting for an SMS or email with a numeric code or link. How to set up Google Passkey To use Passkey, the Android, iOS, Windows or macOS device must meet a series of minimum requirements that can be checked here. To activate Passkeys: Go to g.co/passkeys and log in with your Google account. Click on the Create passkey option.
Choose which device you want to create the passkey on. Once created, the passkey is saved on the selected device. It can then be used to log in to Google without typing the username and password. It is important to note that, once created, a passkey can be used by anyone who has access to that device; for example, another person who has your mobile and knows the PIN or unlock pattern. The passkey is stored on that device and cannot be extracted, although it can be shared with other devices belonging to the same user. In case of loss or theft of the device that stores the passkey, it will be necessary to access the Google account (this time with the usual password) and delete the keys associated with the lost device. Featured photo: Florian Berger / Unsplash.
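The origin binding that makes passkeys resistant to typosquatted domains can be illustrated with a toy credential store. This is a deliberately simplified sketch: real passkeys use WebAuthn challenge signatures and platform key stores, and the class and names below are invented for illustration only.

```python
class PasskeyStore:
    """Toy model: a credential is released only for the exact origin
    where it was created, never for a lookalike domain."""

    def __init__(self):
        self._credentials = {}  # origin -> credential id

    def register(self, origin: str, credential_id: str) -> None:
        self._credentials[origin] = credential_id

    def credential_for(self, origin: str):
        # Exact match only: "goggle.com" will never match "google.com"
        return self._credentials.get(origin)

store = PasskeyStore()
store.register("google.com", "cred-123")
print(store.credential_for("google.com"))  # cred-123
print(store.credential_for("goggle.com"))  # None
```

The essential point is that the match is exact and mechanical: unlike a human reader, the credential lookup cannot be fooled by a visually similar domain, which is why the fake site never receives anything to steal.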
May 17, 2023
Connectivity & IoT
AI & Data
Industrial digitalization: we share the keys at Advanced Factories
This week we are at Advanced Factories, the annual reference event on innovation and industrial automation, robotics, and digitalized industry or Industry 4.0. At Telefónica Tech, we are sharing our knowledge and experience. We are also showcasing technologies that enable process automation and digitalization in the industrial sector, driving efficiency and productivity. “Industry 4.0 consists of integrating technology into the process to produce more and better at lower costs.” —Darío Cesena, Geprom part of Telefónica Tech. To demonstrate how new-generation digital technologies improve industrial competitiveness and how they are applied in the sector, at Advanced Factories we have four demos that incorporate our main enabling technologies of Industry 4.0: IoT sensors to collect real-time data. 5G and NB-IoT connectivity to provide high-speed, ultra-low-latency connections. Cloud and Edge Computing to store and process large amounts of data. Industrial management software and data analytics to optimize process management. Big Data and Artificial Intelligence to obtain insights and improve efficiency. Cyber Security to protect the entire industrial process. Our interactive demos at Advanced Factories 1. Smart Lego Factory, an intelligent process factory Adapted to an event context, we built a process factory with Lego. Process factories produce goods that cannot be broken down into their component parts, as is the case, for example, with detergent or soda. These types of factories have highly automated processes and generate lots of data. Capturing and processing that data allows process improvement, cost savings, and increased efficiency and profitability. The Smart Lego Factory demo allows visitors to see and interact with our IoT/MoM Legato platform for manufacturing process management (MES). Legato provides real-time information about factory performance. This solution makes it possible to monitor any parameter, both in the production process and in the business layer.
As long as that process is sensorized, generates data, and is connected. 2. Fab Lab 4.0, a smart factory for discrete manufacturing Also with a scale model, we showed our visitors how to apply our knowledge and technology to a discrete manufacturing process. This is the type of factory that combines different parts to achieve the final product, such as a car or toy factory. These are usually factories where operators, in whom the knowledge resides, play a bigger role than machines, which means less data is generated. In this case, our MES/SGA Objective platform focuses less on data capture and more on guiding the operator and monitoring the process, the parts used, and the final result. For this scenario, we have a small toy helicopter factory at the stand that visitors can assemble following the instructions on a screen. At the same time, the MES software tracks the steps and parts used. Upon completing the assembly, the MES software assigns a serial number to each finished toy to ensure traceability of the final product: who assembled it and when, how long it took, what parts were used, etc. Blockchain could add an additional guarantee of authenticity and immutability to that information. In this demo, the Legato software controls production, monitoring it in real time and making decisions to improve the efficiency and quality of the manufacturing process. 3. Connected preventive safety with Livall smart helmets At our booth, we also have a connected worker helmet. It is based on the same approach as Livall's connected helmets for motorcyclists and cyclists. For use in industrial environments, the helmet is equipped with sensors that measure air quality and ambient temperature. It also has an accelerometer to detect impacts and sudden movements. Thanks to these sensors, Livall's helmet can anticipate industrial risks such as environmental toxins or poor air quality. In addition, it can send an alert if it detects an impact or a fall of the worker.
The NB-IoT connectivity of the Livall helmet allows its use in large facilities such as a mine or a petrochemical plant. The data generated by the helmet is sent, anonymized, to a platform that processes it; for example, to activate safety measures. 4. Spot, the 5G-connected ‘robot dog’ for monitoring industrial environments Spot, the ‘robot dog’ by Boston Dynamics, is the perfect example of the possibilities offered by self-piloted robotics in the industrial sector. The Alisys unit that accompanies us these days at Advanced Factories incorporates 5G connectivity to communicate with our Edge Computing solution. In addition to telecontrol, 5G connectivity allows the data captured by the sensors embedded in Spot to be sent: a thermal camera, a 360º vision camera, and the control signals of a robotic arm. In self-piloted robotics, such as automatic guided vehicles (AGVs), drones, or other machinery, 5G connectivity provides: Ultra-low latency, almost imperceptible, essential for the robot to move and interact with the environment in real time. High capacity to transmit and process data, such as the images captured by cameras. For the industrial sector, we are already defining use cases for self-piloted robots like Spot. Although not directly related to production processes, they have real applications in factories: In a large-scale car factory, Spot can inspect the facilities, checking for thermal anomalies or identifying objects. In a waste processing and recycling center, Spot can identify health risks in the area where the raw material (waste) is stored, such as toxic gases or hazardous materials. The same technologies that enable Spot (5G connectivity with network slicing, Edge Computing, sensorization) can also be applied to other purposes: logistics, surveillance, or goods movement. Other self-piloted robots, such as drones or AGVs, can also benefit from these technologies.
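The helmet's sensor-to-platform flow described above can be sketched as a simple telemetry check. The field names and thresholds below are illustrative assumptions, not Livall's actual protocol or limits:

```python
# Toy evaluation of a helmet telemetry reading before it is sent
# (anonymized) to the processing platform.
CO_LIMIT_PPM = 35.0      # assumed air-quality threshold
IMPACT_LIMIT_G = 8.0     # assumed accelerometer impact threshold

def evaluate_reading(reading: dict) -> list[str]:
    """Return the safety alerts triggered by one sensor reading."""
    alerts = []
    if reading.get("co_ppm", 0.0) > CO_LIMIT_PPM:
        alerts.append("poor air quality")
    if reading.get("impact_g", 0.0) > IMPACT_LIMIT_G:
        alerts.append("possible fall or impact")
    return alerts

print(evaluate_reading({"co_ppm": 50.0, "impact_g": 1.2}))  # ['poor air quality']
```

In a real deployment this evaluation would typically run on the platform side, fed by the helmet over NB-IoT, so that alerts can trigger safety measures for the whole facility rather than a single worker.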
The knowledge and experience of our experts and clients At Advanced Factories we also participate with lectures and talks by our experts and customers, who share their experiences and projects first hand: Comprehensive digitization of production processes applying Artificial Intelligence: the case of Stolt Sea Farm Darío Cesena (Geprom part of Telefónica Tech) and Jorge Juan Alfonso (Stolt Sea Farm) spoke about the digitization of aquaculture farms. They highlighted how data is changing industrial manufacturing to increase productivity, meet customer demands, and better manage operational processes. "A company's data is another asset," says Darío Cesena, Geprom part of Telefónica Tech. To leverage it, it is necessary to collect and analyze information. Artificial Intelligence models allow for insights and for optimizing production, as we do in Stolt Sea Farm's aquaculture farms. By applying connectivity solutions, Artificial Intelligence, Big Data, and Cyber Security, which make industrial processes more efficient, Stolt can also predict supply and demand for its products. This is especially important considering that some species take years to reach their desired size. Stolt provides "quality service and products to its customers while also promoting innovation in its own activity and in the food sector," explained Jorge Juan Alfonso, Food Operations Manager at Stolt Sea Farm. The intelligent helmet for preventive safety management of industry workers Alfredo Serret (Telefónica Tech) and Manu Marín (Livall) talked about connected helmets in the industrial sector. Marín took a few minutes to discuss the beginnings and motivations of Livall, a platform built to "put the benefits of intelligent mobility at the service of people."
From that idea other variants and applications emerged, such as a helmet for skiers and the connected helmet designed for accident prevention, which we showed at our Advanced Factories stand. Although the helmet aims to provide "preventive safety to avoid accidents," it can also "sensorize a factory, saving large costs," explained Marín. "The digitalization of the industry benefits the people at the center of the process as well." -Alfredo Serret, Telefónica Tech. Both participants agreed that Livall's industrial helmet has many possibilities and use cases in the digitalization of the industry, which should also benefit "the people who are at the center of industrial processes," explained Alfredo Serret. Quality and maintenance: a leap in competitiveness supported by technology Javier Martínez Borreguero (Telefónica Tech) participated in a roundtable discussion on predictive maintenance with Artificial Intelligence. Borreguero listed some benefits of industry digitalization: Resilience. Sustainability and positive impact. New business models. "In the next five years, 95% of quality and maintenance management processes in the industry will incorporate Artificial Intelligence and massive connectivity," he added. These are available technologies that we can incorporate into the industrial sector to tackle challenges in quality and maintenance management. "5G connectivity and Artificial Intelligence are realities. They are available technologies. The question is: are they integrated into your production processes?" -Javier Martínez Borreguero, Telefónica Tech. Turn-key solutions for Industry 4.0 At Advanced Factories we share our complete vision of a digitized factory, end to end: from the entry of raw materials to the exit of the final product.
5G connectivity, Cloud, Cyber Security, Big Data and Artificial Intelligence models, among other enabling technologies, allow us to address any need across the entire spectrum, from the factory floor to data analysis. Furthermore, we develop digitalization projects that connect factory software with business layers, drawing on our industrial knowledge and sensorization expertise. As a result, we can provide customized solutions to meet the needs of each customer.
April 20, 2023
AI & Data
Edge AI: Artificial Intelligence outside Cloud
Edge AI refers to the execution of AI algorithms on or near devices such as wearables, mobiles, cameras, IoT (Internet of Things) sensors or edge servers that have the ability to analyse and process data in real time, without depending on a server on the internet. In contrast, the use of Artificial Intelligence in the Cloud (Cloud AI) requires sending data to a data center or Cloud platform where it is stored and processed, which usually requires a permanent, robust, and capable connection. Edge AI is therefore closely related to Edge Computing. Both concepts refer to processing and analysing data at the edge of the network. That is, processing takes place in close proximity to, or even on, the very devices that generate the data. With Edge AI, response times are shortened, and efficiency and security are improved when executing Artificial Intelligence models on Edge devices. Edge AI features For Edge AI to be possible, analytical models and AI algorithms must be able to run on local servers or on devices equipped with processors with a certain computational capacity, as described in the article Edge Computing and Machine Learning, a strategic alliance. However, Edge AI devices often have limitations in terms of computing power, memory, and storage space for data, and also in autonomy if they run on a battery. These limitations may require optimizing algorithms and, depending on their purpose, using specialized hardware: Optimising algorithms by applying techniques such as reducing the complexity of models, quantizing parameters, or eliminating unnecessary connections in the neural network (pruning) to speed up operation, as well as using architectures designed for low-power devices. Specialised hardware with specific processors for AI applications, such as computer vision and neural network inference processors.
AI-focused ASIC (Application-Specific Integrated Circuit) chips, such as the IBM AIU processor, offer high performance and low latency compared to general-purpose processors or CPUs. Security is also a critical aspect for Edge AI devices. Processing and analysing data on a local device reduces the risk of data exposure, but Edge AI devices are also vulnerable and equally exposed to cyber threats and attacks. The use of Edge AI can also have privacy implications for users, depending on their function. Therefore, Edge AI must implement mechanisms for encryption, authentication, anomaly detection and privacy management. Differences between Edge AI and Cloud AI As mentioned above, the main difference between Cloud AI and Edge AI is where the algorithms run: In Cloud AI, data storage and processing happen on conventional, centralised Cloud servers, often managed by a Cloud provider. In Edge AI, data and algorithms are stored and run on hardware at the periphery of the network, from wearables and vehicles to IoT devices and Edge servers, such as AWS Snowball. This difference means that each approach has its advantages and disadvantages, as we will see in more detail in a future post, but they can be briefly summarised as follows: Cloud AI Advantages: it usually has greater capacity and computing power, which allows it to process and analyse large amounts of data. As a cloud platform, the power and capacity can be scaled up or down as needed. Disadvantages: The need to transfer data can cause speed, security, regulatory, data residency or privacy issues. Physical distance of hundreds or thousands of kilometres or network congestion can slow communications and increase data exposure. Edge AI Advantages: requires less bandwidth and has almost imperceptible latency. It offers an immediate response and increased data security, which can also benefit users' privacy. It can operate in remote areas or isolated environments without connectivity.
Disadvantages: As mentioned above, limited computing power and memory can reduce the types and complexity of AI models they can run. The need to optimise algorithms and use specific hardware can increase development and deployment costs. Edge AI applications Edge AI can be applied in a variety of sectors. Some of the most obvious applications include security surveillance, voice assistants, Smart Industry or wearables. Some use cases would be: Security cameras with machine vision capabilities that analyse image content in real time to identify suspicious objects or people, events or behaviour. Voice assistants that learn to recognise speech locally to detect when to activate or to respond to simple instructions. Smart Industry, where sensors or vision systems monitor the quality of production or logistics processes or products, such as the demo we showed together with AWS at MWC. Wearables and medical devices, such as heart rate or glucose monitors, that analyse patient data in real time and alert on any anomaly. Edge AI can also be applied to driver assistance systems (ADAS) and autonomous cars, to process data captured by vehicle sensors, such as cameras or accelerometers, and detect and respond to traffic conditions. Conclusion Edge AI is a suitable approach in cases where data needs to be analysed and processed and an immediate response is required, without delays caused by latency or insufficient connectivity or bandwidth, while maintaining data privacy and security with the right implementation. Edge AI is still in its infancy, but with the continuous improvement of processing and storage capabilities in IoT devices and new developments in AI-specific hardware, such as AIU processors, it is evolving rapidly. The combination of Edge AI and next-generation connectivity, such as 5G and LPWA or NB-IoT via satellite, will accelerate innovation and has the potential to extend the reach of AI-enabled solutions to a wider range of industries and regions.
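To make the wearable use case concrete, here is a toy sketch of on-device anomaly detection: readings are compared against a rolling baseline locally, with no cloud round-trip. The window size and threshold are invented for the example and have no clinical meaning.

```python
from collections import deque

class HeartRateMonitor:
    """Toy on-device anomaly detector: flags readings that deviate strongly
    from a rolling baseline. Parameters are illustrative, not clinical."""

    def __init__(self, window=10, threshold=0.25):
        self.history = deque(maxlen=window)  # rolling window of recent readings
        self.threshold = threshold           # allowed deviation, as a fraction

    def check(self, bpm):
        if len(self.history) >= 3:  # wait until we have a minimal baseline
            baseline = sum(self.history) / len(self.history)
            if abs(bpm - baseline) / baseline > self.threshold:
                self.history.append(bpm)
                return "alert"
        self.history.append(bpm)
        return "ok"

monitor = HeartRateMonitor()
readings = [72, 74, 71, 73, 75, 72, 130, 74]
statuses = [monitor.check(bpm) for bpm in readings]
# The 130 bpm spike is flagged locally, without sending data anywhere.
```

Because the decision happens on the device, an alert can be raised even when connectivity is poor, which is exactly the scenario Edge AI targets.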
AI of Things Raspberry Pi for Edge AI: Artificial Intelligence at the edge for everyone January 4, 2024 Featured photo: Luke Chesser / Unsplash. Apple Watch is an example of an Edge AI (consumer) device capable of processing data and executing AI models locally.
April 13, 2023
Connectivity & IoT
Satellites with 5G technology to provide IoT coverage worldwide
📌 In July 2023, Sateliot and Telefónica brought 5G network coverage to space for the first time ever using a conventional SIM card integrated into Telefónica Tech's Kite platform. This paves the way for global IoT connectivity. Find out more. During the last edition of MWC, our colleague Javier Zorzano participated in the "5G IoT Summit: Hybrid NB-IoT and Satellite Solutions" alongside Shahbaz Ali, from Sateliot. In their talk, Javier and Shahbaz discussed the benefits and challenges around satellite connectivity for IoT devices. This technology, which we are jointly developing, expands and complements our portfolio of NB-IoT connectivity solutions via 5G and LPWA networks. This way, we are configuring a hybrid solution, with both terrestrial and non-terrestrial (NTN) satellite networks, to offer our customers global NB-IoT coverage. "Only 30 percent of the world’s surface has coverage from terrestrial networks" according to IoT For All. For this publication, satellite connectivity was one of the "dominant trends" at Mobile World Congress 2023. "The next step in NB-IoT connectivity is to provide global coverage, worldwide. And that step is what we are taking now." —Javier Zorzano (Telefónica Tech) The growing number of industries and sectors adopting IoT solutions worldwide makes it necessary to develop this hybrid IoT connectivity that provides coverage worldwide. Otherwise, there will be more and more IoT use cases that cannot be developed due to the lack of communication. For example, for the livestock sector operating in rural or remote regions, for the logistics sector when it needs to accurately track the status and location of its goods on transoceanic routes, or for the renewable energy sector that manages wind farms or solar farms in hard-to-reach places. Convergence of terrestrial and non-terrestrial networks As explained by Shahbaz Ali during the meeting, Sateliot is developing the first 5G NB-IoT constellation of LEO (Low Earth Orbit) nanosatellites. 
This constellation is made up of small and efficient satellites typically located between 500 and 600 kilometers in altitude to configure an NTN (Non-Terrestrial Network) IoT connectivity network capable of integrating into our 5G terrestrial network. Sateliot's nanosatellite recreation. Image: SATELIOT Our collaboration with Sateliot consists of developing a technological solution that includes satellite IoT connectivity to offer an affordable and transparent solution for our customers: satellite IoT connectivity based on the same 3GPP standard as 5G and NB-IoT networks. This connectivity is also compatible with the same IoT devices currently used thanks to the development and certification of hybrid connectivity modules. "Thanks to standardization, our hybrid IoT connectivity technology is affordable and scalable, and will reduce frictions when adopting it." - Shahbaz Ali (Sateliot) This way any conventional IoT device can simultaneously work with terrestrial 5G NB-IoT networks and satellite networks. So service providers "will be able to connect with the nanosatellite network using a roaming service when they need 5G coverage to offer connectivity and follow, for example, the cargo of a moving ship, the trajectory of a mountain biker or alert emergency services in the case of an accident," explains Sateliot. Connectivity & IoT 5 free online courses to learn IoT (Internet of Things) August 3, 2023 Benefits and challenges of our satellite IoT connectivity The development of IoT connectivity via satellite provides two important benefits to any IoT solution: Global coverage in remote areas and in territories without network infrastructure or mobile coverage, helping to close the digital divide between regions. Backup coverage to reinforce mobile network coverage and to ensure service continuity in case of disruption due to incidents or natural disasters.
The solution we are developing with Sateliot not only allows the use of the same IoT devices already on the market, but also the same SIM card. Whether connected through a conventional network or through Sateliot's network, connectivity can be managed through our Kite platform. Satellite connectivity managed with Kite Platform Kite Platform is our managed connectivity solution through which our customers can easily control and monitor their data line SIM cards in real time, via web or API. In this way, "our solution is equivalent to having a conventional roaming agreement, but with a satellite operator," explains Javier Zorzano. From a technical point of view, this simplifies and reduces the adoption time of this technology. From a commercial point of view, this roaming agreement is more affordable than existing solutions in the market. The first commercial pilots of satellite NB-IoT with customers are planned for the end of the year. With our technology and in collaboration with Sateliot, we address the major challenges of satellite connectivity: Connectivity management, with our Kite Platform technology. Service cost, a "very sensitive" aspect for the IoT B2B market. The massive adoption of IoT that demands coverage in places not covered by terrestrial networks. Conclusion Our partnership with Sateliot helps provide NB-IoT coverage on a global scale. It also contributes to the massive deployment of our IoT solutions and devices, especially in the B2B sector. This technological solution opens up new possibilities and use cases for IoT technology. From precise tracking of goods or fleet management anywhere in the world to the development of smart livestock and agriculture solutions in rural areas, as well as environmental monitoring projects or the management of natural resources such as water or energy. "Possibilities that we could not consider before are now on the table," concludes Javier Zorzano. 
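To illustrate what managing SIMs "via web or API" can look like in practice, the sketch below parses a status payload and groups SIMs by network. Every field name and value here is invented for the example; it is not Kite Platform's actual API schema.

```python
import json

# Hypothetical SIM-status payload; the schema is invented for illustration
# and does not reflect Kite Platform's real API.
payload = json.loads("""
{
  "sims": [
    {"icc": "8934...01", "status": "active", "network": "terrestrial", "last_seen_min": 2},
    {"icc": "8934...02", "status": "active", "network": "satellite", "last_seen_min": 14},
    {"icc": "8934...03", "status": "suspended", "network": null, "last_seen_min": 980}
  ]
}
""")

def summarize(sims, stale_after_min=60):
    """Count SIMs per network type and flag those not seen recently."""
    summary = {"terrestrial": 0, "satellite": 0, "stale": []}
    for sim in sims:
        if sim["network"] in summary:
            summary[sim["network"]] += 1
        if sim["last_seen_min"] > stale_after_min:
            summary["stale"].append(sim["icc"])
    return summary

report = summarize(payload["sims"])
# One SIM on a terrestrial network, one roaming via satellite,
# and one suspended SIM flagged as stale.
```

The point of a hybrid NB-IoT solution is precisely that the same SIM can appear under either network type while being monitored through one management plane.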
Watch the talk with Javier Zorzano (Telefónica Tech) and Shahbaz Ali (Sateliot) in which they discuss the benefits and challenges of satellite connectivity for IoT devices (Hybrid NB-IoT), a solution that expands and complements our IoT connectivity portfolio. Featured photo: Stefan Stefancik / Unsplash
April 10, 2023
Cyber Security
Cybercrime, a constant threat to all types of companies
Cyber threats have existed since technology began to be used in companies and organizations. But the evolution of the technology world in the 21st century has changed the landscape: the famous "security perimeter" no longer exists, and our digital data and assets are located in different places and constantly moving, making it difficult to protect against threats. Mobile devices, cloud services, or the location of digital assets in changing places, sometimes outside our borders, have blurred that perimeter. This has led to a new era in which organizations face global risks. Cybercrime as a service (CaaS) Today, malicious actors have professionalized and many operate as international organized crime groups. These groups "rent out" their attack and encryption tools in affiliate models, meaning that criminals with lower levels of preparation can access powerful attack tools in exchange for sharing their profits. At the same time, the technological advances that protect organizations have been matched by malicious actors who remain at the forefront of the latest technologies and techniques. Moreover, key legal obstacles, such as the practical impossibility of attributing criminal offenses in certain areas of the Internet like the dark web, allow these actors to keep operating with impunity. CYBER SECURITY Artificial Intelligence, ChatGPT, and Cyber Security February 15, 2023 Main threats today Among the main threats faced by organizations today are: Ransomware: Destructive attacks that encrypt an organization's data and demand ransom in exchange for the tools and secret keys that allow its recovery. Denial of Service (DDoS): Attacks aimed at stopping or degrading an organization's websites or systems. They can be motivated by activism or carried out on commission, for a reward, etc. The environment is artificially overloaded until it stops working or does so very poorly. Email-related attacks and identity theft: Phishing is one of the most used methods.
Criminals send "deceptive" messages with links or malicious files that, once opened, infect systems and allow malicious actors to access valuable organization information. Data theft: Malicious actors take over large amounts of an organization's data and exfiltrate it (possibly using the company's own legitimate mechanisms) to be sold, auctioned, etc. Malware: Other families of malicious software are frequently used to harm systems (viruses), spy (backdoors, keyloggers, etc.), or profit. For example, "miners" are programs that mine cryptocurrencies in the infrastructure without the company being aware, generating economic benefits for the malicious actor. Insiders: Sometimes the "enemy is at home" and they are employees or collaborators who act out of revenge or to obtain economic benefit. How to protect yourself against these threats? For any company, SME, and organization, protection against these threats must be approached from a holistic and comprehensive perspective, considering all relevant and interrelated aspects. A solid Cybersecurity strategy must take into account both prevention and detection and response to incidents. Therefore, for companies and organizations, it is essential to: Carry out good information security management, which includes identifying the organization's critical assets, assessing risks, defining security measures, and implementing appropriate controls. Have clear security policies and procedures that establish the responsibilities and obligations of employees and other actors related to the organization, as well as how to act in case of an incident. Offer good training and awareness in Cybersecurity for all employees of the organization, so that they are aware of the risks and know how to act in case of an incident. Have monitoring and analysis systems for network and system activity that allow early detection of possible security incidents and enable quick action to minimize damage. 
Design a Cybersecurity incident response plan that establishes the procedures to be followed in case of an incident, including notification to authorities and the management of communication with customers and other stakeholders. Other measures that can help protect an organization against cybersecurity threats include the use of advanced technological security solutions, such as firewalls, antivirus software, intrusion detection systems, and vulnerability management solutions. Cyber Security Cybersecurity: “black swan” events in a connected world March 21, 2023 Conclusion Cyber threats are real for all organizations, and protecting a company's assets and data against them is essential. Threats are becoming increasingly sophisticated and dangerous, and organizations must stay up-to-date with the latest trends and threats in the field of cybersecurity to ensure adequate protection and be prepared to face them effectively. A comprehensive security approach, including technological measures, security policies, and staff training, is essential to minimize risks. Featured photo: Stefano Pollio / Unsplash
March 29, 2023
Connectivity & IoT
5G connectivity: Four real and practical use cases
According to data from GSMA, collected by the publication Redes & Telecom, by the end of 2022 the number of 5G connections worldwide had surpassed one billion; this figure will reach two billion in 2025, providing coverage to a third of the population in Europe. 5G connectivity is experiencing a progressive deployment that is faster than that of its predecessors, 3G and 4G. Real use cases for 5G connectivity At Telefónica Tech, we are already implementing solutions to transform sectors such as industry by launching different projects that successfully leverage the advantages and enormous potential of 5G connectivity, such as: Gestamp: The smart factory of Gestamp is based on a digital twin. A digital twin consists of a virtual model of the factory that optimizes production and helps with decision-making. The physical elements of the plant are connected via 5G to generate a virtual copy of the entire factory that allows industrial processes to be validated, different scenarios to be tested, and decisions based on data to be made. APM Terminals: One of the largest operators of ports, maritime and land terminals in the world uses 5G connectivity to coordinate port traffic and improve safety through the deployment of 5G coverage at the APM Terminals terminal in the Port of Barcelona. The provision of 5G connectivity in cranes, trucks, and mobile staff allows all active actors to be located and visible in real-time, whether they are in motion or not, within the terminal. This helps reduce accidents among facilities, workers, vehicles, and goods. Navantia: The leading Spanish manufacturer of advanced ships uses 5G connectivity to remotely assist maintenance officers through Augmented Reality (AR) glasses. It also uses 5G to support the shipbuilding process, including processing 3D scanning in real-time, thus optimizing its production processes.
IE University: It has an immersive teaching center on its Segovia Campus that uses 5G connectivity in its implementation of virtual classes through streaming and from personal devices. This way, it can incorporate new educational resources such as Virtual Reality (VR), which allows specialized classes to be taught through immersive experiences for its students. AI & Data Key ingredients for today's Smart Cities November 16, 2023 5G Connectivity: Key Advantages The three key advantages of 5G connectivity are its transmission capacity and speed, imperceptible latency, and high concurrency of devices connected simultaneously in specific geographical areas. Capacity and speed: 5G can reach download speeds of up to 10 gigabits per second (Gbps), allowing for the transmission of large amounts of data in less time. Latency: Latency is the time that elapses from when a connection or request is initiated on one end until a response is received from the other end. For example, it is the time that elapses from when an industrial robot requests instructions to operate until it receives those instructions. With 5G, latency can be as low as 5 milliseconds, allowing for almost real-time communication. Concurrency: The high capacity and concurrency that 5G allows make it possible to connect multiple devices simultaneously, including IoT sensors and actuators: 5G supports up to one million connected devices per square kilometer. 5G technology is up to 90% more efficient in terms of energy consumption per unit of traffic. Because of its characteristics and advantages, 5G connectivity also has significant implications for other new-generation digital technologies, including Big Data, the Internet of Things (IoT), and Artificial Intelligence (AI). IoT (Internet of Things): 5G enables the reliable, secure connection of numerous devices without sacrificing low latency.
Big Data: Thanks to 5G's high data transfer capacity, it is possible to send and receive large volumes of information almost in real-time. Artificial Intelligence (AI): 5G connectivity enables automated systems to respond almost instantly. Cyber Security Consequences of a cyber-attack in industrial environments January 17, 2023
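Those latency figures can be put in context with a back-of-the-envelope calculation: physical distance alone adds round-trip delay before any processing happens, which is why edge deployments matter for near-real-time control. The speed constant below is a common rule of thumb for light in optical fibre (~two-thirds of c); the distances are illustrative.

```python
FIBRE_SPEED_KM_PER_MS = 200.0  # ~2/3 the speed of light, a common rule of thumb

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds: one way out, one way back."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

# Illustrative distances between a 5G-connected device and its processing site:
for label, km in [("on-site edge server", 1),
                  ("regional data center", 300),
                  ("distant cloud region", 2000)]:
    print(f"{label:22s} {round_trip_ms(km):6.2f} ms")
```

A 2,000 km round trip already costs ~20 ms in propagation alone, so a 5 ms end-to-end latency budget is only realistic when processing stays close to the device.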
March 22, 2023
Telefónica Tech
Meet #LadyHacker Jess Woods, Cloud expert at Telefónica Tech
Today, March 8th, International Women's Day, we start a series of interviews with #LadyHackers from Telefónica Tech. Through their work and effort, these women are helping us become a more creative and innovative company. * * * Tell us a little about yourself: who are you and what do you do? I'm Jess Woods and I joined Telefonica Tech UK&I in October 2021 as a Senior Product Manager, which means that I'm responsible for our Cloud product strategy and roadmap. What is your specialization and how do you acquire new knowledge on a daily basis? I specialise in our Cloud services portfolio, comprising Public Cloud, Private Cloud, Data Protection and Edge Computing. This is a set of broad and ever-changing technologies, so I am constantly learning through conversation, research and most importantly listening to others. What or who motivated you to choose a technology-related profession? I studied Sociology at university so whilst it wasn't a STEAM subject, I did a module that focussed on the impact of the ‘Internet on Society’ and it really piqued my interest. After I graduated, I secured a role in Customer Services at a locally headquartered MSP. An opportunity opened up to join a newly formed Product Management Team and I jumped at the chance to apply. Twelve years on, I'm passionate about the discipline of Product Management and also how technology can improve lives. Most of my technical knowledge comes from being curious, through constant communication with internal and external stakeholders, online training and a lot of learning on the job which I think never ends if you retain that growth mindset. Many think that not studying a STEAM subject prevents them from building careers in the technology sector but there are lots of routes available if you have the passion. What training would you recommend to someone who wants to pursue this specialty?
Being a Product Manager is a diverse role and requires many skill sets so there isn't really one single training route to become one. As it's a relatively new profession, there are different interpretations of the role so it may be industry specific or require different skills dependent on whether you are managing physical or software products. Personally, I really enjoyed doing a course run by Product Focus for Product Managers in technical roles. It's increasingly important to make sure you can adopt agile approaches so any training that you can do in that space is really beneficial. It's also vital that you know your product and your market so taking vendor-led training, reading industry whitepapers and research and keeping an eye on tech news are all important. Diversity encourages the search for novel information and perspectives, leading to better decision-making and problem-solving, improving the bottom line of companies. —Scientific American. Could you explain to us what the #LadyHacker initiative means to you? I couldn't be more encouraging of making the role of women in the tech sector more visible —there have been too many times where I am the only woman in the room or amongst a select few. Gender stereotypes need to be broken for both men and women. But we need society to recognise that it isn't a level playing field out there, and there is more work to be done when it comes to equity and inclusion for women. Particularly in technology professions. We can make a difference with initiatives like #LadyHacker to let more voices be heard. What do women bring to STEAM professions? Honestly, I think that women bring the same that men do to STEAM professions. We're all humans, we're not all the same, nor do we have the same needs. But women bring energy, expertise, dedication and determination to drive change within any STEAM profession.
Jess Woods, Cloud expert at Telefónica Tech We need to work harder to ensure that our society encourages this from an early age and that, when we do enter these careers, we're brought into environments that welcome inclusion and diversity. Also, we must recognise and address the need for equity. How is your experience in the work environment? I've had mixed experiences within the workplace over my career so far. Some have been not so great — one example is at a trade show one year: somebody came up to our stand and asked who the technical representative was. They were directed to me and visibly scoffed in disbelief, as I was at the time in my mid-twenties and clearly not who he was expecting to speak to. However, I have had so many fantastic experiences in my profession. I have had some fantastic mentors, built some excellent relationships and formed many friendships over my 15 years in IT. What advice would you give to the Jess Woods of 10 years ago? Ten years ago, I was two years into my career as a Product Manager. I think my advice would be to trust your instincts and that everything happens for a reason. I was hard on myself when I went through university clearing and, again, when I wasn't successful in securing a graduate role. Hard work and determination have led to me being in the same position in my career as those who were on graduate schemes, it just meant that I needed to take a different path to get there. Telefónica Tech “To be a hacker in life is to be a passionate, talented person who manages to influence the transformation of society", Carmen Alonso July 28, 2022 Telefónica Tech «We are moving towards genderless professions», María Martínez August 8, 2022 Telefónica Tech Meet #LadyHacker Karla Parra, Cybersecurity expert at Telefónica Tech June 20, 2023 Telefónica Tech #LadyHacker Tamires Abujamra is innovating at Telefónica Tech Brazil September 14, 2023
March 8, 2023
Telefónica Tech
MWC: All the innovations and expertise we shared
Following four intense days, Mobile World Congress (MWC) 2023 bids farewell today until next year. Since opening its doors last Monday, attendees have had the opportunity to see the latest innovations in the field of mobile connectivity first-hand. Telefónica Tech participated in MWC 2023 with a wide range of activities this year, including presentations, demos and professional meetings on next-generation digital technologies and solutions for the digitalisation of companies at a stand that extended further than the physical space, into the metaverse and beyond. In this post we collect and summarise the main activities in which we participated in this edition of the MWC. Demos of our digital technologies Smart Agriculture Solutions Making Smart Agro Happen, MWC2023 The Making Smart Agro Happen demo, applied to a vineyard at Bodegas Godeval, reiterated our commitment to bringing digitalization to all economic activities and sectors, including agriculture, using solutions with the Eco Smart seal, certified by AENOR, by showcasing how digitalization and new-generation technologies can improve productivity, resilience, and sustainability in agriculture. The demo included precision agriculture solutions, smart irrigation management, Artificial Intelligence algorithms to prevent diseases and pests, and the optimization of resources such as water and fertilizers. Furthermore, it was shown how Blockchain technology can certify and protect entries in the digital field notebook, enabling the traceability of production processes and the certification of the origin of the products. Smart industry and digital twins Making Smart Industry Happen, MWC 2023 With the Making Smart Industry Happen demo, we showed how a 'digital twin' works using technologies such as 5G, Edge Computing, data analytics, and machine learning to optimize industrial part production and make real-time decisions.
The digital twin includes a robotic arm that simulates the construction of an industrial part, and its movement is synchronized with a digital representation of the part. This helps detect potential failures and improve efficiency and quality in the manufacturing process, which in turn reduces energy and material consumption, increasing profitability. Indoor Insights with Computer Vision Using C2RO's computer vision technology, which is part of our solution portfolio, and by placing cameras in our booth and applying Artificial Intelligence algorithms, we could know, constantly and in real time, how many people were visiting the different areas of our booth and how visits were unfolding. This Indoor Insights solution allows for understanding the flow and anonymous behavior of visitors, which aids in data-driven decision-making. Quality Control in Industrial IoT At the Amazon Web Services (AWS) booth, we demonstrated an example of Industrial IoT (IIoT) usage for quality control in manufacturing, production, and logistics processes (using cameras, 5G connectivity, and our Edge Computing solutions) to ensure that products and services meet the required quality standards set by the manufacturer, customers, and regulators. Talks, presentations and expert sessions Conference The Smart Factory: Manufacturing, Maintenance, and Logistics, with Darío Cesena (Geprom Connecting Industries | Part of Telefónica Tech), Pablo Martin García (NTT DATA), and Jesús Martín Tello (NTT DATA). Mobile identity APIs: the road to success. Our expert Glyn Povah participated in a talk on Mobile Identity APIs, the next step to guarantee and protect users' identity through the mobile phone. This technology combines SIM-based solutions and mechanisms such as facial recognition or biometric authentication to identify the user.
Mobile Identity APIs help prevent fraud, duplicate cards or identity theft and guarantee the user's identity in critical applications in sectors such as healthcare, finance and public administrations, among others. The smart factory: manufacturing, maintenance and logistics. Dario Cesena, CEO of Geprom, Part of Telefónica Tech, participated in this presentation dedicated to connected factories. Solutions based on connectivity technologies, Cloud and Edge Computing, Cyber Security and AI of Things (Internet of Things, Big Data and Artificial Intelligence) allow optimising operational management, digitising systems and processes, making production more efficient and sustainable, and collecting, analysing and correlating data, helping to make business decisions in real time. Juanjo González, IoT connectivity product manager at Telefónica Tech AI of Things, participated in 5G IoT Connection: from space to the whole world, a panel dedicated to 5G and LPWA connectivity technologies that are driving the mass adoption of IoT devices. The session delved into our solution developed with SatelIoT that can provide satellite connectivity to IoT devices, offering global coverage even in remote locations as a complement to terrestrial communications. In the session Aristeo, the industrial god of OT bees, our colleague José Cascallana, manager at Telefónica Tech's C4IN Cybersecurity Centre, demonstrated how Aristeo approaches industrial Cyber Security in a native way, deploying real industrial systems all over the internet as honeypots. In this way, Aristeo contributes to generating intelligence and knowledge that helps to identify and stop threats when they pose a risk to our customers' infrastructures.
In the Digitalisation of the mining industry: the success of our customers session, our Director of Industry 4.0 and New Business, Andrés Escribano, and Jorge Azaldegui, Head of Sales Specialists, discussed with Klever Morvely, manager at Minera Las Bambas, the growing need in the mining industry to adopt digital solutions that make their operations more efficient and sustainable, that add value to their processes and optimise their results through data, while reducing the human risks associated with mining activity. José Luis Núñez, Global Head for Blockchain & web3 at Telefónica Tech, anticipated in the DeFi-ning Opportunities session the next generation of financial services thanks to decentralised finance (DeFi, powered by Blockchain and web3 technology) that is challenging centralised financial systems for the benefit of people thanks to disruptive, accessible and secure solutions. Javier Zorzano, Head of Technology of Telefónica Tech AI of Things, and Shahbaz Ali, Head of Product of SatelIoT, participated in the 5G IoT Summit: Hybrid NB-IoT and Satellite Solutions. This event was dedicated to 5G connectivity as a key technology for interconnecting and operating billions of IoT devices for countless purposes. At the 5G IoT Summit we heard use cases and success stories that implement these technologies, enabling new business models, making production processes and company operations more efficient, saving time, resources and money, and reducing their environmental impact. The Smart and sustainable agriculture session by Ana Pociña and Paz Revuelta, product managers at Telefónica Tech AI of Things, was dedicated to how new-generation digital technologies, such as Smart Agriculture solutions, are essential for the agricultural sector to address challenges such as water scarcity, rising prices of resources and raw materials, increased demand, climate change and consumers' growing interest in local, healthy and sustainably produced food.
In 5G Revenue monetization business models and platforms for non-linear growth, our colleague Bernardo Campillo, Head of Industry Partnerships at Telefónica Tech AI of Things, addressed how to tackle some of the biggest challenges in adopting cutting-edge digital technologies, such as high investment or, in some cases, the difficulty of demonstrating return on investment. The Open Gateway initiative, in collaboration with GSMA, aims to align all ecosystem actors to promote collaboration and innovation, and to share risks and benefits.

Our new Transformation Handbooks. As usual, on the occasion of the MWC we have published two new Transformation Handbooks that explain, with use cases and success stories, why and how digitalisation is key to the progress and sustainability of our society. You can download them in English, in PDF format: Making Progress Happen and Making Innovation Happen. You can also freely access and download the Transformation Handbooks from the 2022 and 2021 editions.

José María Álvarez-Pallete, President of Telefónica and GSMA, presented the GSMA Open Gateway at Mobile World Congress. This initiative brings together more than twenty major operators so that telecommunications companies can share open, standardised APIs with the industry, large technology companies, hyperscalers, aggregators and service developers. This will provide universal access to 5G networks and telco infrastructure from different operators. In this way, operators' networks become platforms on which to develop new services and business models, benefiting the entire digital ecosystem, users and companies. Uses and applications of networks as platforms include industrial automation, autonomous driving, remote surgery, gaming, emergency management, holographic communications, and virtual and immersive worlds.
March 2, 2023
AI & Data
The 9th century mathematician who laid the foundations of Artificial Intelligence
Algorithm is a very trendy word, as it relates closely to automation, data-driven decision-making and Artificial Intelligence in the digital realm. Algorithms are used in many machine learning systems and neural networks to perform tasks such as image classification, pattern recognition or data generation, including content like text, images or music. An algorithm is a defined series of specific steps for performing a particular task or solving a problem. Nowadays, algorithms are present in many common systems and applications: they determine which posts we see on social media, which route the GPS gives us, which news we read, or which suggestions online stores and streaming platforms show us, to name just a few examples.

Al-Juarismi and the origin of algorithms

However, algorithms are not a modern phenomenon; their origins can be traced back to ancient times. They were probably used as early as ancient Babylon around 2500 BC, and continued to be developed in later cultures, including the Egyptian, Greek and Indian. One of the best known is Euclid's algorithm, developed around the 3rd century BC. More "recently", in the 9th century, the Persian mathematician Al-Juarismi (or al-Khwarizmi) compiled his mathematical works in a compendium published around 825 AD, helping to lay the foundations of algebra and modern mathematics, including the numeral system we use today. Al-Juarismi's publication included a series of solutions to algebraic problems and described methods for solving linear and quadratic equations (of first and second degree, respectively, essential in programming and problem-solving in science, engineering and technology) using the sets of rules and procedures that we now know as algorithms.
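Both of the historical procedures mentioned above can be written directly as code. A minimal sketch in Python: Euclid's algorithm for the greatest common divisor, and the quadratic formula as a fixed sequence of rules of the kind Al-Juarismi described (the implementation details here are ours, for illustration only):

```python
import math

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

def solve_quadratic(a: float, b: float, c: float) -> tuple:
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula,
    a step-by-step procedure for second-degree equations."""
    d = b * b - 4 * a * c  # the discriminant decides how many real roots exist
    if d < 0:
        return ()  # no real roots
    r = math.sqrt(d)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(gcd(48, 18))                # 6
print(solve_quadratic(1, -5, 6))  # (3.0, 2.0)
```

Both functions are algorithms in the strict sense: a finite, unambiguous list of steps that always terminates with the answer.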
As for the origin of the term algorithm, the most widely held view points to the Latinisation of the name Al-Juarismi (Algorizmi), which later derived into algorismus, although its exact etymology is not entirely clear. In any case, the term was adopted in medieval Europe to refer to the methods of the Persian mathematician, a highly influential author in the Middle Ages for his contributions to mathematics, astronomy and geography.

Because of his contributions to modern mathematics, Al-Juarismi can also be considered one of the founding fathers of computing, as his work on solving problems through algorithms has been fundamental to the development of computer science. Today, algorithms are essential to programming and to the functioning of most computer systems. They are used to solve a wide variety of problems in fields such as Artificial Intelligence, cryptography, computer security and process optimisation, among others.

Algorithms and Artificial Intelligence

In the field that concerns us, algorithms provide the framework through which Artificial Intelligence systems learn and evolve. Thanks to them, Artificial Intelligence can perform complex tasks such as natural language processing, identifying anomalies and objects in images, generating content, or detecting patterns in large volumes of data and making decisions based on that information. However, algorithms are not the only thing needed to develop Artificial Intelligence systems: large amounts of relevant, high-quality data, as well as computing resources with the power and capabilities needed to process them, are also necessary.
In addition, ethics, privacy, transparency, and security are also important aspects to consider when building trustworthy Artificial Intelligence, as its application can have a significant impact on people and society. Featured photo: Charles Deluvio / Unsplash
March 1, 2023
Telefónica Tech
We are now live! Discover the new Telefónica Tech website
We have redesigned the Telefónica Tech website to represent who we are as a digital solutions integrator. We also want to share what we do, who is behind it, and how next-generation digital technologies are making companies and organisations more efficient, competitive and resilient. The new Telefónica Tech website focuses on our most valuable assets:

The people who make up the Telefónica Tech team in different countries and regions: more than 6,000 professionals who work every day to provide innovative solutions to millions of customers.

The technology partners with whom we share two commitments: to develop the best products and services, and to help customers achieve their goals.

Our customers, whom we support in their digitalisation process by integrating the solutions they need to grow into their systems and processes. Because their success is our success.

Our technologies in connectivity, Cyber Security, Cloud, IoT (Internet of Things), Big Data, Artificial Intelligence and Blockchain, and the innovation and development labs we have in different cities across Europe and America.

We also give visibility to analysts in the technology sector who know (and acknowledge!) our capabilities and the value we bring to our clients. “We have improved Telefónica Tech's website to continue positioning ourselves as the best technological partner in our customers' digital transformation.” María Díaz, Head of Marketing & Communications, Telefónica Tech.

We have also developed a News section that brings together the initiatives, events and multimedia content you need to stay informed about how digital technologies are creating new growth opportunities for companies and society. We would love to hear from you: we invite you to visit the new Telefónica Tech website and join the conversation, or share any comments that will help us improve. Thank you!
February 14, 2023
AI & Data
Smart urban lighting: business opportunities and benefits
Smart street lighting is one of the pillars of the smart city. In fact, it is one of the best examples of what the term Smart City means: applying next-generation digital technologies to improve citizens' lives with more efficient and sustainable public services. Smart street lighting brings all these advantages together through the confluence of LED lighting technology, connectivity, IoT (Internet of Things) devices and management platforms with Artificial Intelligence (AI). Each luminaire or streetlight in the public lighting network is connected via 5G or NB-IoT communications networks and sends the data captured by its sensors (lighting, environmental or consumption, among others) to a platform capable of automatically optimising the operation and efficiency of each individual luminaire.

Why consume energy when there is no one in the street? Smart street lighting can not only adjust light intensity depending on the presence of people in the street; thanks to Artificial Intelligence algorithms, it can also predict when, and at what intensity, each luminaire will need to be switched on, or the lighting of a building or monument, for example. This improves the efficiency of the network as well as citizens' sense of safety and perception of the service. Smart street lighting has the potential to significantly reduce electricity consumption and pollutant emissions. These savings come on top of the lower consumption inherent to LED light sources, which are not only cheaper and more efficient but also last significantly longer than conventional gas or filament lighting.
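The idea of lighting only when and as much as needed can be sketched as a simple decision rule. This is a hypothetical illustration only: the thresholds and brightness levels are invented, and a real platform would use trained prediction models rather than fixed rules:

```python
def luminaire_level(ambient_lux: float, presence: bool,
                    dark_threshold: float = 40.0) -> int:
    """Return the brightness (as a percentage) for one streetlight:
    off in daylight, full power when someone is nearby at night,
    and a low standby level otherwise."""
    if ambient_lux >= dark_threshold:
        return 0                    # enough daylight: no artificial light
    return 100 if presence else 20  # night: full power vs. standby

print(luminaire_level(120.0, True))  # 0   (daytime)
print(luminaire_level(5.0, False))   # 20  (empty street at night)
print(luminaire_level(5.0, True))    # 100 (pedestrian detected)
```

Even this toy policy captures where the savings come from: most of the night, most streets are empty, so most luminaires can sit at the standby level.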
Savings and new business opportunities for municipalities. Smart street lighting allows municipalities, councils and city governments to take advantage of the ubiquity of streetlights to provide additional services, including: sensors to combat light, noise and environmental pollution; charging points for electric vehicles; security and traffic-control cameras; Wi-Fi access points to municipal networks; recharging points for public transport cards; information or advertising panels; and mobile phone and 5G antennas. Some of these additional services have the potential to generate extra revenue for municipalities. Smart street lighting also means better service and attention for citizens: fewer warning calls and complaints about broken luminaires, and more efficient maintenance. And, of course, it increases the city's attractiveness, both for citizens and tourists, by promoting social and cultural activities, and for businesses and companies.

Benefits of smart urban lighting for citizens. A smart street lighting network has a positive impact on citizens and on the image of cities. Adapting the lighting to the needs of the streets, for example, improves the safety of public spaces and the mobility of both people and vehicles. Smart public lighting also reduces light pollution and the nuisance that fixed, constant-intensity lighting sometimes causes to nearby premises and dwellings, as well as to flora and fauna, improving citizens' quality of life. It also benefits citizens by enabling the new services already mentioned, such as charging points for electric vehicles for residents without a garage or domestic charging point. Featured photo: Vlado Paunovic / Unsplash
January 25, 2023
AI & Data
A concept popularized by a former Google CEO is more relevant today than ever
A few years ago, Google's then CEO Eric Schmidt popularised the concept of “augmented humanity”. The term refers to the ability of technology to “enhance human capabilities” to make us more efficient: to achieve better results with less effort. In short: “Augmented humanity is about computers helping us with tasks we are not very good at, and us helping computers with tasks they are not very good at.” —Eric Schmidt, former CEO of Google. In his talk, Schmidt highlighted the potential that Artificial Intelligence was beginning to show for improving our lives, a technology Google had already been applying for some time in some of its services.

Artificial Intelligence as a tool, not a replacement. Schmidt highlighted how Artificial Intelligence, a term first defined in the 1950s by John McCarthy, was beginning to show real progress in automating tasks that were complex, repetitive and monotonous, or even unmanageable for humans. Its adoption would thus allow us to focus on tasks related to “critical thinking and creativity”. “Augmented humanity is about embracing AI as a tool to improve people's lives, not as a replacement.” —Eric Schmidt, former CEO of Google. But what are those repetitive and monotonous tasks that we can automate with Artificial Intelligence? Some examples: analysis of large amounts of data from multiple sources to extract insights, make better decisions and optimise processes; pattern recognition in data to automate production, prevent incidents, anticipate future events or identify trends and preferences; image analysis to extract information from photographs and videos and enable machine vision; fully or partially autonomous driving and driver-assistance systems to reduce the number of accidents and optimise transportation; robotics to automate physical tasks, machinery and manufacturing, production or logistics processes.
Natural language processing to automate communication-related tasks such as customer service and support, translation or content generation. Artificial Intelligence is increasingly present in companies' digital transformation processes.

From rejection to normalisation: the pilgrimage of disruptive technologies. In 2010, Schmidt's talk on the growth of Artificial Intelligence generated scepticism and rejection among some people, even fear. But, as they say, “change is only scary when you are not willing to change with it”. The reasons are numerous, and also legitimate: common fears include job losses, risks to people's privacy and to their data, lack of regulation, increased inequality, the risk of cyber attacks, unresolved ethical considerations... not to mention the fear of artificial superintelligence, a recurring theme in science fiction. Throughout history, even before the Industrial Revolution, other technologies first generated rejection in some people or in society and later proved their usefulness. The landline telephone was once seen as unnecessary because messengers already existed, and raised concerns that other people (the operators) could listen in on conversations. A couple of more recent examples are mobile phones, because of fears they could endanger health, and the internet, because it could be used to commit crimes. These technologies have nevertheless proven their usefulness and are now essential for business and the economy, for individuals and for society.

Technology, reskilling and lifelong learning are key in the golden age of AI. The new work tools arriving with “the golden age of Artificial Intelligence”, in the words of Satya Nadella, CEO of Microsoft, require changes: not only to develop them, but also to incorporate them into work processes, adapt to them and manage them, just as happened before with computing or the internet.
We must also anticipate a potential gap between demand and the availability of suitable professional profiles. In this sense, “reskilling workers will help to overcome the impact”, in the words of José María Álvarez-Pallete, CEO of Telefónica. Training and updating skills is what will make it possible to fill the jobs generated by the digital transformation and to capture the new opportunities. “Technology has historically been a net job creator. The introduction of the personal computer in the 1970s and 1980s, for example, created millions of jobs.” —McKinsey Global Institute. For companies, “the winning long-term strategy is to create, cultivate and nurture” technology talent, says José Cerdán, CEO of Telefónica Tech. “It's not about competing to capture talent, but about developing internal recycling and training programmes to strengthen and update skills,” he adds. Achieving this, says Álvarez-Pallete, requires massive retraining through training programmes adapted to new skills, as well as promoting a culture of continuous learning in order to manage the transition to the new digital world in a socially responsible way. Featured photo: Nguyen Dang Hoang Nhu / Unsplash
January 23, 2023
AI & Data
How to start programming in Artificial Intelligence: languages, tools and recommendations
There is a very close relationship between Big Data and Artificial Intelligence (AI): Big Data is about capturing, processing and analysing large amounts of data, and when this data is combined with capabilities such as machine learning and predictive analytics, more value is extracted from it. This makes it possible, among other things, to find patterns in the data that are "invisible" to the human eye, allowing us to predict and prevent events, offer personalised product and service experiences, hold customer-service conversations and even create content. Knowing how to program Artificial Intelligence allows you to develop countless solutions and take advantage of the enormous potential offered by Big Data, Artificial Intelligence and the Internet of Things. Furthermore, in 2023 the demand for professionals qualified in the development of Artificial Intelligence solutions will continue to grow, because "AI will be present in all the digital transformation processes of companies" across numerous sectors, according to data from Fundación Telefónica's Employment Map.

Programming languages for AI development. Elena Díaz, head of the Centre of Excellence in the AI of Things product team at Telefónica Tech, is passionate about programming languages focused on exploiting data. As an expert, Elena recommends learning these programming languages for Artificial Intelligence:

Python is the most widely used programming language for developing Artificial Intelligence applications. It has many libraries and tools for machine learning, such as TensorFlow, PyBrain and PyTorch, among others. You can get started with Python with an experiment for everyone.

R is a programming language widely used for data analysis, data visualisation and machine learning, especially in the field of statistics.

SQL is the standard query language for relational databases, widely used in the fields of Big Data and Artificial Intelligence.
Knowledge of SQL is essential for managing and analysing large datasets, a core task in the field of Artificial Intelligence. However, as Elena explains, although these are currently the most widely used programming languages in the field of Artificial Intelligence, "we always have to be aware of evolutions, adapt to new languages and always be in a continuous learning process". "Once we learn to program, it is relatively easy to switch from one language to another." Elena Díaz, Telefónica Tech.

Other languages that also apply to the development of Artificial Intelligence include:

Java and C++ are more advanced programming languages, also used for developing Artificial Intelligence applications, including high-performance developments such as neural networks and machine learning algorithms.

JavaScript is a very popular language in web development that is increasingly used in the field of Artificial Intelligence, especially for machine learning applications aimed at users who access them through apps or web browsers, for example.

The right programming language will depend very much on your specific preferences and needs, even on your previous programming experience, if you already have some, and also on what you want to achieve or the project you are going to work on.

How to develop your Artificial Intelligence skills. The first thing you need to do if you are interested in starting to program Artificial Intelligence is to learn the basics. It is important to "overcome conceptual, mathematical or technical barriers" and understand basic concepts of Artificial Intelligence such as machine learning, computer vision and natural language processing.
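To make one of those basic concepts concrete, the following toy k-nearest-neighbour classifier shows how small the core idea of a machine learning task can be. It is written in plain Python, with no external libraries, and the data points and labels are invented for illustration:

```python
from collections import Counter

def knn_predict(samples, labels, point, k=3):
    """Label a new point by majority vote among its k nearest
    training samples (using squared Euclidean distance)."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(zip(samples, labels),
                     key=lambda pair: dist(pair[0], point))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Two invented clusters of 2-D points
samples = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["low", "low", "low", "high", "high", "high"]
print(knn_predict(samples, labels, (2, 2)))  # low
print(knn_predict(samples, labels, (7, 9)))  # high
```

Libraries such as scikit-learn provide industrial-strength versions of this and many other algorithms, but the underlying idea of learning labels from examples is the same.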
In addition, you will also find it helpful to:

Learn a programming language such as the aforementioned Python, R, Java or C++, which are widely used in the development of Artificial Intelligence applications. Choose one and dedicate time to learning it.

Practice with problems and projects: it is important to practice with real problems and projects. You can find datasets and problems on websites such as OpenAI or Kaggle.

Learn about AI tools and libraries: there are many, such as TensorFlow, PyTorch, scikit-learn and Keras. They allow you to build and train AI models easily, and you can use them in your projects.

Take every opportunity to keep learning: Artificial Intelligence is constantly evolving, so it is important to keep acquiring and updating skills. You can keep up to date with trends, new techniques and technologies through blogs and articles on the subject, webinars, talks and courses (many of them free), and by participating in online groups and forums.

Elena recommends that those interested in Artificial Intelligence "discover what you like the most and go deeper into it. Specialising in what most motivates you and helps you to continue to grow.” Featured photo: Kelly Sikkema / Unsplash
January 18, 2023
AI & Data
Blockchain for increasing security in the buying and selling of vehicles
Digital technologies such as connectivity, the Internet of Things and Artificial Intelligence are becoming increasingly common in vehicles, for example in the form of digital assistants, mobility services and advanced driver-assistance systems. In fact, the automotive industry is one of the main drivers of innovation. Adopting technology not only improves car manufacturers' products and processes, but also allows brands to differentiate themselves from the competition and attract drivers' interest by adapting to their needs and improving aspects such as efficiency (reduced fuel consumption and emissions), user experience and safety. Blockchain also has enormous potential as an innovative and useful technology for the industry from the user's point of view. In fact, its adoption in the form of NFTs is already beginning to be seen, although for now it is very incipient and almost anecdotal. Blockchain adoption in this area still needs to address challenges such as the infrastructure, regulations and standards that enable its adoption and use.

Advantages of using Blockchain to record vehicle information. There is little doubt, however, about the advantages and potential of Blockchain technology in the automotive sector, both for manufacturers and the ancillary sector (workshops, parts and spare-parts manufacturers, insurers...) and for users and vehicle owners. With Blockchain, the information stored and shared benefits from features such as immutability, transparency and traceability. Applied to vehicles, Blockchain technology can therefore be used to record, securely and transparently, all the information relating to the life of the vehicle from the time of its manufacture, including:

Date of manufacture, origin, serial number, technical characteristics, configuration, options or body colour, among other data.

Maintenance services, including dates, work carried out, parts replaced, modifications, bodywork repairs, recalls or MOTs.
Origin of the spare parts used and details of replacement elements, to ensure authenticity and prevent the use of second-hand or counterfeit parts.

Vehicle mileage and, for electric vehicles, battery charge cycles, to avoid the risk of tampering with the odometer or the battery charge-cycle counter in buying and selling transactions.

Ownership of the vehicle, to certify ownership as well as the registration certificate, reservation of title, payment of taxes, insurance, etc., and to keep a historical record of the owners and uses of the vehicle throughout its life.

As we said, the advantage of storing these and other data using Blockchain is that this technology improves the security and integrity of the recorded information. Blockchain stores information in cryptographically linked blocks replicated across the nodes of a network, which makes it very difficult to change or delete any data without leaving a record. Blockchain not only backs up vehicle documentation that is susceptible to loss or alteration, such as the maintenance book, but also provides greater transparency in the process of recording information, all of which is available for auditing and verification.

Blockchain to securely track the status and history of a vehicle. With access to all this information, anyone can know with greater certainty the condition of a vehicle and its maintenance history, both when selling it and when buying it. This improves confidence and security in buying and selling transactions by reducing opacity and the risk of fraud. Based on this same information recorded with Blockchain, car workshops have access to a detailed and reliable history of a vehicle's repairs, modifications, maintenance needs and possible incidents.
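The tamper-evidence property described above can be illustrated with a minimal hash-chain sketch in Python. This is a didactic model only, with no network, consensus or replication, and the vehicle records are invented examples:

```python
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers both its own record and the
    hash of the previous block, linking the two together."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain: list) -> bool:
    """Recompute every hash: any edit to a past record (or a broken
    link) makes verification fail from that block onwards."""
    prev = "genesis"
    for block in chain:
        payload = json.dumps(block["record"], sort_keys=True) + block["prev"]
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

chain, prev = [], "genesis"
for record in [{"event": "manufactured", "vin": "WVW0001"},
               {"event": "service", "km": 15000},
               {"event": "ownership transfer"}]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))            # True
chain[1]["record"]["km"] = 300  # tamper with the recorded mileage...
print(verify(chain))            # False: the chain no longer checks out
```

Because every block commits to the hash of its predecessor, "rolling back" a mileage figure would require recomputing every later block, which real Blockchain networks make infeasible by replicating the chain across many independent nodes.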
This same information would also allow insurance companies to assess the vehicle in detail and make a more accurate calculation of its condition and value, as well as of the most appropriate services and premium for its owner. Featured image: Popescu Andrei Alexandru / Unsplash.
January 11, 2023
Cyber Security
Cybersecurity: 13 posts to stay informed and protected from cyberthreats
With the digitalisation of companies and organisations and our increasing reliance on digital technologies, protecting data and information must be a priority. The adoption of cutting-edge digital technologies is not only transforming our lives, economy and society and improving our relationship with the environment: it also brings an increase in the number and sophistication of cyber attacks. As Cyber Security techniques and technologies improve, so do the tactics and methods used by cybercriminals. This forces companies to devote more resources to protecting their data and their IT and OT systems, which are increasingly valuable and critical assets for the operations, continuity and resilience of any organisation. We have selected a few posts to say goodbye to this year and welcome 2023: content that addresses the different types of attacks and threats, techniques to prevent them, and tools to detect and respond to security incidents. These posts also cover the best practices organisations should apply to protect themselves from threats, and emphasise the need for a robust security strategy to protect against intrusions and attacks. Cyber Security Where is your company on the cybersecurity journey? April 20, 2022 Cyber Security 3 Key Cyber Security Considerations July 3, 2023 Cyber Security The risks of not having controlled exposure to information (I) January 12, 2022 Cyber Security Attention: Data leak! (In search of lost data) November 3, 2022 Cyber Security Human factor key in cyber security September 28, 2022 Cyber Security Attacking login credentials June 22, 2022 Cyber Security XDR, the cybersecurity trend that dominated the RSA Conference 2022 June 15, 2022 Cyber Security 'Insiders' in Cybersecurity: “Catch me if you can” April 25, 2022 Cyber Security Where does ransomware attack?
Three main pillars September 29, 2021 Cyber Security XVI STIC Conference: 5 trends in Cyber Security highlighted by our analysts for 2023 December 12, 2022 Cyber Security Digital Identity Wallets against identity theft fraud February 8, 2022 Cyber Security Choosing a managed security services provider (MSSP): everything you need to know October 5, 2023 Cyber Security Cybersecurity in OT: a need with differences December 19, 2022
December 29, 2022
AI & Data
AI of Things: 9 essential posts to say goodbye to 2022
AI of Things (Artificial Intelligence of Things) is the set of technologies that bring together IoT devices (physical objects with sensors, software and connectivity) with Big Data and Artificial Intelligence solutions for processing, analysing and extracting insights from large volumes of data. These IoT (Internet of Things) objects range from thermostats, household appliances, vehicles, buildings and wearables (activity bracelets, watches, medical devices...) to infrastructure, supply networks, machinery and industrial systems. Thanks to this sensorisation of the physical world, it is possible to build a data-driven digital version of the environment, allowing us to recognise patterns, predict results, automate tasks, optimise processes, anticipate failures and needs, and generate insights that help make data-driven decisions. The application of AI of Things technologies multiplies the value of connected things and creates new business opportunities. Throughout this year, we have published a number of posts related to the AI of Things technologies that we develop, use and implement at Telefónica Tech: AI of Things Smart football stadiums: the world's greatest show, made even better May 25, 2022 AI of Things AI of Things (VIII): socio-demographic segmentation and video analytics to improve shopping experience July 13, 2022 AI & Data AI of Things for efficient and sustainable water management August 21, 2023 AI of Things Generative Artificial Intelligence, creating music to the rhythm of perceptron July 18, 2023 AI of Things How IoT solutions help us deal with rising energy prices? August 24, 2022 AI of Things Can Artificial Intelligence understand emotions?
May 23, 2023 AI of Things 10-minute delivery: how Artificial Intelligence optimises delivery routes September 26, 2022 AI of Things Endless worlds, realistic worlds: procedural generation and artificial intelligence in video games August 22, 2022 AI of Things Big Data and Artificial Intelligence solutions for the tourism industry September 27, 2022
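The pattern-detection idea at the heart of AI of Things can be illustrated with a minimal sketch that flags sensor readings deviating sharply from the recent average. The window, threshold and data here are arbitrary; real deployments rely on trained models rather than a fixed rule like this:

```python
def anomalies(readings, window=5, factor=2.0):
    """Return the indices of readings that deviate from the mean of the
    previous `window` readings by more than `factor` times 10% of that
    mean (a crude, fixed-threshold stand-in for a trained model)."""
    flagged = []
    for i in range(window, len(readings)):
        mean = sum(readings[i - window:i]) / window
        if abs(readings[i] - mean) > factor * max(1e-9, 0.1 * mean):
            flagged.append(i)
    return flagged

# Simulated temperature sensor: stable around 20 °C with one spike
data = [20.1, 19.9, 20.0, 20.2, 19.8, 20.0, 35.0, 20.1, 19.9]
print(anomalies(data))  # [6]
```

Spotting the spike at index 6 is the toy version of what AI of Things platforms do at scale: anticipate failures and incidents from streams of sensor data.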
December 27, 2022
Connectivity & IoT
Ethical IoT: principles for an implementation that respects people's rights
IoT (Internet of Things) is one of the fundamental technologies in a growing number of digital transformation processes and in the development of new business opportunities. It consists of connected devices and sensors that generate or capture data through everyday objects such as water and electricity meters, sportswear and wearables, vehicles, industrial systems, home automation sensors, etc. This large network of connected IoT devices generates a huge amount of data from the physical environment. This data can be processed with technologies such as Cloud, Big Data and Artificial Intelligence to help make data-driven decisions that, in turn, have an impact on the physical environment and on people. A simple example is a smart HVAC system that takes into account data from different sensors (thermostats, sunlight intensity, occupancy and circulation of people, etc.) to calculate and maintain the optimal temperature in a commercial space. IoT devices can also create virtual copies (digital twins) of physical infrastructure, facilities, factories, services and environments that accurately recreate their state, operations and behaviour. It is therefore important, as with Artificial Intelligence, to ensure that the data captured by IoT sensors and devices is recorded, stored and used in an ethical and responsible manner, particularly when personal data is involved, as in the case, for example, of a smartwatch capable of measuring physical activity, sleep and other health-related parameters.

Three ingredients for an "ethical IoT". Public trust, as with all technologies in general and digital technologies in particular, is essential to drive the mass adoption needed to harness the power and potential of IoT to improve the environment, the economy and society.
Transparency: as an essential element of trust, manufacturers and companies must be honest and clear about what data they will capture, how it will be used, for what purpose, and how it will influence decision-making. Knowing this, users understand how their data will be used and can make an informed decision.

Accountability: by receiving user consent, companies make a commitment to ensure that data is captured, stored and processed responsibly. Especially when personal information is involved, data must be kept confidential and secure and always processed for the benefit of the user, not just for financial return.

Security: to keep the information captured by IoT sensors and devices private and confidential, companies must set up an ecosystem that protects users and their information. This includes ensuring that neither IoT devices nor data are exposed to cyber threats and malicious attacks, and that data is only used for the intended purpose and always with the user's consent.

Initiatives promoting an ethical Internet of Things

Ethics is essential for any company developing, implementing or making use of IoT devices. Ensuring that data is protected and that it is captured and used honestly and transparently, always with the knowledge and consent of users, allows companies to respect people's digital rights and build a trusting relationship with the public. There are a number of international initiatives, both public and private, that guide and participate in the development of ethical IoT technologies, including the European Internet of Things policy, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the World Economic Forum's Centre for Cybersecurity.
Many large companies and big tech firms are also demonstrating their efforts and commitment to protecting people's digital rights and implementing the principles of ethics and transparency required by the massive use of data generated by sensors and IoT devices, and by their convergence with Big Data and Artificial Intelligence.

In this regard, the Ranking Digital Rights (RDR) initiative of New America's Open Technology Institute ranks the leading telecommunications, internet and mobile ecosystem companies whose decisions collectively affect billions of people around the world. RDR annually compiles the Corporate Accountability Index, which Telefónica led in 2022 for the third consecutive year for, among other categories, having "clear and robust policies regarding the collection and use of data" and for being the only company among those analysed with a commitment to respect human rights in the use of Artificial Intelligence.
December 20, 2022
Connectivity & IoT
LPWA and 5G networks enable new IoT solutions
LPWA and 5G networks enable the interconnection of millions of objects and IoT (Internet of Things) devices, such as connected sensors, which send the data they capture or generate to Cloud platforms where it is managed and processed. Thanks to these networks, IoT technology offers new business opportunities in multiple sectors and enables efficient management of resources such as water or energy, among other use cases.

What is LPWA connectivity?

Low-Power Wide-Area (LPWA) networks and 5G connectivity are key to the mass adoption of IoT solutions. The two are complementary and enable the mass use of connected IoT devices when thousands of them concur in a limited area, such as a smart building or connected factory, and up to millions of IoT devices in the same geographical area, such as a city. LPWA connectivity stands out for:

Low power consumption, which guarantees the autonomy of IoT devices for 10 years or more, even when running on batteries.

Long range, allowing sensors and actuators to be installed in remote locations or difficult-to-access terrain, such as crops in rural areas. LPWA also penetrates physical obstacles well, both indoors (such as basements or garages) and outdoors.

Another key aspect is that the cost of the hardware required to connect IoT devices to LPWA is relatively low, which makes deployment cheaper and simpler and provides an additional driver for mass adoption of IoT solutions.

An important difference between LPWA and 5G is that LPWA, unlike 5G, is not designed to transmit large amounts of data, such as that needed to make a voice call or stream video. Instead, LPWA is very efficient and gives connected devices the ability to send small data sessions for years: measurements of water consumption, soil moisture, temperature or gas pressure, to name a few examples.
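To give a feel for how small those data sessions can be, here is a hypothetical sketch (the field layout and meter values are invented, not a real NB-IoT application protocol) of packing a water-meter reading into a few bytes, the kind of tiny uplink payload LPWA networks are designed for:

```python
import struct

def pack_reading(meter_id: int, litres: int, battery_pct: int) -> bytes:
    """Pack a cumulative meter reading into a compact binary payload.

    ">IIB" = big-endian: 4-byte meter id, 4-byte litres, 1-byte battery %.
    """
    return struct.pack(">IIB", meter_id, litres, battery_pct)

payload = pack_reading(meter_id=1042, litres=183_450, battery_pct=97)
print(len(payload))  # 9 bytes: far smaller than a voice or video stream
```

Nine bytes a day for a decade is the kind of traffic profile that lets an LPWA device run for years on a battery.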
As noted, LPWA and 5G are therefore complementary connections, and both play a key role in the mass adoption of IoT.

NB-IoT, the LPWA and 5G connectivity from Telefónica Tech

Telefónica Tech is committed to providing IoT technology with robust, secure and efficient connectivity through two communications technologies that fall under the umbrella of LPWA and 5G communications:

NB-IoT (Narrow Band-IoT) for mass deployment of sensors, meters or IoT telemetry.

LTE-M (LTE for Machines) for uses requiring mobility, such as fleet management.

Both connections support 5G and LTE and benefit from features such as the end-to-end security and authentication offered by LTE, among other advantages.
November 30, 2022
Connectivity & IoT
New business opportunities using Internet of Things (IoT)
Internet of Things devices and sensors enable the sensorisation of the environment: the fusion of the real and digital worlds. Thanks to IoT, smart devices generate a large amount of data that can be captured and processed with technologies such as Cloud, Big Data and Artificial Intelligence, allowing better decisions to be made and connected objects to act automatically and accurately, without the need for human intervention.

The development of technology has always revolutionised the business sector in one way or another. In the case of the Internet of Things, its use within companies' internal organisation can be a revolution when it comes to optimising work, production or processes. The connectivity provided by IoT devices linked to Cloud platforms makes it possible to create new business models, products and services, and market strategies. In the business world, this also brings benefits such as optimising the use of assets, improving the customer experience, saving on operating costs and creating jobs.

IoT devices generate a wealth of information that can be captured and processed with technologies such as Cloud, Big Data and Artificial Intelligence, helping to make data-driven decisions. On the one hand, this technology allows companies to identify their weaknesses and strengths, since the data provided by these devices reveals their customers' activity and even their interests. The advantage is that the company can adapt its products or services to the specific needs of each customer. It also makes it possible to analyse the strengths and weaknesses of employees in order to make smarter decisions and distribute work more efficiently.

The remote monitoring provided by IoT reduces operating costs by automating internal and external processes. It also enables available resources to be redeployed in a way that is more consistent with business objectives.
Examples of IoT technology in industrial and business environments

IoT technology, in combination with technologies such as Big Data and Artificial Intelligence, and depending on the needs and characteristics of each organisation, is already being used successfully in different market sectors, such as:

Logistics: installing a connected device in a company's distribution vehicles provides real-time information on the use and status of each vehicle and, thanks to GPS, its precise location at all times. This enables the optimisation and protection of personnel, vehicles and cargo, as well as the optimisation of delivery routes. Company logistics also improve, as coordination with distributors and suppliers benefits.

Agri-food: the use of connected IoT sensors and actuators in the agriculture and livestock sectors improves productivity, efficiency and decision-making based on data captured by the sensors and machinery that make up the value chain, forming smart agriculture. The data collected allows production models to be updated more frequently, shortening planning and improvement cycles, while historical data reveals trends, patterns and opportunities for improvement, combined with information captured by moisture and soil sensors and by satellite and drone imagery.

Tourism: it allows both public and private entities to have a complete view of the anonymised behaviour of tourists, enabling them to anticipate their needs, adapt to their demands and preferences, and provide and improve the tourism offer.
Healthcare: the sector faces major challenges, in particular an ageing population and demographic growth, which make it necessary to seek efficiency in the delivery of healthcare services while maintaining economic sustainability and offering a better patient experience.

Industrial: IoT sensors can capture information from any process and turn it into value through the application of Big Data and Artificial Intelligence. LPWA and 5G Narrow Band (NB-IoT) connectivity enables wireless, connected factories, improving employee productivity and safety, delivering savings and efficiencies from IoT sensor data analytics, and supporting predictive maintenance for rapid response and planning, augmented reality environments and digital twins.

Energy and utilities: IoT enables smart management of essential resources such as water, energy and gas, as well as of technical infrastructures, for greater operational efficiency and reliability across a large part of their processes, which in turn reduces environmental impact and supports new business models.

Retail: to ensure a better, personalised and consistent shopping experience across all channels and to stand out from the competition. Some of the main trends in the sector include localisation and hyper-personalisation of the shopping experience, improving the experience in physical shops and commercial spaces, transforming certain locations or points of sale into distribution centres, and home delivery services to satisfy online sales.

These are good examples of what IoT can offer to make business environments more efficient and productive. The big challenge facing companies that implement IoT technology is its control and management: employees must be trained in the functions of these new devices in order to make the most of them and avoid errors in their use.
In this way, the installation of this technology also creates new jobs. In fact, new courses and master's degrees specialising in the operation of IoT are gradually being established. All this without losing sight of the main objective: to make daily life easier for the people involved in the different processes. After all, the human being is at the heart of digitalisation and IoT transformation.
November 29, 2022
Cyber Security
Trending Techies meetup: The Cyber Security Defence Line
A few days ago we held the meetup "The defensive line in Cyber Security", a face-to-face event organised at the Ironhack space in Matadero Madrid through Telefónica Tech's Trending Techies initiative.

The meetups around the Cyber Security Techies and Data Techies communities create spaces for conversation and networking in a relaxed atmosphere (with some snacks) between professionals, students, companies and the public interested in new-generation digital technologies such as IoT, Cloud, Big Data, Artificial Intelligence, Blockchain and Cyber Security.

On this occasion, the meeting was dedicated to the role of the Blue team as the first line of defence before, during and after a cyber security incident. The master of ceremonies was Martiniano "Marty" Mallavibarrena, Global Head of Incident Response at Telefónica Tech; or, for that matter, our "SWAT chief" when it comes to intervening quickly in a cyber security incident. During the meeting, the speakers shared their professional experience and knowledge through three cases that brought together the technologies, roles, techniques and professions involved in detecting, preventing and recovering from a cyber-attack.

Intelligence (human and artificial) against cyberthreats

Álvaro García-Recuero, senior researcher at the Barcelona Supercomputing Center (BSC), spoke about federated learning as an alternative for training AI models in a secure, private and robust way. In his case, it was applied to a project for classifying sensitive content on the internet that is able to learn and protect itself from malicious attacks or user errors. Álvaro explained how this type of technology can also be applied to cyber security, for example to detect or prevent cyber-attacks, and addressed the need to use Artificial Intelligence in a "responsible and ethical" way, taking advantage of machine learning's capacity to protect us from cyber threats.
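The core idea behind the federated learning approach Álvaro described is that clients train locally and share only model parameters, never raw data. A minimal toy sketch of federated averaging (FedAvg) with a one-parameter model, not his actual project code:

```python
def local_update(weight: float, data: list[float], lr: float = 0.1) -> float:
    """One local gradient step fitting a mean estimator to private data."""
    grad = sum(weight - x for x in data) / len(data)
    return weight - lr * grad

def federated_average(weights: list[float]) -> float:
    """The server aggregates client weights without ever seeing their data."""
    return sum(weights) / len(weights)

# Three clients, each holding data that never leaves the device
clients = [[1.0, 2.0], [3.0, 5.0], [2.0, 2.0]]
w = 0.0
for _ in range(100):  # communication rounds
    w = federated_average([local_update(w, d) for d in clients])
print(round(w, 2))  # converges toward the global mean, 2.5
```

Real systems add secure aggregation and defences against the malicious updates Álvaro mentioned, but the privacy property is already visible here: only `w` crosses the network.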
Artificial Intelligence can also be applied in cyber security, both to attack and to defend. We must assume, of course, that these techniques and technologies are also used by "the bad guys", so their attack tools are getting "better and better at doing evil", in Marty's words.

Silvia Hernández, Threat Hunter at Telefónica Tech, dedicated her talk "Studying threats to defend ourselves against them" to an intrusion related to Conti and some of its malicious artefacts, explaining how human analysis combined with machine learning and EDR-type systems makes it possible to extract information and to analyse, block and respond to complex threats and attacks, such as ransomware attacks that hijack data. Although a relatively recent role, the threat hunter has become a figure without which "today's cyber security would be totally chaotic".

Cyber security policy for SMEs

Lastly, Eugenio Martín, risk advisor at Jori&Tech, focused his presentation on preventive cyber security for small and medium-sized companies, and more specifically on educational centres and start-ups: two examples of organisations that are highly exposed and vulnerable, both because they have a very large "attack surface" (as happens when hundreds of devices and young people are combined in a school) and because of their lower technical and financial capacity to protect themselves against a cyber-attack and deal with its consequences.

"We are witnessing ransomware attacks that in just three days, over a weekend, are capable of completely hijacking complex computer systems of all kinds of organisations, educational institutions and businesses."

Eugenio presented the 'cyber insurance policy', or cyber security policy, for SMEs and organisations: a resource that is still little known but essential, especially for small and medium-sized companies which, in general, are less prepared for common cyber-attacks such as ransomware, phishing or theft of credentials and information.
The cyber policy focuses both on prevention (audits, implementation of basic resources, etc.) and on remediation (coverage, risk management, incident intervention, etc.) when all else fails. The problem is that cybercriminals also use these technologies, knowledge and intelligence to increase the "firepower of their attacks", which, as Marty explained, can be "impossible to stop" when there are no active cyber security measures in place.
November 21, 2022
AI & Data
World Energy Saving Day: Efficiency to drive progress
The 21st of October is World Energy Saving Day. Over the last ten years, this day has raised awareness of the impact that energy consumption has on the environment and natural resources, in order to enable more efficient use of energy and offer "universal access" to "affordable, reliable and modern" energy.

Energy is one of the pillars of any civilization, and there is a relationship between energy and progress. In fact, the Kardashev scale measures the degree of technological evolution of a civilization based on the amount of energy it uses; or rather, based on its ability to harness the energy resources at its disposal, both on and off Earth. According to the latest calculation, humanity is a Type 0.7 civilization on the Kardashev scale. That means we can still make much more effective use of the power available in our environment.

Energy efficiency to accelerate progress and sustainability

However, while energy consumption implies progress, its use also requires limited natural resources, so energy is usually harnessed at a high economic, social, and environmental cost. This trade-off can be resolved by promoting energy efficiency: using as little energy as possible to get the intended result. Energy efficiency reduces environmental impact and carbon footprint while boosting "economic growth, human development and environmental sustainability", says the World Energy Forum.

Photo: Erik Dungan / Unsplash

Digitalization for efficient use of energy resources

Digitalization is one of the levers at our disposal to increase energy efficiency. The convergence of technologies such as the Internet of Things (IoT), Big Data, and Artificial Intelligence, among others, makes it possible to better manage energy from production to consumption, for example:

Improving the management of energy and natural resources such as water, gas, or electricity, achieving enhanced operational efficiency and process reliability.
Adding an additional layer of intelligence to infrastructures also allows for the proper distribution and use of these resources, and reduces losses.

Optimizing energy production with conventional and clean energy sources (such as wind and solar) by incorporating sensors and smart hardware at different stages of production, and by applying predictive production systems and solutions for integrating distributed energy resources, among other possibilities.

Reducing energy consumption in homes, public and commercial spaces, and businesses by sensing the environment, taking the context into account and learning from user habits. Monitoring energy consumption enables data-driven energy saving strategies and the design of accurate, customized savings plans.

Technological advances that improve efficiency and sustainability in digitization

Digital solutions of this type can improve energy efficiency at different levels of energy resource management. Telefónica Tech also develops most of these solutions using ultra-efficient technologies with lower resource consumption, reducing their environmental footprint. Thus, the products and services in our portfolio are designed to reduce their impact and promote green digitization in any sector, field, and activity.
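The Type 0.7 rating quoted earlier comes from Carl Sagan's interpolation of the Kardashev scale, K = (log10(P) − 6) / 10, where P is the power a civilization harnesses, in watts. Humanity's current consumption, roughly 2×10^13 W (an order-of-magnitude estimate, not a figure from this article), gives:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10 P - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

print(round(kardashev(2e13), 2))  # ~0.73: still far from a full Type 1
```

A Type 1 civilization would harness about 10^16 W, so the gap between 0.7 and 1 represents nearly a thousandfold increase in usable power, which is why efficiency matters so much on the way there.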
October 20, 2022
AI & Data
How digitalisation makes it possible to accurately identify and meet the needs of an organic cherry farm
Before tackling the digitalisation of their organic cherry orchard, the Vicente brothers, caretakers of the Frutas Mifra cherry farm, had to ride around the entire farm on a quad bike to open, close, check for leaks and supervise all the irrigation valves. Almost 40 valves in total had to be opened and closed by hand, one by one, and then monitored to keep track of the amount of water reaching each tree. A laborious task that required time, effort and energy: Frutas Mifra's cherry orchard, located in the region of Codos, Zaragoza, is extensive, has a significant slope and includes areas of difficult access that complicate the care of the fruit.

Now the whole process is carried out remotely, from the farm's office or from a mobile phone. This allows the brothers to know at all times, and attend to from anywhere, the changing needs detected by the connected sensors, "even while you are doing other business, attending to personal matters or working on other tasks in the field", as they explain in the following video:

Connecting farmers to data

This remote management is possible thanks to IoT sensors and actuators and 5G Narrow Band connectivity (NB-IoT networks), used to configure a smart irrigation system that allows accurate control of the amount of water reaching each cherry tree, which is key to determining the taste of the fruit. Smart irrigation also detects over- or under-watering due to terrain, soil condition or weather, or due to leaks and losses.

Frutas Mifra uses the technological solution resulting from the partnership between Spherag and Telefónica Tech to better understand what is happening on its farm. Thanks to this knowledge, each cherry tree can be supplied with the water it needs to achieve a harvest of organic cherries with the quality and quantity demanded by customers in Abu Dhabi, Germany, Spain and France.

Organic cherry growing requires attention, time and resources.
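The over- or under-watering detection mentioned above can be sketched as a simple threshold check on soil-moisture telemetry. This is a hypothetical illustration (valve names, moisture bands and messages are invented, not the Spherag platform's logic):

```python
def check_valve(moisture_pct: float, low: float = 25.0, high: float = 45.0) -> str:
    """Flag a valve whose soil-moisture reading falls outside the crop's band."""
    if moisture_pct < low:
        return "under-watered: open valve or inspect for blockage"
    if moisture_pct > high:
        return "over-watered: close valve or inspect for leak"
    return "ok"

# Readings as they might arrive from NB-IoT soil sensors
readings = {"valve-03": 18.0, "valve-17": 52.5, "valve-22": 33.0}
for valve, pct in readings.items():
    print(valve, "->", check_valve(pct))
```

Production systems refine this with terrain, weather and crop-stage data, but the principle of acting per valve, rather than per farm, is the same.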
Each tree is exposed to external factors that are difficult to predict, such as the weather, and needs attentive care day and night, every day. This is particularly important during the months between blossoming and the delicate period of fruit set until harvesting. It is the only way to achieve a bountiful harvest of perfect cherries.

Going digital for a more efficient, sustainable and competitive agriculture

Agriculture is a strategic sector exposed to numerous challenges: population growth, water scarcity, generational replacement, rising costs of energy, fertilisers, nutrients and pesticides, loss of competitiveness, and more. On top of this comes the growing impact of the loss of arable land and of the weather anomalies resulting from climate change. According to Ecologistas en Acción, agriculture contributes up to a third of total greenhouse gas emissions, between direct and indirect emissions, to climate change.

According to FAO, agriculture will need to produce almost 50% more food, fibre and biofuels than in 2012 to meet global demand by 2050.

How digitisation is helping the agricultural sector to become more resilient and adapt to climate change

For all these reasons, agriculture urgently needs to address its digital transformation in order to:

Increase productivity and shorten crop cycles to meet demand, save costs and be more competitive.

Make efficient use of scarce resources such as water, and reduce the use of fertilisers and pesticides to make crops more sustainable and environmentally friendly.

Save energy and fuel in the production, supply and logistics chain to reduce costs and greenhouse gas emissions.
According to CaixaBank Research, the agricultural sector in Spain accounts for more than 82% of water use.

In this sense, new-generation digitalisation technologies such as IoT (Internet of Things) devices and sensors, drones, 5G connectivity, Big Data and Artificial Intelligence, among others, already make smart and precision agriculture possible across most of the crop cycle. Together, these technologies already make it possible to:

Gain a better understanding of the state of crops thanks to Artificial Intelligence and the combined use of data captured by IoT sensors and other sources of information, such as weather forecasts or aerial, satellite and drone images, to anticipate and know at all times what is happening in the field.

Incorporate useful, valuable data into decision-making and have powerful agronomic management tools to prevent pests and crop diseases and to reduce the use of pesticides, fertilisers and water by applying resources precisely and selectively.

Act with agility and adapt production processes to better manage economic, material, human and environmental resources, to increase yield and operational efficiency, and to react to unforeseen events and sudden changes in the environment.

Photo: Mae Mu / Unsplash
October 3, 2022
Cloud
AI & Data
World Environment Day: Green digital transformation as a lever for change
The UN has been celebrating World Environment Day on 5 June every year for 50 years. It is an initiative that seeks to raise awareness among the world's population of the need to respect, care for and protect nature.

For this year's edition, the United Nations Environment Programme (UNEP) has revived the slogan "One Earth". It is the same slogan used for the first edition, in 1972, and it is a message that in 2022 is more powerful than ever: it invites us to live sustainably and reminds us that this planet is our only home and that we are responsible for safeguarding its resources.

In this context, the technologies that underpin the digital transformation of companies —connectivity, the Internet of Things (IoT), Big Data and Artificial Intelligence, etc.— have proven their potential as a lever for change and their ability to resolve many of the environmental challenges we face. Among other examples:

They allow us to improve the efficiency and sustainability of production processes, reduce energy and water consumption, and cut emissions of polluting and greenhouse gases such as CO2.

They help us to improve the management of natural resources and waste, optimise transport logistics and mobility, promote renewable energies, and encourage teleworking and the circular economy.

However, these same challenges also force us to analyse the environmental impact of adopting these technologies in our businesses and processes, and to identify possible improvements to achieve a green digital transformation with a genuine positive impact on our environment and society.

Technologies such as 5G, the Internet of Things (IoT), Big Data, Artificial Intelligence and Blockchain, among others, can help reduce CO2 emissions by 15% by 2030.
Source: European Commission

Sustainable technologies for (real) green digitalisation

With the aim of developing technologies that promote green digitisation, most of the technological solutions Telefónica Tech offers for the digital transformation of companies generate direct, significant and measurable environmental benefits for our customers or their users. These products are identified by the Eco Smart seal, certified by AENOR, an independent entity. This distinctive sign shows visually, with an icon and a colour code, whether the product or service is associated with:

Energy savings
Water consumption reduction
CO2 emission reduction
Circular economy promotion

Customers of Telefónica Tech products and services with the Eco Smart seal therefore generate significant environmental benefits in their production process or daily activity, and also obtain a financial return from digital transformation by developing their business in a more efficient and sustainable way.

How Eco Smart solutions help the environment

Eco Smart services are already being used by companies of different sizes in key sectors, such as tourism, industry, logistics and distribution, retail and banking, to reduce resource consumption and CO2 emissions. Here are some examples of technological solutions that bring direct, significant and quantifiable environmental benefits:

Smart corporate spaces and workplace and teleworking solutions for companies and public administrations that reduce commuting: workplace digitalisation avoids millions of tonnes of CO2 emissions.

The sensorisation of a building with IoT devices that take the context and environment into account (occupancy, weather conditions, date and time, etc.) to reduce lighting and air conditioning consumption.

Smart, connected meters that reduce leakage and water losses in the public network, in irrigation communities or in households.
A remote medical care service that saves patients from having to travel to the medical centre, saving fuel (energy) and reducing pollutant and greenhouse gas emissions.

Monitoring and predictive maintenance, which make it possible to know the state of equipment at all times, anticipate incidents and predict machinery repairs, extending its useful life and contributing to the circular economy.

Our customers avoided more than 8.7 million tonnes of CO₂ with Eco Smart solutions last year.

Solutions for aligning financial, social and environmental sustainability

It is essential for companies and public administrations to opt for digital transformation solutions with a certified environmental benefit, such as the Eco Smart label, both to ensure their financial resilience and sustainability and to reduce their environmental impact. It also sets an example, helping them achieve ESG goals such as decarbonisation and emission reductions and meet the United Nations Sustainable Development Goals (SDGs) agreed for their sector.

Our Eco Smart solutions saved our customers 8.7 million tonnes of CO2 emissions in 2021, equivalent to planting 158 million trees. Our goal is to avoid 12 million tonnes of CO₂ emissions by 2025. One thing remains unchanged: our purpose that the Eco Smart technologies underpinning digital transformation bring our customers an economic benefit aligned with social and environmental sustainability.

Featured image: Mert Guller / Unsplash
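As a back-of-the-envelope check of the figures quoted above, equating 8.7 million tonnes of CO2 to 158 million trees implies a per-tree absorption of roughly 55 kg of CO2, which is in the range commonly cited for a tree's absorption over its first decades (the per-tree figure is our inference, not stated in the article):

```python
co2_saved_tonnes = 8_700_000      # CO2 avoided by Eco Smart customers in 2021
trees_equivalent = 158_000_000    # equivalent trees planted, per the article

# Implied absorption per tree, in kilograms of CO2
kg_per_tree = co2_saved_tonnes * 1000 / trees_equivalent
print(round(kg_per_tree))  # ≈ 55 kg of CO2 per tree
```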
June 5, 2022
AI & Data
Smart football stadiums: the world's greatest show, made even better
Stadiums are where the magic of football happens. They are the meeting place where fans and rivals experience the thrill of competition. Passion and joy, and also sorrow, are shared in the stadiums with the players and the team, with thousands of fans and even with millions in front of a television.

That is why going to a stadium to watch a football match is always exciting: an experience that technology, applied to the digitalisation of stadiums (smart stadiums), can turn into a memorable one, increasing the attractiveness of sports venues for fans, viewers and sponsors.

Innovative technologies such as 5G connectivity, the Cloud, Big Data and Artificial Intelligence, and the Internet of Things (IoT) offer football clubs everything they need to improve and extend their relationship with their fans, attract new audiences and increase the number of spectators in their stadiums. They also allow clubs to respond more nimbly to the demands of fans and advertisers, and to adapt to new football consumption habits, especially among young people.

Technology to engage viewers and attract sponsors

The digitalisation of stadiums and sports facilities allows clubs to provide services and entertainment ecosystems to attendees. It also enables new dynamic and proximity marketing actions to increase audience participation and loyalty at events or with the club itself. For example, Wifi or 5G connectivity inside the stadium lets spectators connect with their mobile phones via the club's app to access exclusive content in high definition and in real time: camera views and additional shots without delay, on-demand replays, interactive content and even live 360° virtual reality, or overlaid statistical and player performance data.
Artificial Intelligence can automate sports broadcasts, enabling clubs to offer spectators a personalised mobile experience: the ability to choose what they want to watch and what they want to share, so that they don't miss out on what is happening inside and outside the stadium. This combination of high-speed connectivity and AI also enables new advertising media and formats to capture the attention of spectators, obtain a better economic return from sports facilities and communication channels, and expand the sponsorship portfolio.

Photo: Thomas Serer / Unsplash

Digitalisation also makes it possible to offer 360° video scoreboards and turn stadiums into generators of visual and acoustic spectacle in tune with the development of the match and the emotions of the crowd, both during football matches and at events of all kinds, such as meetings, cultural events or concerts.

Sensing the environment to connect with the crowd

The use of IoT sensing and Artificial Intelligence also allows clubs to know in detail the behaviour and profile of those attending the venue and its surroundings. In this way, clubs obtain insights about tourists or visitors to take into consideration when making operational, strategic or business decisions. For example, to decide the best date and time to hold a specific event or launch a promotion, and what type of promotion; to know with which audiences, at what times or in which areas of the stadium different commercial actions work best; or to optimise access control and the flow of attendees, improving their safety and that of the stadium.

Sensorisation also increases the operational efficiency of sports facilities when combined with high-speed, low-latency connectivity (5G) and Cloud and Edge Computing technologies.

Autonomous and automatic coverage of matches in smart football stadiums

This combination of technologies enables autonomous and automated coverage of sporting events.
Connected remote cameras and artificial intelligence algorithms can perform real-time game analysis, tag plays, and orient and switch cameras to produce a signal suitable for live broadcasting on television, social networks or digital platforms. In addition:

It facilitates the reporting and broadcasting of events, and even eliminates the need to deploy mobile TV units, streamlining operations and reducing broadcast costs while improving efficiency and sustainability.

Managed broadcasting from the Cloud platform allows recording and broadcasting from multiple locations and in near real time, thanks to high-speed data transfer. Through the Cloud, production can be done from anywhere in the world, with the best specialists and the best resources. This means an improved experience for the viewer, with a greater number of viewing options, shots and effects to enrich broadcasts and make them more personal.

The Cloud platform stores a historical audiovisual archive that remains available and easily accessible for the club, the media and the fans, removing the limits on the capacity to store and manage all the content generated by the smart stadium.

The possibilities offered by digitalisation are many, for football clubs, their fans and their spectators. And also for football itself: the digitalisation of football stadiums turns the world's greatest spectacle into a memorable experience.
May 25, 2022
AI & Data
The artificial intelligence Dall-E turns any idea expressed in a sentence into an image
Generating photorealistic images from naturally expressed concepts such as "an astronaut riding a horse" or "a bowl of soup that looks like a monster". Or anything else you can imagine, no matter how surreal. That is what Dall-E 2 does, the latest evolution of the artificial intelligence (AI) system announced by research and development company OpenAI, which counted Elon Musk among its founders.

It is true that we have seen similar apps and AI systems before that generate images from text or keywords. However, Dall-E's latest demo generates images that leave no one indifferent, due to their quality and realism as well as their dreamlike, surreal style.

The name Dall-E combines the names of the Pixar character Wall-E and Salvador Dalí, the master of surrealism.

The tool is available only through registration, but its results can be seen on the OpenAI website and on Instagram.

Some images generated by OpenAI's AI model Dall-E

The company has shared examples of the images Dall-E produces when concepts, features and styles are combined into a short phrase. Thus the phrase "a bowl of soup that looks like a monster made of play dough" would result in this image and its variants.

"A bowl of soup that looks like a monster made of play dough", according to Dall-E. Image: OpenAI

Whereas "a bowl of soup that looks like a monster woven out of wool" would result in this other image and its variants.

"A bowl of soup that looks like a monster woven out of wool". Image: OpenAI

Different combinations can be tested on the OpenAI website, and this video shows other examples and explains a bit more about what Dall-E is and how it works.

Dall-E's neural network "has learned the relationship between images and the texts that describe them," the researchers explain. "It not only understands individual objects such as a horse or an astronaut," they say, but has also learned "how objects and actions relate to each other."
This is how Dall-E 'knows' how to realistically represent an astronaut riding a horse.

Image generated by the Dall-E artificial intelligence when asked for an astronaut riding a horse. Image: OpenAI

To generate the image, Dall-E uses a process called "diffusion", which starts from random patterns of dots and modifies them until the desired result is obtained, producing "images that have not existed before".

Dall-E is an example "of how human imagination and systems can work together to create new things, amplifying our creative potential"

Dall-E aims to be an example of "useful and secure" AI

According to the researchers, the development of Dall-E fulfils three essential premises for the development of a "useful and secure" AI:

It allows the audience to express themselves in a way that was not possible before.

It reveals whether the AI system "understands" what is asked of it in writing, or whether it just repeats what it has learned.

It helps to understand how the AI system sees and understands the world.

Compared to the first version of Dall-E, announced just over a year ago, Dall-E 2 adds new features, as well as increasing the comprehension, quality and complexity of the images and the speed at which they are generated. It can:

Start from an existing image and create complex variations, such as changing the angle of a portrait and its style.

Edit an existing image to replace one object with another, or to add an object that does not exist in the original image, taking into account style, shadows, reflections and textures. It can even change the meaning of the image.

However, limitations in the use of Dall-E lead to bias

In addition to limiting its availability (the tool will be available to a small group of people, mainly AI researchers and some non-commercial artists), OpenAI has implemented some restrictions on the use of its new artificial intelligence model.
These restrictions are intended to avoid harmful or offensive use of the tool by preventing it from generating violent, sexual or politically charged images. It also prevents the generation of images that include known or recognisable people.

Avoiding bias and stereotyping is one of the great challenges for artificial intelligence

These limitations can, however, encourage bias in AI models such as Dall-E. OpenAI researchers found that removing sexual content to prevent Dall-E from producing adult images causes it to generate fewer images of women overall. This is not a good thing, reports the publication Vox, "because it makes women invisible." But this is not a problem unique to Dall-E: avoiding bias and the persistence of stereotypes is now one of the biggest challenges "for the entire AI developer community."
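The "diffusion" process described in this article can be illustrated with a toy sketch: start from random noise and repeatedly refine it toward a desired result. This is purely illustrative; the refinement step below is a hand-written stand-in, not the learned neural denoiser that Dall-E 2 actually uses.

```python
import numpy as np

# Toy sketch of diffusion-style sampling: begin with random "dots"
# and iteratively modify them until the desired result emerges.
rng = np.random.default_rng(0)

def toy_denoise(x, target, strength=0.1):
    # Stand-in for a learned model: nudge the noisy image toward a target.
    return x + strength * (target - x)

target = np.full((8, 8), 0.5)   # the "desired result" (a flat grey image)
x = rng.normal(size=(8, 8))     # start from random noise

for step in range(200):         # iteratively refine the noise
    x = toy_denoise(x, target)

print(np.abs(x - target).max())  # residual noise after refinement
```

In a real diffusion model the target is not known in advance: a neural network trained on image-text pairs predicts, at each step, how to remove a little noise, guided by the text prompt.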
April 21, 2022