Generative AI as part of business strategy and leadership

September 20, 2023

The arrival of Generative Artificial Intelligence (AI) has been a turning point at every level, not only because of the possibilities it offers but also because of its mass accessibility. Practically all major technology companies are currently working on Generative AI solutions, and it is already integrated into several products (e.g. Bard from Google, and ChatGPT in Microsoft's Bing and Office).

The defining feature of Generative AI is its ability to create new content, providing a solution or result for problems and situations it has not been specifically taught or trained on.

The question for businesses should not be whether to adopt Generative AI, but how to adopt it.

We have already talked about the paradigm shift that Generative AI represents for both society and industry. A study by KPMG states that 77% of business leaders believe that Generative AI is the emerging technology that will have the biggest impact on business in the next 3-5 years, ahead of other technological capabilities such as advanced robotics, quantum computing, augmented reality/virtual reality (AR/VR), 5G, and Blockchain.

The study also shows that, despite all the excitement around Generative AI, most business leaders do not feel ready to embrace the technology or realise its full potential: 69% expect to spend the next 6-12 months focused, as a top priority, on improving their understanding of the goals and strategies for adopting Generative AI.

In this article we focus on the issues businesses need to consider when deciding to use Generative AI-based tools, so that they can reap the benefits while managing the risks that come with them.

Guidance for implementing Generative AI-based tools (such as ChatGPT) in companies

The team leading the Govertis Emerging Technologies Competence Centre (part of Telefónica Tech) has observed a significant lack of planning and analysis when organisations decide to incorporate this type of solution into their catalogue of corporate tools. It has therefore designed a roadmap setting out the steps that any entity considering the implementation of ChatGPT or other Generative AI-based solutions should follow.

This is not a decision that can be taken in isolation; it must be made in a considered manner, developing a plan and implementing a governance system that allows technical, legal and reputational risks to be managed.

Along these lines, PwC's study The power of AI and generative AI: what boards should know (2023) highlights the importance of developing a board-level approach to AI, precisely because there are risks that must be supervised and managed at the level of corporate strategy.

69% of business leaders do not feel ready to embrace Generative AI or fully exploit its potential.

Regulation is moving in the same direction. The Directive on measures to ensure a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148 and known as the NIS 2 Directive, addresses the deficiencies of the original NIS Directive in the face of the new challenges arising from the digital transformation of society, although it has a specific scope of application.

Among its new features, it makes the management bodies of essential and important entities responsible for cybersecurity risk management: they must supervise its implementation and can be held liable if their entities fail to comply with these obligations. It even requires the members of the management bodies of essential and important entities to attend training courses on the subject.

Therefore, when integrating a Generative AI solution in the company, in our opinion, the following issues should be addressed:

1. Holistic approach and top-level leadership

AI is not an issue to be addressed by a single department or area of the company. It must be led at the highest level and approached on the basis of a corporate strategy that provides the means to manage the risks arising from its implementation and oversees that management.

2. Laying the necessary foundations

Adopting a Generative AI tool necessarily involves a series of prior steps that lay the foundations for its use: a strategy at management level, the technological capabilities to support such a system, staff training and the configuration of equipment.

Internal communication is also very important, so that all company personnel are aware of and participate in the corporate strategy and the rules of use for these tools.

3. Governance system

The adoption of a Generative AI tool entails multiple risks of different kinds, which must be managed periodically and systematically. This involves establishing the necessary human and technological foundations for decision-making and implementing a governance system to manage all the associated risks.

4. Regulatory compliance

Multiple regulations converge in a governance system, and complying efficiently with this regulatory map is not an easy task. Clear coordination must therefore be established between the various roles responsible for compliance, such as the Data Protection Officer, the Compliance Officer and the information security and cybersecurity director (CISO), and their respective areas of action.

It is worth highlighting at this point the proposed European Regulation on Artificial Intelligence (the AI Act), which establishes different regulatory requirements depending on the type of AI system adopted. For Generative AI solutions, the current text sets out obligations very similar to those required of high-risk systems.

5. Risks to be managed

The main risks that need to be addressed include, without being exhaustive, the following:

  • Cyber security: With the exponential increase in security vulnerabilities and their sophistication, cyber security is at the forefront of risk management.
  • Data protection: It is essential to establish clear rules about the information that may be fed into the Generative AI system, to ensure that personal data is not included (a minimal sketch of this kind of safeguard is shown after this list).
  • Intellectual and industrial property, trade secrets: Beyond compliance with data protection regulations, the company's information is a very important asset whose adequate protection must be ensured when using this technology.
  • Legal liability: The use of Generative AI may raise intellectual property issues due to unauthorised use of content protected by intellectual property laws. In fact, Microsoft has recently announced the extension of its AI customer commitments to include intellectual property claims arising from the use of Copilot.
  • Incorrect or biased information: It is essential to train employees in the use of the Generative AI tools made available to them through clear rules of use that specify in which cases they can be used and how to use and interpret their results.
  • Reputational risks: In the event of a problem that has a direct impact on the reputation of the entity, there must be plans in place to manage the crisis, especially at the communication level.
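By way of illustration, the sketch below shows one very simple way a company might support the data protection rule mentioned above before a prompt leaves its systems: removing obvious personal-data patterns (e-mail addresses and phone numbers) from the text that would be sent to an external Generative AI service. This is a minimal, hypothetical example in Python, not a description of any specific product or of the roadmap discussed here; the patterns and function names are placeholders, and a real deployment would rely on dedicated data-loss-prevention or PII-detection tooling together with the company's own rules of use.

    import re

    # Example-only patterns for obvious personal data; real deployments would
    # use dedicated PII-detection / DLP tooling rather than a short regex list.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\+?\d(?:[\s-]?\d){7,}"),
    }

    def redact_personal_data(prompt: str) -> str:
        """Replace matches of known personal-data patterns with placeholders."""
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"[{label} removed]", prompt)
        return prompt

    # Example: the redacted text is what would be sent to the external service.
    raw = "Draft a reply to jane.doe@example.com, phone +34 600 123 456."
    print(redact_personal_data(raw))
    # -> "Draft a reply to [email removed], phone [phone removed]."

A filter of this kind is only one technical control among many; it does not replace the governance system, training and rules of use described in the previous sections.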

There is no doubt about the possibilities offered by Generative AI at the corporate level, so the questions should focus not on whether or not to embrace the technological advance, but on how to approach it.

It is therefore essential that the governing bodies of companies lead and supervise the adoption and implementation of these tools, developing the appropriate strategy, providing the necessary resources and relying on the advice and supervision of the relevant internal managers and external professionals as required.

Photo by Steve Johnson on Unsplash.
