
How to Unlock the Value of Generative AI Responsibly

Marcel Deer
Marketing Journalist

The World Economic Forum (WEF) recommends a use-case-based approach for AI deployment and advocates for responsible scaling strategies. So, what does this look like for leaders hoping to leverage the benefits of AI quickly?

The WEF’s AI Governance Alliance aims to address AI’s challenges and opportunities. It has published a briefing paper series offering collective insights to steer AI’s development, adoption, and governance. The first paper, Presidio AI Framework: Towards Safe Generative AI Models, concentrates on risk, balancing innovation, safety, and ethics. The second is Unlocking Value from Generative AI: Guidance for Responsible Transformation.

In this article, we’ll draw out the premise of paper two, produced in collaboration with IBM: how to unlock AI’s value responsibly.

Assessing Each Organisational AI Use Case for Its Benefits to Customers, the Workplace, and Society

Organisations are quickly deploying AI to improve products, services, and internal processes and to provide hyper-personalised customer experiences. The trend is to start small with AI, employing use-case-based approaches: experiment, then roll out what works.

“In many instances, generative AI experiments may yield unexpected learnings about where value, and often also cost and challenges, truly lie.”

The WEF says numerous factors must be assessed before an organisation moves AI from concept to implementation. Although evaluation criteria will differ for each organisation, they will orient around three “evaluation gates”: business impact, operational readiness, and investment strategy.

Business Impact

Key takeout - align an AI strategy with organisational goals and revenue objectives as well as budgets and the impact on resources, including the workforce.

As with any transformation strategy, an AI use case must be aligned with overarching goals and objectives. Impacts can then be assessed, per the WEF, in three areas: scaling human capability, “raising the floor” by increasing organisation-wide access to new technologies, and “raising the ceiling.” Whilst this latter point may apply more to ground-breaking advancements using AI, such as in medicine or conservation efforts, it also applies to businesses, as organisations use AI to create new products and services or significantly increase their competitive edge.

Other considerations include the opportunity cost of not being an early mover and the impact on reputation as a “pioneer of innovation.” Employees are also attracted to working for organisations that use AI to automate tiresome administration, freeing more time in the workday for creativity and tasks that require expertise.

Operational Readiness

Key takeout - consider technical talent and infrastructure, cybersecurity, data, and AI model tracking, as well as the structural ability to govern and manage risk.

Leaders must assess whether their internal talent and technological infrastructure, including their raw data, are ready for AI integration.

“Before organizations expose generative AI to their data, data curation is essential to ensure it is accurate, secure, representative and relevant.”

Moreover, human feedback loops are necessary. The WEF refers to fine-tuning AI models, but this is just as relevant when using off-the-shelf and public AI models, to mitigate the risks associated with AI output.

“Organizations will be held responsible for the outcomes of their AI technology and must, therefore, ensure compliance with the global complexity of regulation and policies, as cited in Generative AI Governance: Shaping the Collective Global Future. This will require new skills and roles for accountability, compliance, and legal responsibilities as a multistakeholder approach.”

Notably, the fast pace of AI evolution demands that guardrails and downstream implications be continually re-evaluated.

Investment Strategy

Key takeout - balance AI development with its potential, time to value, and a “complex regulatory environment.”

AI investments can be hefty, and the ROI is somewhat unpredictable, given potential future regulation and the evolution of the technology. Part of assessing a use case is considering whether to use open-source or third-party AI models or to develop in-house. The decision must align with “use case, speed to market, requisite resource investments, including capital and talent, licensing and acceptable use policies, risk exposure and competitive differentiation offered.”

Organisations should take into account the costs of monitoring AI compliance and mitigating risks, as these factors can demand significant and continual expertise. An existing workforce will require a change management strategy and may need upskilling. New employees with the skills to work alongside AI may need to be hired. 

Adopting AI Responsibly

Use case assessment and selection is just the first part of AI adoption. Deploying AI responsibly demands weighing the benefits against “downstream impacts,” such as the effect on a human workforce, sustainability, and the risks of AI, including hallucinations. Implementation and subsequent scalability rely on maximising opportunities while mitigating risks.

“From perpetuating biases, introducing security vulnerabilities and spreading misinformation – causing severe reputational damage – irresponsible generative AI applications and practices not only threaten the organization itself but can also negatively impact society at speed and scale.”

Because skills and workloads are changing, organisational structures also need to “evolve at pace,” but this creates an opportunity to evaluate and upskill workers.

Accountability

Key takeout - the WEF recommends multistakeholder governance with distributed ownership.

Per the WEF, multistakeholder governance sees “legal, governance, IT, cybersecurity, human resources (HR), as well as environmental and sustainability representatives requiring a seat at the table to ensure responsible transformation.”

Organisations will need to self-regulate and may nominate AI governance leads. AI requires substantial human oversight for responsible application, to address and mitigate risk, as well as to ensure the quality of AI’s output. 

Trust

Key takeout - transparency and a “cohesive narrative” are essential.

Communication is key to establishing trust in AI use. Communications teams and leaders have a significant responsibility to ensure transparency and to educate stakeholders, shareholders, end users, and customers. Trust stems from internal accountability and advocacy, from the top down and then out into the wider organisational environment. It’s critical that a culture of ethical use is created and demonstrated.

Scale

Key takeout - scaling effectively requires structures that roadmap and cascade use cases “to extract, realize, replicate and amplify value across the entire organization.”

AI transformation roadmaps must include how use cases will be scaled to achieve the promised benefits of AI. Again, the multistakeholder approach, with accountability, communication, and governance, is essential to transferring value throughout an organisation safely and responsibly. 

Human Impact

Key takeout - change management must be value-based, and it’s essential to ensure workforces are engaged and upskilled.

Leaders must manage the significant impact of AI on human workforces. Depending on the size of the organisation, this may include participation from HR, IT, and other team leads to ensure staff have both the technology and the training to use AI effectively. The WEF says this begins with communication and will require professional development pathways for employees. Change management is crucial in an organisation of any size, and the risks of AI, alongside employees’ fears for their jobs, make this process all the more essential.

Conclusion

Unlocking the value of AI responsibly requires a clear-eyed assessment of the costs and benefits of each use case while mitigating downstream impacts and risks. The WEF is planning future publications as technology advances “in pursuit of artificial general intelligence.” This reflects the evolving strategy and agility required of any organisation seeking to understand and effectively implement a technology that’s developing at an unprecedented pace. The WEF says:

“New technologies driving productivity have always been positioned as repurposing workers to higher value work, which has traditionally required human oversight and creativity.”

The sheer ability of new AI models to replicate “human skills and capabilities” will be truly transformational for organisations, not in a single isolated event or period but on an ongoing basis. No matter the size of a business or its leadership team, it’s vital to strategise AI deployment carefully, balancing value with risk, then restructure technologies and processes and reassure, motivate, and upskill workers where appropriate.
