Are CEOs ready for the AI era?

Pam Maynard, CEO, Avanade

Steps CEOs can take on their journey to becoming AI-first leaders.

Over the next decade, generative AI will transform the lives of all of us. But are we ready for it? In the workplace today, there's a significant disparity between CEOs, their senior leaders and the teams most likely to be impacted by AI over how prepared people really are for it.

The majority (56%) of C-suite executives are highly confident that their organisation's leadership understands AI and its governance needs, but only 36% of CEOs share that sentiment. CEOs' lower confidence in AI readiness sits in direct conflict with the pace of AI adoption they want to achieve. Most business and IT leaders (92%) agree they need to shift their organisations to AI in 12 months or less, but CEOs are even more bullish, with half wanting it within the next six months. Yet CEOs are also the least confident that their organisation will harness the benefits of AI faster than their competitors.

Contrary to hyped fears that AI will replace the work humans do, we see a lot to be positive about. Our AI readiness research indicates that the majority of leaders (64%) disagree that AI will replace people's jobs; rather, they expect to increase their headcount (by 9%) in 2024 as AI becomes more pervasive. The impact on jobs should be transformative rather than displacing.

How, then, can CEOs feel more confident about the state of their organisation's AI readiness while also accelerating the pace of innovation? And what's needed to bridge the gap between the leaders driving AI adoption at scale and the people who must make it a reality?

Put people first in the era of AI.

Successful AI adoption keeps its focus on the organisation’s most important asset: its people. By using AI in their day-to-day jobs, people can gain up to three hours per day as AI makes quick work of more mundane tasks. In Avanade’s internal pilot group, users also reported a 50% improvement in collaboration and teamwork, a 40% increase in problem resolution, and a 70% greater likelihood of fostering a creative approach to tasks.

CEOs have work to do in building consensus around what's needed. Most business and IT leaders (63%) believe employees will need some new skills, or a completely new set of skills, to work with generative AI tools like Microsoft Copilot; conversely, 41% of CEOs think employees will need fewer skills because AI copilots will do more of their work. However, fewer than half of employees completely trust augmenting their work with AI, suggesting there's much more work to do to win the hearts and minds of the people using it in the enterprise.

AI is not one-size-fits-all: what works for people in human resources will be different from what works for marketing, finance and IT. It's critical to include stakeholders from across the business, to incorporate their feedback as AI experiments take hold, and to be honest about where AI has fallen short of its intended goals.

Disrupt your organisation with AI but do it responsibly

Most senior leaders say they're already using AI regularly in the workplace, but there is little consensus on whether their people, processes and platforms are using it responsibly, with clearly defined governance.

Just over half (52%) of senior leaders believe their organisation has the human capital and workforce planning processes in place to safeguard roles as generative AI is scaled, and 49% admit they're not very confident that their organisation's risk management processes are adequate for an enterprise-wide technical integration of AI. Confidence also varies widely by industry: the energy and banking sectors are the most confident, with government at the bottom of the list.

The first step to readiness is getting everyone on board with a responsible AI framework that ensures trust and transparency and that the necessary guardrails are in place before AI pilot projects even begin.

Such a framework clarifies your 'why for AI,' pinpointing where AI has the most potential to solve a business challenge and deliver the most immediate impact. Within a responsible AI framework, establish clear guiding principles that translate corporate values into guidelines for AI, including the critical risks not worth taking. Create processes for managing and mitigating risks, set clear performance management objectives, and document all proposed and implemented AI use cases in a centre of excellence that can manage and provision technology resources as needed. Finally, ensure that employee skills and culture are ready to fully embrace AI by reinforcing guiding principles, providing training resources and reviewing ethical considerations.

When it comes to responsible AI, CEOs must realise that their work, and that of their people, is an ongoing journey. They must continually revisit what it means to use AI safely and ethically, sharpening their approach and principles as AI evolves from largely an automation and productivity play today into something much more transformative tomorrow.

Ground your company’s use of AI within your purpose and values.

An articulated purpose helps guide organisations through good times and challenging ones—whether it’s navigating a global health crisis or the rollout of transformative technologies like generative AI. Grounding your organisation’s AI journey in your purpose puts into clear focus what you will and won’t do with the technology.

The ethical and safety considerations of AI will continue to crop up for leaders, but a company’s purpose never wavers. A responsible AI framework includes a set of guiding principles rooted in purpose and values.

Taking the first steps is the way forward

The first few steps an organisation takes with AI are the most critical to its success or failure. CEOs set the tone for how successfully their organisations will embrace and adopt AI, and they must ensure that employees and customers have the support they need to use AI successfully in their jobs.
