Baker McKenzie AI Attorney: Get a Chief AI Officer ASAP

Companies absolutely need to appoint a Chief AI Officer (CAIO). That’s the message of Bradford Newman, leader of Baker McKenzie’s machine learning and AI practice in its Palo Alto office. In his opinion, every company needs a CAIO, and that person needs to be a C-suite level executive who reports to the CEO and is at the same level as a CTO, CISO, or CFO. This CAIO would have ultimate responsibility for the oversight of all things AI in the company.

When Newman first began advocating for the position in 2015, people laughed at him or said, “What’s that?”

But, he says, as artificial intelligence spreads across enterprises and raises all sorts of issues around ethics, bias, risk, regulation, and legislation, the need for a CAIO is more urgent than ever.

A Baker McKenzie report released in March, called “Risky Business: Identifying Blind Spots in Corporate Oversight of Artificial Intelligence,” surveyed 500 U.S.-based, C-level executives. These executives all self-identified as part of the decision-making team responsible for their organization’s adoption, use, and management of AI-enabled tools.

The survey found significant corporate misconceptions and blind spots surrounding AI risk. C-level executives overestimated the risk of AI cyber intrusions, but underestimated AI risks related to algorithm bias and reputation. Just 4% of executives called the enterprise risks presented by AI “significant.” More than half considered the risks “somewhat significant.”

In addition, among organizations managing implicit bias in AI tools (such as hiring algorithms) in-house, just 61% have a team in place to up-rank or down-rank data, and 50% said they can override some, but not all, AI-enabled outcomes.

Finally, the survey found that two-thirds of companies do not have a CAIO at all, and only 41% of corporate boards include a member with AI expertise.

Newman is concerned that if C-suites do not put a greater focus on AI, they will be unprepared for coming regulations—to the detriment of their companies.

“We’re at an inflection point where Europe and the U.S. are going to be regulating AI,” he said. “I think corporations are going to be woefully on their back feet reacting, because they just don’t get it—they have a false sense of security.”

Unfortunately, according to Newman, responsibility for AI within enterprises tends to be highly fragmented.

Many constituents within a company are affected by AI and have a claim to its governance; ideally, the CAIO would bring them all together and report to shareholders as well as to regulators and governing bodies.

A CAIO does not need to be an expert on every aspect of AI within their company, just as a CFO does not know every calculation that takes place. Rather, the CAIO would simply be in charge of the much-needed oversight of AI within the enterprise.

“There’s a growing awareness that the corporation’s going to have to have oversight, as well as a false sense of security that the oversight that exists in most organizations right now is enough,” he said. “It isn’t going to be enough when the regulators, the enforcers, and the plaintiffs’ lawyers come—if I were to switch sides and start representing the consumers and the plaintiffs, I could poke giant-sized holes in the majority of corporate oversight and governance for AI.”

If companies do not exercise appropriate caution and oversight when implementing AI across their organizations, they open themselves up to risks of all kinds, from failing to hire the best candidates to decreased profitability to litigation.