Questions Boards Should Ask Management on Agentic AI

The role of the board is to act as a steward for shareholders and to provide oversight that keeps the company competitive and mitigates risk.

In this new world of agentic AI, boards still operate and learn at human speed, but in the near future much of the business will be automated by agentic AI operating at machine speed and scale.

The biggest question the board will need to think about is: where do we insert the “human in the loop”?

How and where do we make sure there is a “kill switch” on automated agents?

How do we put enough monitoring on our automated agents? (Should we have a super-agent overseeing them?)
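To make the kill-switch and monitoring questions concrete, here is a minimal sketch in Python of one way a guard could wrap every agent action. The names (AgentGuard, kill_switch_engaged, audit_log) and the logic are hypothetical illustrations, not a reference to any specific product or to how any company implements this today.

```python
# Hypothetical sketch: a guard that checks a kill switch and logs every
# agent action for monitoring. Names and behavior are illustrative only.

import datetime


class AgentGuard:
    def __init__(self):
        self.kill_switch_engaged = False   # flipped by a human operator
        self.audit_log = []                # feeds a monitoring dashboard or "super-agent"

    def engage_kill_switch(self, reason: str) -> None:
        """Immediately stop all further automated actions."""
        self.kill_switch_engaged = True
        self.audit_log.append((datetime.datetime.utcnow(), "KILL_SWITCH", reason))

    def run(self, agent_name: str, action, *args, **kwargs):
        """Execute an agent action only if the kill switch is off; log everything."""
        if self.kill_switch_engaged:
            self.audit_log.append((datetime.datetime.utcnow(), agent_name, "BLOCKED"))
            return None
        result = action(*args, **kwargs)
        self.audit_log.append((datetime.datetime.utcnow(), agent_name, "EXECUTED"))
        return result


# A "super-agent" could simply be the process that owns this guard, watches
# the audit log, and engages the kill switch when something looks wrong.
guard = AgentGuard()
guard.run("refund-bot", lambda: print("issuing refund"))
guard.engage_kill_switch("anomalous refund volume detected")
guard.run("refund-bot", lambda: print("issuing refund"))  # blocked
```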

If we go back to the first principles of artificial intelligence, we have to be sure that we are working from data that we trust. How do we define the level of confidence and accuracy we need in order to trust the data enough to empower an agent to act?

For example, in a call center where an agentic chatbot handles inquiries, 85% data accuracy may be good enough (historically, call-center data has been far less accurate). However, if we are using agentic AI in our financial systems, 85% accuracy is obviously not adequate.
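One way to picture this is a minimal sketch, in Python, of a hypothetical per-domain confidence policy: the agent acts on its own only when its confidence in the underlying data clears the threshold set for that business area, and otherwise routes the case to a human in the loop. The domains, thresholds, and function names are illustrative assumptions, not recommendations.

```python
# Hypothetical per-domain confidence policy: each business area sets the data
# confidence an agent must clear before it may act without a human.

CONFIDENCE_THRESHOLDS = {
    "call_center": 0.85,   # illustrative: 85% may be good enough here
    "finance":     0.99,   # illustrative: financial systems demand far more
}


def route(domain: str, confidence: float, action: str) -> str:
    """Decide whether the agent acts autonomously or escalates to a human."""
    threshold = CONFIDENCE_THRESHOLDS.get(domain, 1.0)  # unknown domains always escalate
    if confidence >= threshold:
        return f"agent executes: {action}"
    return f"escalate to human in the loop: {action} (confidence {confidence:.0%} < {threshold:.0%})"


print(route("call_center", 0.87, "answer billing question"))   # agent executes
print(route("finance", 0.87, "post journal entry"))            # escalates to a human
```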

How will companies partner to close the remaining 15% data gap in this example, whether that is the next 10% or the last 5%?

Let’s go back to how we train our AI, our large language models, and our agentic AI. How trustworthy is our initial data set?

For example, if we are training all of our AI on our own in-house corporate data that we trust, then we can feel confident, because this is our own sovereign data.

For example, if you are an insurance underwriter with 30 years of history on the risks, the payouts, and the statistical outcomes of underwriting a particular type of property or casualty line, you will have a much higher degree of confidence than you would in external data that is not yours.

Management teams will have to identify how they fill that hypothetical last 15% of information, who they partner with for that data, and how credible that data is.

The biggest takeaway for boards that may not be deeply technical is that you must have a clean data set to train AI. Just as a corporation could not enter the digital age without clean data, the same is true for entering the AI and agentic age.

If we look at the major digital disruptors, companies like Airbnb and Uber, they use their own data, cleaned and in a format that is actionable for training and insights. Uber has been able to innovate because its data is so clean and complete: it can batch rides to earn more rides and more dollars per trip, optimize pricing based on demand, and complete more pick-ups and drop-offs because of AI.

Boards have to have a data strategy. We need to ask management: how clean is our data? Cleaning data is an enormous undertaking. It could easily take a company 1 to 2 years to clean part of its data, and in some circumstances 10 years to clean all of it. Cleaning the data could represent almost 1/3 of the total IT budget. (As a reference point, cyber is almost 1/3 of the IT budget.)

The most efficient way for a company to think about this is to share the cost of cleaning the data internally. Perhaps the marketing business unit, the supply chain function, finance, and customer support each cover part of the cost.

The second step, after you have cleaned your data, is to identify the first agentic use cases. One of the obvious ones with proven traction is using an AI agent to assist with software code generation. Companies are reporting 30 to 50% gains in coding efficiency and product development speed using agentic AI. Salesforce has stated it is not hiring more software engineers.

Other low-hanging-fruit agentic AI use cases include procurement, HR benefits, finance automation (accounts payable, accounts receivable), and customer experience, where 80% of calls can be handled by a software agent. The agentic bot connects you to the human in the loop for exceptions, discounts, billing questions, compliance, filings, etc.

Since everything will be a multi-agent environment going forward, the board should ask management to share the company’s AI agent policy.

There are many compliance and policy questions to be considered.

Here are some of the questions the board should ask early in the process:

    • Do we have a data policy, and have we cleaned our data?

    • Have we started using agentic AI?

    • Do we trust the data that we are training the AI on?

    • Do we have enough information to have a high level of trust?

    • Do we have an AI policy?

    • Do we have a large language model (LLM) policy?

    • Do we have an agentic AI policy?

    • Where and when do we insert a human in the loop to avoid errors?

    • Did we build a kill switch into our agentic AI?

    • Is there an air gap between what our agents do and what becomes public, so we can catch any issues first?

    • Who orchestrates, monitors, and oversees all of the agents?

    • Should we run our agentic AI on-premises or in the cloud?

    • If we use other people’s data, who is responsible if something goes wrong?

While these are daunting questions, and obviously only a beginner’s list, I think it is time for our boards to come up the learning curve together with our executive leadership teams. We need to delve more deeply into the age of AI, large language models, and the current new phase of agentic AI.

These are very big challenges, and this is a very exciting moment. We have gone through phases in rapid succession, with increasing velocity: the Internet, mobile, the cloud, and the digital phase are all behind us. We are on the precipice of the AI phase of the exponential technology adoption curve.

We must all now immerse ourselves in understanding the speed of these impacts. We are likely using old models of the rate of change, while things are now changing at a geometric pace.

If we are to help mitigate the risk for our companies, let’s view this challenge as an opportunity and make it one of the top priorities for 2025.

Special thanks to R “Ray” Wang, technology futurist, for all of his invaluable help in pulling these ideas together.
