By Mike Whitmire, CEO and Co-founder, FloQast
AI is already transforming the finance and accounting functions, from automating routine tasks to generating strategic insights. But with great power comes great responsibility. As AI begins to influence financial figures, forecasts, and filings, one uncomfortable question looms: Who’s on the hook if it gets something wrong?
The short answer: the human at the end of the process.
As the CEO of a company that builds accounting transformation solutions, and as a former auditor myself, I believe the responsibility for AI-generated financial outputs will ultimately rest with finance leaders – CFOs, controllers, and their teams. AI doesn’t absolve responsibility; it amplifies it. And that’s why auditability of AI is no longer a nice-to-have – it’s a fundamental requirement.
The mirage of AI objectivity
Today’s AI can produce balance sheets, income statements, and journal entries with astonishing speed. But speed without scrutiny is a recipe for disaster. These systems can hallucinate, misclassify transactions, or base calculations on flawed logic, often with little indication that anything’s amiss.
The problem is compounded by opacity. Most AI models, especially large language models (LLMs), are black boxes. Their decision-making processes are buried in layers of statistical probabilities and training data. Even developers can’t always explain why a model made a certain decision. That’s a non-starter for finance, where compliance, traceability, and accountability are paramount.
So, what happens when AI-generated financials don’t add up? When numbers are wrong and decisions are made on faulty outputs, boards won’t be pointing fingers at OpenAI, Microsoft, or Google. They’ll be looking at the CFO.
Why traditional audits don’t work for AI – yet
Historically, auditors have validated financials using sampling techniques or manual checks. But if an AI system is involved, especially one with opaque logic, those methods break down. How do you verify a calculation when the logic behind it is buried in a neural network?
One option is to audit the output by re-performing all calculations manually. That’s neither scalable nor realistic, especially amid a growing talent shortage. Another option is to ask auditors to learn how AI models work. But good luck with that; keeping up with constantly evolving LLMs isn’t feasible.
So, we need a third path.
A practical alternative: transparent, traceable AI
Instead of relying on black-box models, finance teams need AI platforms that function more like collaborative partners and less like mysterious oracles. At FloQast, we’re pioneering a new approach: systems where accountants define their workflows in plain language and AI generates custom, visible, testable code in the background.
This architecture flips the AI trust equation. Rather than auditing results after the fact, auditors can examine the process. Every calculation, control, and input is visible, and changes are tracked. Once a workflow is tested, auditors can trust the output with a higher degree of confidence, because it’s not just AI doing the work; it’s AI following clearly defined, auditable rules.
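To make that concrete, here is a minimal sketch of what generated, auditable workflow code could look like. The plain-language rule, field names, and $10,000 threshold are hypothetical illustrations, not FloQast’s actual output; the point is that the logic lives in ordinary, readable code an auditor can inspect, test, and re-run, rather than in the weights of a neural network.

```python
# Hypothetical illustration only, not a specific product's implementation.
# Plain-language rule an accountant might define:
#   "Flag any bank transaction over $10,000 with no matching GL entry."
# Below is the kind of visible, testable code AI could generate from it.

from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    amount: float
    matched_gl_entry: str | None  # GL entry ID, or None if unmatched

THRESHOLD = 10_000.00  # dollar threshold taken from the plain-language rule

def flag_unmatched_large_transactions(transactions):
    """Return transactions over the threshold with no matching GL entry."""
    return [
        t for t in transactions
        if t.amount > THRESHOLD and t.matched_gl_entry is None
    ]

# Because the generated logic is ordinary code, it can be tested directly:
def test_flags_only_large_unmatched():
    txns = [
        Transaction("T1", 15_000.00, None),      # large, unmatched -> flagged
        Transaction("T2", 15_000.00, "GL-042"),  # large, matched   -> ignored
        Transaction("T3", 500.00, None),         # small, unmatched -> ignored
    ]
    flagged = flag_unmatched_large_transactions(txns)
    assert [t.txn_id for t in flagged] == ["T1"]

test_flags_only_large_unmatched()
print("rule behaves as specified")
```

Because the rule lives in version-controlled code rather than in model weights, a change to the threshold shows up as a visible diff that a reviewer can approve and an auditor can trace.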
This isn’t just about building smarter tools. It’s about building trustworthy ones.
Trust is the real currency
For AI to become a pillar of modern finance, it needs to inspire confidence at every level, from the analyst reviewing an automated journal entry to the regulator evaluating the final report. That means showing your work, tracking your changes, and making every output explainable.
Consider something as mundane as a journal entry. If AI classifies a transaction, it should also explain why. What logic was applied? What controls were in place? Who reviewed it? In a future audit, the system should be able to answer all of those questions. Without that transparency, you’re just hoping the machine got it right. And that’s not good enough.
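One way to picture that is a structured explainability record attached to each AI-classified entry. The schema below is purely illustrative, an assumption about what such a record might contain rather than any standard or any specific product’s format:

```python
# Illustrative only: one possible shape for an explainability record
# attached to an AI-classified journal entry. All field names and
# values here are assumptions, not a real schema.

import json
from datetime import datetime, timezone

audit_record = {
    "entry_id": "JE-2025-00417",
    "classification": "Travel & Entertainment",
    "rationale": "Vendor matched 'Delta Air Lines'; memo contains 'conference airfare'.",
    "rule_version": "t_and_e_rules_v3.2",           # which tested logic was applied
    "controls_applied": ["vendor_allowlist_check", "amount_threshold_check"],
    "reviewed_by": "jsmith@example.com",            # the human on the hook
    "reviewed_at": datetime(2025, 3, 14, 16, 5, tzinfo=timezone.utc).isoformat(),
}

# An auditor, or a future audit tool, can retrieve and inspect the record:
print(json.dumps(audit_record, indent=2))
```

A record like this answers the three audit questions directly: the rationale says what logic was applied, the controls list says what checks ran, and the reviewer fields say who signed off and when.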
The stakes are too high
Unchecked AI in finance isn’t just a risk, it’s a liability. Misstatements can lead to restatements, fines, and reputational damage. More importantly, they erode the trust that underpins financial reporting.
The future of AI in finance must be built on traceability, transparency, and accountability. AI should make audits easier, not harder. It should reduce risk, not create it. That means designing systems with auditability at their core, not bolted on as an afterthought.
So, who’s on the hook if AI gets the numbers wrong? You are. But with the right systems in place, you’ll also be the reason it gets them right.