Beyond ROI: Measuring AI's True Value
- Dr. Cornelia Walther introduces Hybrid Return on Values (ROV) to track broader AI impacts
- Proposed 'ProSocial AI Index' evaluates systems across purpose, people, profit, and planet dimensions
- New framework targets four critical systemic risks: agency decay, bond erosion, climate impact, and societal division
For years, the gold standard for evaluating any corporate investment has been the Return on Investment (ROI). It is a clean, quantitative metric—simple to track, easy to present on a slide, and deeply rooted in financial accounting. However, as artificial intelligence becomes increasingly embedded in the fabric of our daily lives, this financial shorthand is proving dangerously inadequate. We are currently witnessing a mismatch between how we measure success and how the world actually functions. Traditional ROI tracks only what happens inside a company's ledger, ignoring the social, psychological, and ecological externalities that occur beyond that thin boundary.
Dr. Cornelia Walther, an associate professor at Sunway University, argues that we need a fundamental shift toward 'Hybrid Return on Values' (ROV). This framework proposes that we stop looking at financial gains in isolation and instead evaluate AI tools across a quadruple bottom line: purpose, people, profit, and the planet. The core premise is that artificial intelligence does not exist in a vacuum; it operates within human systems. When an algorithm automates customer service, it might save money in the short term, but it simultaneously hollows out the meaning of work and alters the quality of human connection. These are not merely 'soft' considerations; they are the substrate upon which markets, societies, and biological health ultimately depend.
To turn this philosophy into action, the proposed 'ProSocial AI Index' creates a 16-cell assessment matrix. It forces organizations to grapple with difficult questions before deploying new systems: Does this tool empower individuals or encourage dependency? Does it strengthen or erode community bonds? By implementing a 'veto' system—where catastrophic performance in one category cannot be offset by high profits in another—the index serves as an early warning system for unintended consequences. This isn't just about ethics in the abstract; it is a direct response to rising systemic risks like 'agency decay'—the subtle, quiet loss of human initiative and critical thinking in the face of automated decision-making.
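The veto mechanism described above can be sketched in a few lines of code. The following is a hypothetical illustration, not the published index: the 1-to-5 scoring scale, the veto threshold, and the pairing of the four value dimensions with the four systemic risks to form the 16 cells are all assumptions made for the sake of the example.

```python
# Illustrative sketch of a 16-cell assessment matrix with veto logic.
# Scale, threshold, and cell layout are assumptions, not the actual index.

DIMENSIONS = ["purpose", "people", "profit", "planet"]
RISKS = ["agency decay", "bond erosion", "climate impact", "societal division"]

VETO_THRESHOLD = 2  # a score at or below this level vetoes deployment


def assess(scores: dict[tuple[str, str], int]) -> dict:
    """Evaluate a 4x4 matrix of scores (1 = catastrophic, 5 = excellent).

    A veto in any single cell cannot be offset by high scores elsewhere.
    """
    vetoes = [cell for cell, s in scores.items() if s <= VETO_THRESHOLD]
    return {
        "approved": not vetoes,
        "vetoed_cells": vetoes,
        "average_score": sum(scores.values()) / len(scores),
    }


# A system with excellent scores everywhere but a catastrophic climate
# impact still fails, despite a high overall average:
scores = {(d, r): 5 for d in DIMENSIONS for r in RISKS}
scores[("planet", "climate impact")] = 1
result = assess(scores)
```

The key design choice is that approval is a conjunction over cells rather than a weighted sum, which is exactly what makes it an early warning system: a single catastrophic cell surfaces immediately instead of being averaged away.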
For university students and future leaders, this shift is critical. We are currently living through a period where the 'exposome'—the cumulative environmental and social conditions of our lives—is increasingly influenced by digital and AI-driven inputs. If we continue to view technology through the narrow lens of financial efficiency, we risk building systems that thrive on paper while degrading the complex, interdependent environments we actually inhabit. The four-question test proposed by Walther—evaluating alignment with values, human dignity, total costs, and ecological limits—is a necessary baseline for anyone designing or procuring the next generation of intelligent tools. Decisions that cannot survive such scrutiny, she suggests, are likely too costly for society to bear in the long run.
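The four-question test lends itself to the same all-or-nothing treatment. The sketch below is a hypothetical rendering: the question wording condenses the article's summary, and the function name is illustrative.

```python
# Hypothetical rendering of the four-question test as an all-or-nothing
# gate; wording and names are illustrative, not Walther's exact phrasing.

FOUR_QUESTIONS = [
    "Does the system align with our stated values?",
    "Does it uphold human dignity?",
    "Are the total (social, psychological, ecological) costs acceptable?",
    "Does it operate within ecological limits?",
]


def survives_scrutiny(answers: list[bool]) -> bool:
    """A decision passes only if every question is answered 'yes';
    a single 'no' marks it as too costly for society to bear."""
    if len(answers) != len(FOUR_QUESTIONS):
        raise ValueError("one answer is required per question")
    return all(answers)
```

As with the veto matrix, the point is that the questions are not traded off against one another: failing any one of them is meant to halt the decision.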