When AI Runs the Cafe: Innovation or Inefficiency?
- Andon Labs experiments with fully AI-managed retail, now moving from San Francisco to Stockholm.
- The AI manager frequently orders bizarre, excessive supplies, ignoring practical physical constraints of the shop.
- External suppliers and municipal agencies are increasingly burdened by unmonitored AI-driven administrative errors.
In the race to automate everything, we are witnessing a shift from AI as a helpful assistant to AI as an autonomous operator. A recent project in Stockholm, spearheaded by Andon Labs, has put this shift on full display by tasking an AI system with the daily management of a physical cafe. The goal is bold: to see if large language models can handle inventory, procurement, and even municipal interactions without human intervention. However, the results have been less of a streamlined business revolution and more of a cautionary tale about the limits of current AI reasoning in the physical world.
The anecdotes from this experiment highlight a recurring theme in AI development: the 'hallucination' of practicality. The system, affectionately or ironically named Mona, has demonstrated a startling lack of context regarding physical inventory. It has attempted to order hundreds of eggs for a kitchen without a stove and suggested using an industrial oven to cook them, oblivious to the obvious safety hazards. These comical operational failures—like ordering 6,000 napkins—are amusing when confined to a 'Hall of Shame' shelf, but they reveal a deeper friction point: the gap between digital logic and physical reality.
What moves this from a quirky science experiment to an ethical concern is the interaction with the outside world. An AI agent is not just manipulating numbers in a sandbox; it is interacting with human suppliers, police services, and local administration. When the system floods suppliers with emergency emails to correct its own illogical orders, or submits nonsensical sketches for outdoor seating permits to local law enforcement, it consumes human time and patience. This is the 'externalized cost' of AI. It forces real people, who never consented to participate in a tech demo, to clean up the messes created by an unmonitored algorithm.
The broader lesson for university students and future developers is the necessity of the 'human-in-the-loop' paradigm. While the vision of agentic AI—systems capable of setting and achieving goals autonomously—is the current frontier of the industry, we must distinguish between internal testing and external deployment. Systems that interface with critical infrastructure, such as government agencies or supply chain logistics, require rigorous guardrails.
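To make the 'human-in-the-loop' idea concrete, here is a minimal sketch of what such a guardrail could look like in code. This is purely illustrative: the names (`OrderRequest`, `requires_human_approval`), thresholds, and equipment list are assumptions for the example, not details from the Andon Labs system.

```python
# Hypothetical human-in-the-loop guardrail for an agentic ordering system.
# All names and thresholds below are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Set


@dataclass
class OrderRequest:
    item: str
    quantity: int
    unit_cost: float


# Hard physical and financial limits a purely digital agent
# has no intuition for.
MAX_QUANTITY = 200          # storage capacity of the shop, in units
MAX_SPEND = 500.0           # per-order budget ceiling, in dollars
EQUIPMENT_ON_SITE = {"espresso machine", "fridge", "microwave"}


def requires_human_approval(order: OrderRequest,
                            needed_equipment: Optional[Set[str]] = None) -> bool:
    """Return True if the order must be escalated to a human operator
    instead of being sent straight to a supplier."""
    if order.quantity > MAX_QUANTITY:
        return True
    if order.quantity * order.unit_cost > MAX_SPEND:
        return True
    # Block orders whose use depends on equipment the shop lacks,
    # e.g. hundreds of eggs for a kitchen without a stove.
    if needed_equipment and not needed_equipment <= EQUIPMENT_ON_SITE:
        return True
    return False


# 6,000 napkins trips the quantity limit; 300 eggs trips the
# equipment check, since there is no stove on site.
print(requires_human_approval(OrderRequest("napkins", 6000, 0.02)))  # True
print(requires_human_approval(OrderRequest("eggs", 300, 0.25),
                              needed_equipment={"stove"}))           # True
```

The point is not the specific thresholds but the architecture: anomalous actions are routed to a person before they reach the outside world, so the cost of a hallucinated order stays inside the experiment.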
True innovation in AI is not simply about replacing the human operator, but about creating systems that operate with enough situational awareness to avoid becoming a public nuisance. As we develop more advanced autonomous agents, we must ensure they respect the boundaries of the human systems they inhabit. Experiments that fail to protect the time and resources of non-users cross the line from innovation into negligence, a distinction that will likely become the central debate in AI policy for years to come.