New Research on AI Regulation, Neural Computing, and Economics
- Institute for Law & AI proposes 'radical optionality' to govern transformative AI without premature overregulation
- Meta and KAIST prototype 'Neural Computers' that unify computation, memory, and I/O in a single neural network
- New study finds 13% automation across sectors could trigger explosive economic growth via recursive self-improvement
The Institute for Law & AI recommends a regulatory approach termed 'radical optionality,' advocating that governments invest in institutional capacity rather than enact restrictive laws prematurely. Key recommendations include information-sharing mandates, whistleblower protections for employees at frontier labs, and flexible regulatory frameworks whose definitions of AI risk and capability can evolve over time.
Researchers from Meta and KAIST introduced 'Neural Computers,' a concept in which a neural network functions as a traditional computer by unifying computation, memory, and input/output in a single learned runtime state. Using models such as Wan 2.1, the researchers demonstrated prototypes capable of executing basic command-line interface (CLI) and graphical user interface (GUI) tasks, aiming toward a future in which software resides within the weights of a neural network.
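To make the idea concrete, here is a minimal PyTorch sketch of the general pattern rather than the Meta/KAIST architecture: a persistent latent vector stands in for memory, one recurrent update stands in for a step of computation, and small encoder/decoder heads act as the I/O ports. The class name `ToyNeuralComputer` and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToyNeuralComputer(nn.Module):
    """Toy illustration only: a persistent latent vector plays the role of
    memory, one recurrent update plays the role of a CPU clock tick, and
    small encoder/decoder heads act as the I/O ports."""

    def __init__(self, state_dim: int = 256, io_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(io_dim, state_dim)   # "input port"
        self.core = nn.GRUCell(state_dim, state_dim)  # one step of "computation"
        self.decoder = nn.Linear(state_dim, io_dim)   # "output port"
        self.register_buffer("state", torch.zeros(1, state_dim))  # "memory"

    def step(self, observation: torch.Tensor) -> torch.Tensor:
        # Fold the observation into memory, advance one tick, emit an output.
        self.state = self.core(self.encoder(observation), self.state)
        return self.decoder(self.state)

computer = ToyNeuralComputer()
command = torch.randn(1, 64)          # stand-in for an encoded CLI command
for _ in range(3):                    # "running a program" = repeated ticks
    output = computer.step(command)
print(output.shape)                   # torch.Size([1, 64])
```

Keeping the state as a runtime buffer rather than a learned parameter mirrors the distinction the concept draws: the weights hold the "software," while the evolving state holds the machine's working memory.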
Economists from Forethought, Columbia University, and the University of Virginia modeled how AI-driven automation could trigger explosive economic growth. The study suggests that automating 13% of all sectors, or 20% of hardware research alone, could initiate a compounding feedback loop in which AI systems automate their own subsequent development. Hardware research appears to be the primary driver, with returns roughly five times greater than those from software automation.
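The compounding dynamic is easy to see in a toy simulation. The sketch below is illustrative only: the base growth rate, feedback strength, and functional form are assumptions rather than the paper's model, but it shows how a modest automated share can snowball once growth feeds back into further automation.

```python
def simulate_takeoff(years=15, automated=0.13, base_growth=0.03, feedback=0.5):
    """Toy compounding loop: output grows at a base rate plus a bonus
    proportional to the automated share, and that growth in turn expands
    the automated share. All numbers here are illustrative."""
    output = 1.0
    for year in range(1, years + 1):
        rate = base_growth + feedback * automated        # automation boosts growth
        output *= 1 + rate
        automated = min(1.0, automated * (1 + rate))     # growth feeds back into automation
        print(f"year {year:2d}: output x{output:6.2f}, automated {automated:4.0%}")

simulate_takeoff()
```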
Google introduced 'Decoupled DiLoCo,' a distributed training technique that enables resilient model training across geographically separated datacenters. The system trains asynchronously across separate 'islands of compute,' so a hardware failure in one location does not halt the entire run. In testing, Google trained a 12-billion-parameter model across four U.S. regions over 2-5 Gbps network links.
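The sketch below shows the general DiLoCo-style pattern the article describes, not Google's implementation: each island runs many cheap local optimizer steps on its own data, and only averaged parameter deltas cross the slow inter-datacenter links at each outer round. The function `outer_round` and all hyperparameters are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

def outer_round(global_model, islands, loaders, inner_steps=500, outer_lr=0.7):
    """One DiLoCo-style outer round (a sketch, not Google's code): every
    island trains a local replica for many inner steps, and only the
    averaged parameter delta crosses the inter-datacenter link."""
    deltas = [torch.zeros_like(p) for p in global_model.parameters()]
    for model, loader in zip(islands, loaders):
        model.load_state_dict(global_model.state_dict())       # start from global weights
        inner_opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
        for _, (x, y) in zip(range(inner_steps), loader):       # local steps, no cross-site traffic
            inner_opt.zero_grad()
            nn.functional.mse_loss(model(x), y).backward()
            inner_opt.step()
        for d, p_new, p_old in zip(deltas, model.parameters(), global_model.parameters()):
            d += (p_old.detach() - p_new.detach()) / len(islands)  # averaged "pseudo-gradient"
    with torch.no_grad():                                       # outer step on the averaged delta
        for p, d in zip(global_model.parameters(), deltas):     # (DiLoCo-style recipes typically
            p -= outer_lr * d                                   #  add Nesterov momentum here)

# Hypothetical usage with two islands and random data, just to exercise the loop.
global_model = nn.Linear(8, 1)
islands = [copy.deepcopy(global_model) for _ in range(2)]
loaders = [[(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(20)] for _ in range(2)]
outer_round(global_model, islands, loaders, inner_steps=20)
```

In the decoupled, asynchronous variant described above, islands would report their deltas as they finish rather than waiting at a synchronization barrier, which is what allows a failed datacenter to drop out without stalling the others.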