Simon Willison Demonstrates Running LLMs via Script Shebangs
- Simon Willison detailed methods for invoking LLM command-line tools directly from script shebang lines.
- The technique supports passing natural language prompts, external tool calls, and YAML-defined system templates.
- Scripts can execute Python functions and perform calculations, such as computing 2344 * 5252 + 134 to return 12,310,822.
Simon Willison (software developer) outlined methods for embedding LLM commands directly into script shebang lines, enabling AI prompts to run as standalone executable scripts. The technique relies on `#!/usr/bin/env -S`, which splits a single shebang string into multiple arguments; this makes it possible to pass flags and arguments to an LLM command-line interface (CLI) tool from the shebang itself, supporting a range of operational patterns.
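The enabling mechanism is `env -S` argument splitting, which is standard GNU/BSD `env` behavior and can be demonstrated without any LLM installed. In this minimal sketch, `echo` stands in for the actual LLM CLI tool:

```shell
#!/bin/sh
# A shebang line normally passes at most one argument to the interpreter.
# env -S works around this: it splits its single string argument into
# separate words before executing the command. Here echo stands in for
# the llm CLI, so the command below prints the split-out words.
/usr/bin/env -S 'echo one two three'
```

Because the kernel appends the script's own path (and any user-supplied arguments) after the shebang arguments, a shebang interpreter line built this way ends up invoking the CLI tool with both its flags and the script file itself.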
The simplest configuration supports direct prompt execution, such as generating an SVG image. Advanced implementations incorporate external tool calls using the -T flag, which enables models to access specific functions—such as fetching the current time—while processing natural language requests.
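As a sketch of the tool-calling pattern described above (hedged: this assumes llm 0.26+, where the `-T` flag names a tool the model is allowed to call, and uses the `llm_time` example tool from Willison's announcement):

```sh
# Sketch, not verified here: -T exposes a named tool to the model,
# which it can invoke while answering a natural language request.
llm -T llm_time 'What time is it right now?'
```

The same flag can appear in a shebang line via `env -S`, so the tool grant travels with the script rather than being retyped on each invocation.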
The approach also supports YAML-based templates that define system instructions and custom Python functions. An example calculation script using this method computed 2344 * 5252 + 134 and returned 12,310,822. These patterns allow for programmatic, LLM-driven automation within standard shell environments.
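A template-based script might look like the following sketch. This is hedged in several respects: it assumes `llm -t` accepts a file path (not just a saved template name) and that templates support a `functions:` block of Python tool definitions (llm 0.26+); the filename and function names are hypothetical illustrations, not taken from the original post.

```yaml
#!/usr/bin/env -S llm -t
# Hypothetical file: calc.yaml -- a YAML template whose shebang
# re-invokes llm with this file as the template. The leading '#!' line
# is a YAML comment, so the file still parses as YAML.
system: Use the provided functions to evaluate arithmetic exactly.
functions: |
  def multiply(a: int, b: int) -> int:
      return a * b

  def add(a: int, b: int) -> int:
      return a + b
```

Under these assumptions, running `chmod +x calc.yaml` and then `./calc.yaml '2344 * 5252 + 134'` would execute `llm -t ./calc.yaml '2344 * 5252 + 134'`, letting the model answer via the supplied functions.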