NVIDIA and Siemens Unveil AI-Powered Adaptive Ultrasound Imaging
- NVIDIA and Siemens Healthineers introduce NV-Raw2Insights-US for physics-informed, real-time ultrasound reconstruction.
- System processes raw probe signals directly, replacing traditional hand-engineered beamforming pipelines.
- Architecture utilizes NVIDIA Holoscan and Blackwell-class GPUs for low-latency, patient-specific adaptive image focusing.
For decades, medical ultrasound imaging has relied on a standardized, hand-engineered pipeline. This traditional approach reconstructs final images by compressing vast amounts of raw data and applying generalized assumptions, such as a single fixed speed of sound, that overlook the unique acoustic properties of each patient's tissue. In a significant shift toward AI-native diagnostics, NVIDIA and Siemens Healthineers have collaborated to move beyond these limitations with a new system: NV-Raw2Insights-US.
This approach fundamentally changes how ultrasound data is processed. Instead of feeding the AI pre-reconstructed images, the model ingests the raw signals directly from the ultrasound probe, allowing it to "listen" to the echoes more carefully. By estimating the actual speed of sound for each specific patient in real time, the system can generate a personalized sound-speed map, which then corrects and optimizes the image focus instantly.
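To see why the assumed speed of sound matters so much, consider classical delay-and-sum beamforming, the hand-engineered step the new system replaces. The sketch below is a deliberately simplified toy (NumPy only, one image point, one-way delays, made-up probe geometry), not NVIDIA's or Siemens' implementation: it shows how echoes from a scatterer add coherently when the assumed sound speed matches the tissue, and lose focus when it is wrong.

```python
import numpy as np

def receive_delays(element_x, focus_x, focus_z, c):
    """One-way travel time from an image point to each array element.

    element_x : (N,) lateral element positions in meters
    focus_x, focus_z : image-point coordinates in meters
    c : assumed speed of sound in m/s
    """
    dist = np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2)
    return dist / c

def delay_and_sum(rf, fs, element_x, focus_x, focus_z, c):
    """Beamform a single image point from per-channel RF data.

    rf : (n_elements, n_samples) raw channel signals
    fs : sampling rate in Hz
    Picks each channel's sample at its computed delay and sums them;
    coherent alignment (correct c) yields a large value, misalignment a small one.
    """
    delays = receive_delays(element_x, focus_x, focus_z, c)
    idx = np.clip(np.round(delays * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()

# --- Toy simulation with illustrative (hypothetical) parameters ---
fs = 40e6                       # 40 MHz sampling
n_elem = 64
pitch = 0.3e-3                  # 0.3 mm element pitch
element_x = (np.arange(n_elem) - n_elem / 2) * pitch
c_true = 1540.0                 # actual tissue sound speed (m/s)
scat_x, scat_z = 0.0, 0.03      # point scatterer 3 cm deep

# Each channel records a Gaussian echo arriving at its true delay.
t = np.arange(2048) / fs
tau = receive_delays(element_x, scat_x, scat_z, c_true)
rf = np.exp(-(((t[None, :] - tau[:, None]) * fs) / 8) ** 2)

good = delay_and_sum(rf, fs, element_x, scat_x, scat_z, c_true)   # correct c
bad = delay_and_sum(rf, fs, element_x, scat_x, scat_z, 1400.0)    # mismatched c
print(f"focused: {good:.1f}, defocused: {bad:.3f}")
```

With the correct speed, all 64 channels sample near the echo peak and the sum approaches 64; a 140 m/s mismatch shifts the sampling points by dozens of samples and the coherent sum collapses. Estimating a per-patient sound-speed map, as the article describes, amounts to finding the `c` that restores this alignment everywhere in the image.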
The deployment of this technology relies on an advanced hardware-software stack designed for the high-performance demands of medical environments. The team uses the Holoscan Sensor Bridge, an open-source FPGA IP that enables high-bandwidth, low-latency data streaming from traditional scanners directly into a GPU. Once the data reaches the system, it is processed by an accelerated inference engine running on Blackwell-class GPUs. This allows clinicians to receive high-fidelity, adaptive imaging without the computational delays associated with conventional fixed-pipeline reconstruction.
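A quick back-of-the-envelope calculation shows why a dedicated FPGA bridge into the GPU is needed at all. The parameter values below are illustrative assumptions for a generic probe (the article does not publish the actual channel counts or sampling rates), but they convey the scale of raw channel data that must move continuously.

```python
# Hypothetical probe configuration -- illustrative values, not vendor specs.
n_channels = 128        # receive channels read out simultaneously
sample_rate_hz = 40e6   # ADC sampling rate per channel (40 MHz)
bytes_per_sample = 2    # 16-bit samples

rate_bytes_per_s = n_channels * sample_rate_hz * bytes_per_sample
print(f"raw data rate: {rate_bytes_per_s / 1e9:.2f} GB/s")
```

Even this modest configuration sustains roughly 10 GB/s of raw channel data, far beyond what a conventional image-out interface carries, which is why the raw-signal approach depends on streaming hardware like the Sensor Bridge rather than a software shim.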
Beyond the immediate gains in image clarity, this development marks a transition toward "software-defined ultrasound." Because the processing is no longer locked into rigid, hardware-coded pipelines, the system can be improved continuously through software updates and modular AI expansion. By effectively turning the ultrasound machine into an adaptive, AI-aware platform, this collaboration establishes a new, scalable foundation for the next generation of diagnostic imaging tools.