NVIDIA has unveiled foundation models capable of running locally on its RTX AI PCs. Powered by GeForce RTX 50 Series GPUs, these systems deliver up to 3,352 trillion operations per second of AI performance and include 32GB of VRAM, which the company claims will accelerate advanced content creation, productivity, and development workflows.

The GPUs are built on the Blackwell architecture and incorporate FP4 compute technology. According to NVIDIA, this innovation doubles AI inference performance and allows generative AI models to run locally with reduced memory requirements. FP4 compute, which utilises 4-bit floating-point precision, optimises both memory usage and computational efficiency, making it especially effective for handling large AI workloads on consumer devices.
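The memory savings behind FP4 can be illustrated with a short sketch. The E2M1 bit layout (1 sign, 2 exponent, 1 mantissa bit) and the single per-tensor scale used below are assumptions for illustration only; NVIDIA has not detailed its scheme here, and production quantisers typically use finer-grained per-block scaling.

```python
# Illustrative FP4 (4-bit floating-point) weight quantisation.
# Assumes the E2M1 layout, whose positive magnitudes are the eight
# values below; the single per-tensor scale is a simplification.
FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantise_fp4(x: float, scale: float) -> float:
    """Round x to the nearest representable FP4 value under the given scale."""
    magnitude = min(FP4_MAGNITUDES, key=lambda m: abs(m - abs(x) / scale))
    return magnitude * scale if x >= 0 else -magnitude * scale

weights = [0.12, -0.07, 0.31, -0.45]
scale = max(abs(w) for w in weights) / FP4_MAGNITUDES[-1]  # largest weight maps to 6.0
quantised = [quantise_fp4(w, scale) for w in weights]

# Memory footprint: 4 bits per weight instead of 16, so a 7B-parameter
# model's weights shrink from ~14 GB (FP16) to ~3.5 GB (ignoring scale metadata).
fp16_gb = 7e9 * 2 / 1e9    # 14.0 GB
fp4_gb = 7e9 * 0.5 / 1e9   # 3.5 GB
```

The 4x reduction in weight storage is what lets models that would otherwise exceed a consumer GPU's VRAM run locally, at the cost of coarser rounding of each weight.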

NVIDIA AI Blueprints support workflows such as converting PDFs into podcasts, generating 3D-guided images, and creating digital humans. These workflows cater to a range of industries, including education, where AI can automate learning tools, and media production, where generative AI simplifies complex tasks like video editing. NIM microservices enable localised AI processing, working seamlessly with frameworks like LangChain, ComfyUI, and AnythingLLM. NVIDIA has also introduced the Llama Nemotron family of models, tailored for tasks including instruction-following, coding, and conversational AI.

“AI is advancing at light speed, from perception AI to generative AI and now agentic AI,” said NVIDIA founder and CEO Jensen Huang. “NIM microservices and AI Blueprints give PC developers and enthusiasts the building blocks to explore the magic of AI.”

Competitors drive innovation in AI PCs

To bring these products to market, NVIDIA has partnered with leading PC manufacturers, including Dell, ASUS, Lenovo, and HP, as well as system builders like Falcon Northwest and Origin PC. These NIM-ready RTX AI PCs will initially support the GeForce RTX 50 Series, RTX 4090, and RTX 4080 GPUs, with additional hardware compatibility expected in the future.

The launch of RTX AI PCs positions NVIDIA at the forefront of the growing AI PC market, which has seen increasing competition as other major players introduce their own AI-enabled computing solutions.

AMD has introduced its Ryzen AI Max processors, equipped with embedded AI engines capable of delivering up to 50 trillion operations per second. These processors are being integrated into Dell Technologies systems to power applications such as real-time language processing and predictive analytics. The company has positioned these processors for enterprise use cases, including customer support and operational optimisation through AI-driven insights.

Intel is advancing its Lunar Lake platform, designed to deliver over 100 trillion operations per second of AI performance. The platform emphasises hybrid AI processing, combining localised computing with cloud-based capabilities to optimise scalability and efficiency. Lunar Lake processors are expected to power AI-enhanced workflows such as live video editing, automated transcription, and multi-tasking for professional users.

Qualcomm, known for its ARM-based processors, is focusing on lightweight devices with its Snapdragon X Elite series. These processors bring AI capabilities such as transcription and video editing to mobile PCs, targeting a growing market for ultra-portable AI-enabled systems. Qualcomm is collaborating with PC manufacturers to deliver devices tailored for remote workers and students who require efficient on-the-go AI processing.

Read more: What’s the point of AI PCs?