At CES 2026, ASUS set a decisive benchmark for the future of local artificial intelligence by unveiling a deskside system engineered to redefine how advanced AI workloads are developed, trained, and deployed. The introduction of the ExpertCenter Pro ET900N G3 signals a fundamental shift in AI infrastructure: one where data-center-class performance becomes accessible at the desk, without compromise.

This landmark system integrates the NVIDIA Grace Blackwell Ultra Superchip, an architectural breakthrough that unifies CPU and GPU computing at unprecedented scale. With 775GB of coherent unified memory and up to 20 PFLOPS of AI performance, the platform delivers supercomputer-level capabilities in a form factor designed for researchers, enterprises, and innovators who demand immediate, private, and uncompromising compute power.

Redefining Deskside AI with Data-Center-Class Performance

We are witnessing a pivotal moment in AI computing where cloud scale alone is no longer sufficient. Modern AI development increasingly requires local execution, low latency, and absolute data sovereignty. The showcased system addresses this demand head-on by delivering petaflop-scale AI compute directly to the desktop.

Powered by the NVIDIA Grace CPU and Blackwell Ultra GPU, tightly coupled via NVLink-C2C, the system achieves ultra-low-latency, high-bandwidth communication between compute elements. This architecture eliminates traditional bottlenecks, enabling seamless execution of large language models (LLMs), multi-modal AI, deep learning pipelines, and advanced simulations without relying on remote infrastructure.

The NVIDIA Grace Blackwell Ultra Platform Advantage

At the heart of this deskside supercomputer lies a platform engineered specifically for next-generation AI. The Grace Blackwell Ultra Superchip integrates compute and memory into a unified coherent system, enabling workloads that were previously constrained to large data centers.

Key advantages include:

  • Massive unified memory capacity that allows entire AI models to reside in memory without partitioning
  • Consistent performance scaling across training, fine-tuning, and inference
  • Superior energy efficiency compared to multi-GPU workstation clusters
  • Native support for AI-first workflows, including LLM fine-tuning and generative AI experimentation

This combination empowers developers to iterate faster, test larger models, and achieve production-ready results directly from a deskside environment.

775GB Coherent Memory: Eliminating AI Workflow Barriers

Memory has become the defining constraint in modern AI. As models grow exponentially in size, traditional workstations struggle with fragmentation, offloading, and latency penalties. With 775GB of coherent CPU-GPU memory, the ExpertCenter Pro ET900N G3 eliminates these limitations entirely.
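To make the memory constraint concrete, here is a rough back-of-the-envelope sketch of how much memory a model's weights alone require at common precisions. The 200-billion-parameter model is purely hypothetical, and the estimate ignores activations, optimizer state, and KV caches; it simply illustrates why a large coherent memory pool matters.

```python
# Rough estimate of the memory needed to hold a model's weights in a
# single coherent memory pool. Illustrative only: real workloads also
# need room for activations, optimizer state, and KV caches.

BYTES_PER_PARAM = {"fp16": 2, "fp8": 1, "fp4": 0.5}

def weight_memory_gb(num_params_billion: float, precision: str) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return num_params_billion * BYTES_PER_PARAM[precision]

# A hypothetical 200B-parameter model against a 775GB memory pool:
for prec in ("fp16", "fp8", "fp4"):
    gb = weight_memory_gb(200, prec)
    print(f"{prec}: {gb:.0f} GB of weights, fits in 775 GB: {gb <= 775}")
```

Even at FP16, such a model's weights fit comfortably in a 775GB pool without sharding, which is the point the paragraph above makes.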

We see immediate benefits across:

  • Large-scale model training without memory sharding
  • Real-time inference on massive parameter models
  • Accelerated experimentation cycles for data scientists
  • Improved reliability for mission-critical AI workloads

This memory architecture delivers more than double the effective GPU memory of a four-GPU workstation, redefining what is possible in a single deskside system.

DGX Station Architecture Comes to the Desk

Built on NVIDIA DGX Station architecture, the system inherits the design principles used in elite AI research environments worldwide. Unlike traditional desktops or servers, this architecture is purpose-built for AI acceleration, combining compute density, memory coherence, and software optimization into a single, unified platform.

We recognize this as a turning point for organizations that require supercomputer-level capability without server-room complexity. The deskside form factor enables rapid deployment, simplified maintenance, and immediate integration into existing workflows.


Enterprise-Grade AI Software Integration

Hardware alone does not define AI performance. The platform is paired with the NVIDIA AI software stack, delivering a turnkey environment optimized for:

  • Data science and analytics
  • Machine learning model development
  • LLM fine-tuning and inference
  • Simulation and high-performance computing

This software ecosystem ensures compatibility with industry-standard frameworks while maximizing performance through deep hardware-software integration.

Solving the Last-Mile AI Compute Challenge

Despite the growth of hyperscale cloud infrastructure, many AI professionals face a persistent “last-mile” challenge. Cloud platforms introduce latency, data governance concerns, and escalating operational costs. Traditional workstations lack the scale and memory required for advanced models.

We address this gap by bringing data-center-class AI compute directly to the desktop, enabling:

  • Private, secure AI development
  • Immediate access to full compute capacity
  • Reduced dependency on cloud resources
  • Faster innovation cycles

This approach transforms AI development from a remote, constrained process into a local, empowered experience.

Scalable Supercomputing at the Desk

The architecture supports interconnecting two systems, doubling available supercomputing power when required. This scalability ensures long-term relevance as AI workloads evolve, protecting investments while enabling future growth.

This is a strategic advantage for enterprises and research institutions planning multi-year AI roadmaps.

Designed for Researchers, Enterprises, and Innovators

This deskside AI supercomputer is engineered for a broad spectrum of advanced users:

  • AI researchers developing next-generation models
  • Enterprises deploying private AI pipelines
  • Data scientists requiring rapid iteration
  • Advanced creators pushing generative AI boundaries

By delivering extreme performance in an accessible form factor, the system democratizes capabilities once reserved for elite data centers.

A Milestone in Accessible AI Supercomputing

The introduction of the ExpertCenter Pro ET900N G3 marks a defining milestone in AI infrastructure evolution. Exceptional compute density, massive coherent memory, and enterprise-grade software converge to create a platform that fundamentally reshapes deskside computing.

As AI models continue to scale and real-time, private compute becomes indispensable, this system positions itself as the definitive solution for professionals who demand uncompromising performance at the point of innovation.

The Future of AI Lives on the Desk

We see a future where AI supercomputing is no longer confined to distant data centers. By delivering up to 20 PFLOPS of AI performance, 775GB of unified memory, and DGX-class architecture in a deskside system, ASUS establishes a new standard for local AI development.

This is not an incremental upgrade; it is a transformational leap that empowers innovators to build, test, and deploy the next generation of AI directly from their workspace, with confidence, control, and unprecedented capability.