This Tiny Lenovo Box Brings Supercomputer Power to Your Desk



Inside this 150mm cube lives NVIDIA’s GB10 Grace Blackwell Superchip, the same architecture powering data centers around the world. Lenovo claims up to 1 petaflop of AI performance, which sounds like marketing speak until you dig into what it actually delivers. Real-world AI workloads run at around 500 teraflops using FP8 precision, and that’s legitimately impressive for something that weighs 2.6 pounds and runs off USB-C power.

The chip itself is fascinating. You get a 20-core ARM processor split between 10 high-performance cores and 10 efficiency cores, all sharing 128GB of unified memory with the GPU. That unified memory architecture is the secret sauce here. Instead of shuffling data back and forth between CPU and GPU memory, everything lives in one big pool that both processors can access directly. Memory bandwidth hits 273 to 301 GB/s depending on workload, which keeps those AI models fed with data.

Storage tops out at 4TB, all running through NVMe for speed. The whole system sips 240W of power, which means you can run it off the same USB-C charger that powers high-end laptops. No special electrical work, no cooling infrastructure, just plug it in and start working.

To put the performance in perspective, a single unit can handle AI models with up to 200 billion parameters. Link two units together using the QSFP ports and you can tackle models with 405 billion parameters. That puts you in Meta’s Llama 3.1 territory, the kind of large language model that would normally require serious cloud infrastructure.
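A rough sanity check on those figures: a 200-billion-parameter model only fits in 128GB of memory if the weights are quantized, and 4 bits per parameter is the assumption these capacity claims are typically quoted at (Lenovo doesn’t spell out the precision, so treat the 4-bit figure as an assumption):

```python
# Back-of-the-envelope check on the parameter-count claims.
# Assumes 4-bit quantized weights (0.5 bytes per parameter) -- an
# assumption, not a figure from Lenovo's spec sheet.
BYTES_PER_PARAM = 0.5   # 4-bit quantization
UNIFIED_MEMORY_GB = 128

def weights_gb(params_billion):
    """Approximate weight footprint in GB for a quantized model."""
    return params_billion * BYTES_PER_PARAM

single = weights_gb(200)   # 100.0 GB -> fits in one 128 GB unit
paired = weights_gb(405)   # 202.5 GB -> fits across two linked units
print(single, paired)
```

Under that assumption the numbers line up: 200B parameters take roughly 100GB on a single unit, and 405B parameters take about 202GB, which two linked units can hold between them.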

Lenovo ships this ready to roll with NVIDIA DGX OS and the full AI software stack pre-installed. PyTorch, Jupyter Notebooks, CUDA 13, TensorRT, and AI Workbench are all there out of the box. You can literally plug it in, power it up, and start coding within minutes instead of spending your first day wrestling with dependencies.
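If you want to confirm the stack is alive after first boot, a minimal sanity check might look like this (a sketch assuming the bundled PyTorch build; it degrades gracefully on a machine without torch or a GPU):

```python
# First-boot sanity check for the pre-installed AI stack (a sketch).
try:
    import torch  # ships pre-installed on DGX OS per Lenovo's spec sheet
    gpu = torch.cuda.is_available()
    msg = f"PyTorch {torch.__version__}, CUDA available: {gpu}"
    if gpu:
        # Run a small matmul to confirm the GPU actually executes work.
        x = torch.randn(1024, 1024, device="cuda")
        msg += f", matmul OK: {tuple((x @ x).shape)}"
except ImportError:
    msg = "PyTorch not installed"
print(msg)
```

On the PGX this should report the bundled PyTorch version with CUDA available; anywhere else it simply tells you what’s missing.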

The back panel packs three USB-C ports running USB4 at 20Gbps each, an HDMI 2.1a port for driving up to four displays total, 10 Gigabit Ethernet, and those QSFP ports for linking units together. Wi-Fi 7 and Bluetooth 5.3 cover wireless needs.

Who This Makes Sense For

Let’s cut through the hype. Most people don’t need this. If you’re experimenting with ChatGPT or running the occasional Stable Diffusion model, your laptop will serve you just fine.

This machine is aimed at AI developers who are tired of watching cloud bills pile up, researchers working with data that legally cannot leave their facility, students learning machine learning without wanting to mortgage their future, and teams that need to prototype AI applications locally before deploying them at scale.

The privacy angle matters more than you might think. Healthcare data, aerospace designs, financial models: anything in a regulated industry where data sovereignty is a compliance requirement can’t just live on AWS or Azure without serious legal headaches. The ThinkStation PGX keeps everything local. Your training data, your fine-tuned models, your proprietary IP all stay right there on your desk where you can see them. Plus, no network latency, no waiting for API responses, no surprise cloud bills.

Lenovo’s asking $3,800 for the 1TB model and around $4,300 for the 4TB version. That’s real money, but consider the alternative: renting equivalent GPU time in the cloud for a few months will cost you the same amount, and that’s recurring. Every month, another bill.


With the ThinkStation PGX, you pay once and then use it as much as you want. The break-even math gets interesting fast. If your current cloud AI costs run $1,000 a month, you’ve paid for this machine in four months. Everything after that is essentially free compute time.
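The break-even arithmetic is simple enough to write down (the $1,000/month figure is the article’s example; plug in whatever your actual cloud bill is):

```python
import math

def break_even_months(price_usd, monthly_cloud_usd):
    """Months of cloud spend needed to cover the one-time purchase price."""
    return math.ceil(price_usd / monthly_cloud_usd)

# $3,800 base model vs. a $1,000/month cloud bill:
print(break_even_months(3800, 1000))  # -> 4
```

At $3,800 against $1,000 a month, the machine pays for itself in four months; the 4TB model takes about a month longer.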

Now, the limitations. This machine runs ARM architecture, not x86. For AI work, that’s totally fine because CUDA and all the major AI frameworks work great on ARM. But if you need x86-specific software for other parts of your workflow, you’ll need a separate machine. And if you’re just getting started with AI or running small models for personal projects, this is massive overkill. Cloud notebooks or a decent laptop with a modern GPU will serve you better and cost way less.

Why This Matters

What’s interesting about the ThinkStation PGX isn’t just the hardware specs or the price point. It’s what it represents: the return of local compute for AI workloads.

For the past few years, the narrative has been “everything moves to the cloud.” And for a lot of use cases, that makes sense. But AI development has specific needs around data privacy, latency, and cost predictability that cloud solutions don’t always address well. Lenovo and NVIDIA are making a bet that there’s a real market for developers who want to prototype locally, keep their data under direct control, and not worry about variable cloud costs. Based on conversations with AI developers, they might be right. The complaints about cloud bills and data sovereignty issues come up constantly.

Whether this particular model becomes the standard or just opens the door for competitors to build similar systems, it’s pushing the industry toward giving developers more choices in how and where they do their work.

Price: $3,800-$4,309
Availability: Now shipping (7-19 day delivery)
Where to buy: Lenovo ThinkStation PGX
