Joe Capes didn’t set out to build a data center cooling empire, but his company, LiquidStack, has emerged as one of the key players enabling the next generation of AI infrastructure—trusted by hyperscalers, backed by major investors, and named to NVIDIA’s Recommended Vendor List. In this conversation, which has been lightly edited for space and style, Capes explains why the company chose Texas to double its manufacturing footprint, what makes liquid cooling essential in today’s data center market, and how staying bold—and fast—is key to keeping up.
Area Development (AD): For readers who may not know LiquidStack, give us a quick snapshot of the company and its evolution.
Joe Capes (JC): The company actually began as a Bitcoin mining firm in 2012, founded in Hong Kong. I joined in 2019 as we started to pivot out of crypto—too volatile—and into enterprise-grade liquid cooling for data centers. We spun out from Bitfury, rebranded as LiquidStack, and stood up the new company. We have a holding company in the Netherlands, with subsidiaries in the U.S. and Hong Kong. Since then, we've raised capital from WiWynn (part of Wistron Group), Trane Technologies, and more recently Tiger Global. We started manufacturing in Dallas in early 2024, and with the AI boom, we just announced a second facility nearby to double our production capacity.
AD: That second Carrollton facility is just two blocks from your original one. Why did you choose to expand so close?
JC: We were looking globally—Europe, Southeast Asia—but more than 50% of new AI-related projects are happening in the U.S. We wanted to hedge against geopolitical uncertainty, like potential tariffs. The Dallas-Fort Worth area offered access to skilled labor and engineering talent, plus strong community values. We've built a great relationship with the University of Texas at Arlington, which has a dedicated liquid cooling program for graduate students. Carrollton, specifically, has a deep bench of light industrial space and a very supportive Chamber of Commerce.
AD: Any economic development incentives tied to the expansion?
JC: Not in this case. But we did a very thorough location search, looking at about a dozen cities, including the Dayton-Cincinnati corridor. Ultimately, DFW gave us the strongest pipeline for talent and technical collaboration.
AD: This second facility isn’t just manufacturing—it includes R&D and testing too, right?
JC: Exactly. We named it "Shorthorn," a nod to Longhorn, our first site. It's a power-dense facility where we can do factory witness testing and support advanced R&D. That's critical as our U.S. commercial presence grows. We used to do all R&D in Hong Kong, but the time difference made collaboration with U.S. customers challenging at times. Being closer to major customers and partners here has been a big win.
AD: Let's talk tech. What makes your manufacturing process unique?
JC: We're seeing huge demand for our single-phase CDU-1MW—Coolant Distribution Units that support direct-to-chip liquid cooling. These sit between the white space and the chiller plant. One of our investors now embeds our tech in their CDUs; that interoperability is essential. As cooling technology evolves, data centers can operate with higher inlet liquid temperatures, rejecting heat with water as warm as 40°C—basically hot water. That reduces reliance on evaporative cooling and mechanical refrigeration, which cuts water consumption and improves energy efficiency.
AD: So this is a sustainability play as well?
JC: Absolutely. U.S. data centers will consume over 6 billion liters of water this year for power and cooling. AI is accelerating that, but liquid cooling—especially our tech—can dramatically reduce water use. It also allows data centers to run more efficiently by raising the temperature thresholds for heat rejection.
AD: Is the industry ready to handle this shift to liquid cooling?
JC: There are challenges for sure. Many AI installations are retrofits inside existing air-cooled data centers. These facilities weren't designed for liquid cooling or the power densities we're seeing. NVIDIA Blackwell took peak data center power densities from 17kW per rack to 120kW, and now they're talking about 600kW racks. It's an order of magnitude change in just 12 months. Time to power is another issue: regional constraints on power generation capacity and grid interconnects are slowing the deployment of AI. Finally, demand for power and cooling infrastructure is outstripping the industry's ability to meet required capacities and lead times.
AD: With that kind of demand, how are you managing lead times?
JC: We launched a Quick-Ship program and take calculated risks on long-lead materials to keep our production lines moving, which lets us deliver CDUs much faster than our competition. One recent deal went from initial inquiry to a signed contract in 12 days, and we made our first shipment in just two months. In this market, you have to be agile.
AD: You were recently named to NVIDIA’s Recommended Vendor List. What does that mean for LiquidStack?
JC: That was a big milestone. It signals to operators and hyperscalers that we’re a trusted, vetted supplier. It took a lot of work and close collaboration. That designation has already significantly boosted our sales pipeline.
AD: What’s the biggest lesson you’ve learned about scaling U.S. manufacturing?
JC: Hire the right leadership team, surround it with the best talent you can find, and keep everyone motivated, engaged, and results-focused. This kind of scale-up doesn't happen without a bold, collaborative team. And take smart risks. The pace of change is so fast in this sector—you can't afford to be timid.
AD: What’s next? More expansion ahead?
JC: Definitely. We're evaluating overseas manufacturing, but the global tariff picture is changing quickly, along with the political landscape. We'll always think globally but act locally. And we'll continue building out in the U.S. as long as demand stays this strong.