Calgary's Data Centre Moment
Off-grid power, liquid cooling built here, and why data sovereignty matters: takeaways from our sold-out Tech Thursday panel.
Last week, we hosted a SOLD OUT panel on ‘The Tech Behind Data Centres’ with our friends at the Western Canada Data Centre Alliance. The key areas we talked about were:
Why hyperscalers flipped from ‘never natural gas’ to building gas plants beside their data centres.
How companies like VoltaGrid provide off-grid power generation for gigawatt-scale sites, avoiding seven-year wait times with the local utility.
The importance of building data-sovereignty capability in Canada.
Liquid cooling technology and the push to 100% heat capture.
We spent $1.3T just maintaining our grids over 20 years. Now we need 50% growth and nobody knows the bill.
Data centres in space, and how liquid cooling would work in zero gravity.
Our speakers were:
Thomas Farran, VP at Longbow Capital
Longbow is an investor in VoltaGrid, a company that just signed a 2.3GW deal with Oracle.
Benjamin Sutton, Product Marketing Manager at CoolIT
CoolIT was acquired last week for $4.75B, the city’s largest tech deal ever.
Joe Gentile, Account Executive at eStruxture Data Centers
eStruxture is Canada's largest Canadian-owned data centre provider.
Moderated by: Vlad Oujegov, Founder of Western Canada Data Centre Alliance
→ Listen on YouTube, Spotify, and Apple
We covered (w/ timestamps):
01:05 - Introduction of the panel
01:31 - Discussion on increasing rack density and Alberta's emergence as a "frontier market".
04:05 - The massive scale of 2-gigawatt projects, unprecedented in Canadian history.
06:01 - "Colocation" and eStruxture's role as a Canadian-owned operator.
10:52 - VoltaGrid and the transition from powering fracking sites to data centres.
16:01 - Liquid cooling technology and its necessity for modern AI chips.
21:47 - The "Tier" rating system for site reliability.
24:47 - "Bring your own power" theme and the challenges of grid interconnection.
35:08 - "Sovereign compute" and data residency for Canadian companies.
40:18 - The economic "boom" associated with large infrastructure builds.
45:16 - Rapid-fire round on tech trends and challenges for the industry.
48:36 - Audience Q&A covering heat reuse and space-based facilities.
1. Why Hyperscalers Flipped from ‘Never Natural Gas’ to Building Gas Plants Beside Their Data Centres
Three years ago, hyperscalers wouldn’t touch natural gas. Now it’s the consensus bridge fuel until nuclear comes online in the 2030s. The shift happened because grid power simply can’t scale fast enough and the demand for AI compute can’t wait.
Vlad (WCDCA) put it bluntly:
“I remember talking to a bunch of hyperscalers about three years ago, and most of them were saying, ‘We’re never gonna do natural gas. We’re not interested. Stop talking to us about it.’ And then a switch flipped.”
Tom Farran (Longbow Capital) added the context:
“Five years ago, natural gas power was legacy. It was going the way of the dodo. I don’t think that’s true anymore. It’s the most reliable and the most cost effective way that you can get power within the next one to two years.”
2. VoltaGrid: Off-Grid Power Generation to Avoid 7-Year Utility Wait Times
When you need a gigawatt of power, you can’t just call the local utility. The infrastructure upgrades alone take 5-7 years. VoltaGrid, one of Longbow Capital’s investments, spotted the opportunity to bring mobile, natural gas-powered generation directly to data centre sites, and the results have been staggering. Tom explained:
“If you go to the utility today, it’ll take you five to seven years to get actually plugged into the grid... natural gas becomes kind of the easiest way to do it. And Alberta is a beautiful place for that because we have the skills, we have the resource, and we have the people that know how to manage all of that.”
VoltaGrid started by powering fracking operations with trailer-mounted generators and pivoted when they realized the power profile of a frac spread was eerily similar to that of an AI data centre. They now have 5 gigawatts under contract, roughly five times the power demand of Calgary and half that of the entire Alberta grid.
Brought to you by:
Boast - Are you leaving innovation capital on the table? Boast combines AI-powered automation with specialized tax expertise to help tech companies maximize R&D credit returns. Join 2,000+ businesses who've secured $675M+ in funding. Get your free assessment at boast.ai
3. The Importance of Building Data-Sovereignty Capability in Canada
With geopolitical tensions rising, the question of who controls Canadian data has moved from niche policy debate to nightly news. Any American company operating a data centre in Canada falls under the Patriot Act, meaning the US government can demand access to that data. Joe Gentile (eStruxture) didn't mince words:
“Any American company that operates a data centre in country here is under the Patriot Act. So effectively what that means is the American government can come up and say, give me your data, all of it... Data sovereignty allows Canadian companies to land here under a Canadian brand without the American government looking in.”
eStruxture is Canada’s largest Canadian-owned data centre company, operating exclusively in this country, with facilities in Montreal, Toronto, Calgary, and Vancouver. Their new Calgary build will be the largest in Western Canada at 90 megawatts, and they expect it to sell out within two years.
4. Liquid Cooling Technology and the Push to 100% Heat Capture
Since around 2023, Nvidia's top AI chips can no longer be air-cooled. Period. That's made liquid cooling mandatory, and Calgary-based CoolIT has been preparing for this moment for 25 years.
Ben Sutton (CoolIT) explained the shift:
“The chips from Nvidia since kind of 2023, they are only able to be liquid cooled. You can no longer cool them with air... This liquid cooling was no longer optional. It was no longer the most efficient way to do it. You had to do it. You have no choice.”
The industry is now pushing toward 100% heat capture, completely fanless servers where all heat is removed by liquid. The efficiency gains are massive: PUE (power usage effectiveness) is dropping from a historical 1.6 down toward 1.1, meaning less than 10% of power goes to cooling. As Ben put it: “Air is an insulator, right? What do we use in all our buildings? It’s not great for cooling. Liquid is kind of the direction of travel for the future.”
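The PUE arithmetic above is worth making concrete. A minimal sketch (hypothetical helper names; assuming the standard definition of PUE as total facility power divided by IT power):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.
    A perfect facility (zero cooling/overhead power) would score 1.0."""
    return total_facility_kw / it_load_kw

def overhead_fraction(pue_value: float) -> float:
    """Fraction of total facility power spent on non-IT overhead
    (cooling, power conversion, lighting)."""
    return 1 - 1 / pue_value

# A legacy air-cooled facility at PUE 1.6 vs a liquid-cooled one at 1.1:
for p in (1.6, 1.1):
    print(f"PUE {p}: {overhead_fraction(p):.0%} of facility power is overhead")
```

At PUE 1.6, roughly 38% of every watt entering the building never reaches a chip; at 1.1, overhead falls to about 9%, which is the "less than 10%" figure from the panel.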
CoolIT currently cools 7 of the top 10 supercomputers in the world. In 2023, KKR bought CoolIT for $270M; last week, it was announced that KKR sold CoolIT for $4.75B. The biggest tech deal in Calgary… ever. They now have over 140 open positions as their revenue and headcount double year over year.
5. We Spent $1.3T Just Maintaining Our Grids Over 20 Years. Now We Need 50% Growth. And Nobody Knows the Bill.
This was perhaps the most sobering stat of the evening. Our power grids barely grew between 2000 and 2020, yet we still spent $1.3 trillion just keeping them running. Now we need at least 50% growth in the next decade to meet data centre and electrification demand. Tom laid it out:
“From 2000 to 2020, our power grids did not grow, or maybe fractionally. We still spent $1.3 trillion maintaining them through that period. Now we’re gonna [need] basically at least 50% growth through the next 10 years. I don’t know how much we’re gonna spend on that.”
The solution isn’t simple. Utilities worry that ratepayers will foot the bill for gigawatt-scale data centre connections. The panel discussed hybrid approaches: dual-track systems where onsite generation can feed the grid during off-peak times and power the data centre otherwise, benefiting both the operator and the community.
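The dual-track idea the panel described can be sketched as a simple dispatch rule. This is a hypothetical illustration (the function, dataclass, and numbers are mine, not any panelist's): onsite generation serves the data centre first, and surplus is exported to the grid only during off-peak windows.

```python
from dataclasses import dataclass

@dataclass
class DispatchPlan:
    to_datacentre_mw: float
    to_grid_mw: float

def dispatch(gen_capacity_mw: float, dc_load_mw: float, off_peak: bool) -> DispatchPlan:
    """Serve the data centre first; export any surplus to the grid only off-peak."""
    to_dc = min(gen_capacity_mw, dc_load_mw)
    surplus = gen_capacity_mw - to_dc
    to_grid = surplus if off_peak else 0.0
    return DispatchPlan(to_dc, to_grid)

# Off-peak, a 1,000 MW plant serving a 700 MW data centre exports the
# 300 MW surplus to the grid; on-peak, the surplus stays idle (or ramps down).
plan = dispatch(1000, 700, off_peak=True)
```

The appeal of this arrangement is exactly what the panel flagged: the operator monetizes spare capacity, the community gets extra supply, and ratepayers aren't stuck funding a gigawatt interconnect they never use.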
6. Data Centres in Space? How Liquid Cooling Would Work in Zero Gravity
The final audience question went big: with Nvidia announcing chip classes designed for space-based data centres, is this the future? Ben brought the reality check:
“A vacuum is way worse [than air as an insulator]. Yes, there’s a lot of solar power there to use, but dissipating that heat is gonna be a nightmare... if data centres are gonna be adopted in space, they’re gonna have to be lower powered chips or larger chips. They’re gonna have to use different cooling technologies and they’re gonna need absolutely enormous radiators to dissipate the heat.”
He noted that two-phase (evaporative) cooling, which is passive and doesn’t rely on pumps, is likely the path forward for space applications, since a failed pump in orbit isn’t exactly a quick repair job. It’s a fascinating thought exercise, but the panel’s consensus was clear: there’s plenty of frontier territory left on Earth before we need to look up.