TELUS' Sovereign AI Factory

Get Started Today

Accelerate your AI innovation with TELUS.
Sovereign. Powerful. Secure.

TELUS and L-SPARK are bringing the capabilities of TELUS' Sovereign AI Factory – Canada’s fastest and most powerful supercomputer – to startups and innovators across the country, addressing Canada’s growing AI compute gap with secure, domestically controlled infrastructure. Built, owned and operated entirely in Canada, the AI Factory provides end-to-end capabilities for model training, fine-tuning and inferencing while ensuring data remains within Canadian borders under Canadian legal jurisdiction.

Through this collaboration, TELUS and L-SPARK will support organizations that require dependable, reserved GPU capability to move from prototype to production—without compromising sovereignty, privacy, or compliance. The initiative is designed for teams with real workloads, clear roadmaps, and a need for predictable access to high-performance compute.

Who this collaboration is for:

AI startups and scaleups building compute-intensive products that need reliable GPU availability for:

  • Training and fine-tuning foundation models and domain-specific models
  • High-throughput inference for production deployments
  • Large-scale experimentation, evaluation, and synthetic data generation
  • Multi-tenant enterprise AI platforms and AI-native SaaS products

Applied research teams and developers who require reserved capacity for sustained work, including:

  • University labs, research institutes, and consortia running long-duration training runs
  • Developers creating advanced AI tools, agentic workflows, and enterprise RAG systems
  • Teams working with sensitive datasets that must remain under Canadian jurisdiction

"Canada has world-class AI talent, founders and research institutions; however, a structural shortage of sovereign domestic compute has limited startups from innovating without sending their sensitive data abroad. By teaming up with L-SPARK, we are leveling the playing field and opening the doors of our Sovereign AI Factory to the country’s founders and innovators who can now build breakthrough AI companies on infrastructure they control – keeping their innovations, intellectual property and competitive advantages in Canada. That's how we build our country’s next generation of AI leaders – kickstarting a new wave of innovation that will fuel economic growth and unlock billions in potential for Canada."

Hesham Fahmy, Chief Information Officer, TELUS

Key Benefits of TELUS' Sovereign AI Factory

Secure AI deployment in Canada

Build, train, scale and deploy AI within Canadian borders, fostering safe and secure AI innovation.

AI-powered innovation and growth

Unlock scalable AI computing power, built on NVIDIA accelerated computing, to streamline development and stay competitive in the evolving AI-driven economy.

Eco-smart AI computing

Experience sustainable AI computing: powered by 99% renewable energy, operating more efficiently than the industry average, and minimizing power consumption for AI workloads.

FAQs about TELUS' Sovereign AI Factory

  • Built for Developers, Optimized for AI: We provide a purpose-built environment designed to accelerate the end-to-end AI lifecycle, giving you control without the infrastructure headache.
  • Instant Prototyping: Spin up Jupyter Notebooks immediately for rapid experimentation and model development.
  • Production-Ready Inference: Leverage NVIDIA NIM (Inference Microservices) to deploy optimized pre-built models, or bring your own open-source models, with one-click inference endpoints.
  • Flexible Orchestration: Architect your way with support for both Virtual Machines (VMs) and managed Kubernetes clusters, allowing for scalable training jobs.
  • Eliminate I/O Bottlenecks: Keep your GPUs fed with WEKA high-speed NVMe storage, specifically engineered to handle the massive throughput required for training large models.
  • Massive Compute Density: Access NVIDIA H200 GPUs interconnected with NVIDIA InfiniBand networking for ultra-low latency distributed training.
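As an illustration of the one-click inference endpoints described above: NVIDIA NIM microservices expose an OpenAI-compatible chat completions API, so a deployed model can be queried with a plain HTTP POST. This is a minimal sketch only; the endpoint URL, model name, and API key below are placeholders, not actual AI Factory values — substitute the details shown in your own deployment.

```python
import json
import urllib.request

# Placeholder endpoint -- replace with the URL of your deployed inference endpoint.
NIM_ENDPOINT = "https://your-endpoint.example.com/v1/chat/completions"


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def send_request(payload: dict, api_key: str) -> dict:
    """POST the payload to the inference endpoint and return the parsed JSON reply."""
    req = urllib.request.Request(
        NIM_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Model name is illustrative; use the model identifier from your deployment.
    payload = build_chat_request("meta/llama-3.1-8b-instruct", "Hello!")
    print(json.dumps(payload, indent=2))
    # reply = send_request(payload, api_key="...")  # uncomment with real credentials
```

Because the endpoint is OpenAI-compatible, existing client libraries and tooling that target that API shape can generally be pointed at the same URL unchanged.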

The collaboration will prioritize organizations in regulated and mission-critical sectors where data residency, auditability, and Canadian legal jurisdiction are essential, including:

  • Healthcare & Life Sciences: medical imaging, genomics, clinical decision support, drug discovery, patient operations
  • Public Sector & Public Services: citizen services, program integrity, digital identity, secure AI modernization
  • Financial Services & Insurance: fraud detection, risk and compliance analytics, sovereign AI copilots
  • Critical Infrastructure & Utilities: grid optimization, predictive maintenance, outage forecasting, operational resilience
  • Telecommunications & Network Intelligence: network optimization, security analytics, AI-enabled service assurance
  • Cybersecurity & Digital Trust: threat detection, anomaly monitoring, secure AI operations for regulated environments
  • Advanced Industry (manufacturing, logistics, resources): computer vision, robotics, planning optimization, digital twins

Simply contact our team at info@l-spark.com to get started. We look forward to hearing from you!

Contact Us

Stay in the know

Sign up for startup news and events. We believe in quality over quantity and promise to never overload your inbox.

Briefly describe your intended use case and models of interest.
