Jobs.ca

Principal Artificial Intelligence Engineer

Desjardins · about 15 hours ago
Verified
Hybrid
Toronto, ON
CA$103,200 - CA$192,000/year
Senior Level
Full-time

Top Benefits

Health insurance
Tuition reimbursement
Accident and life insurance

About the role

Application Deadline: 11/29/2025

Address: 100 King Street West

Job Family Group: Data Analytics & Reporting

We are back in the office 2-4 days/week! This role is not remote/virtual.

The Team

We accelerate BMO’s AI journey by building enterprise-grade, cloud-native AI solutions. Our team combines engineering excellence with cutting-edge AI to deliver scalable, secure, and responsible solutions that power business innovation across the bank. We enable and accelerate our partners on their AI journeys across the enterprise, helping teams across BMO unlock value at scale.

We support one another in times of need and take pride in our work. We are engineers, AI practitioners, platform builders, thought leaders, multipliers, and coders. Above all, we are a global team of diverse individuals who enjoy working together to create smart, secure, and scalable solutions that make an impact across the enterprise.

Our ambition is bold: deploy our capital and resources to their highest and most profitable use through a digital-first operating model, powered by data and AI-driven decisions.

The Impact

As a Principal Cloud AI Engineer, you are a hands-on technical developer who designs, builds, and scales cloud-native AI solutions and products. You help set engineering standards, establish patterns, mentor senior engineers, and partner with multiple teams to deliver resilient, governed, and cost-efficient AI at enterprise scale.

You’ll help shape and evolve our AI cloud strategy, from model serving and LLMOps to security, observability, and compliance, so that teams across the bank can innovate safely and rapidly.

You will advance BMO’s Digital First strategy by:

  • Defining reference and production-grade solutions for AI/GenAI on the cloud (Azure preferred; multi-cloud awareness a bonus).
  • Building reusable, secure, and observable components (APIs, SDKs, microservices, pipelines).
  • Operationalizing LLMs and RAG with strong controls and Responsible AI guardrails.
  • Driving platform roadmaps that enable faster delivery, lower risk, and measurable business outcomes.

What’s In It for You

  • Influence the technical direction of enterprise AI and the platform primitives others build on.
  • Ship high-impact systems used across many business lines and products.
  • Work across the full stack: cloud infra, data/feature pipelines, model serving, LLMOps, and DevSecOps.
  • Partner with a leadership team invested in your growth and thought leadership.

Responsibilities

Product Builder

  • Build and operate AI/ML cloud-native systems: frontend, backend, integration to other systems, feature stores, training/serving infra, vector databases, model registries, CI/CD, canary/blue-green, and GitOps for AI.
  • Technical cloud-native implementation of ML/LLM observability (latency, cost, drift, hallucination/guardrails, quality & safety metrics), logging/tracing (OpenTelemetry), and SLOs/SLIs for production AI systems.
  • Design and implement robust CI/CD pipelines for AI/ML workloads using GitHub Actions and Azure DevOps, including automated testing, model validation, security scanning, model versioning, and blue/green or canary deployments to ensure safe, repeatable, and auditable releases.
  • Drive FinOps for AI/GPU workloads (rightsizing, autoscaling, spot, caching, inference optimization).
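The canary/blue-green deployment bullets above boil down to a promotion gate: compare the canary's health against the stable baseline before shifting full traffic. A minimal sketch follows; the metric names and thresholds are illustrative assumptions, not BMO's actual release criteria.

```python
# Minimal canary promotion gate: compare a canary's error rate and p95
# latency against the stable baseline before shifting full traffic.
# Thresholds here are illustrative, not real release criteria.
from dataclasses import dataclass


@dataclass
class ReleaseMetrics:
    requests: int
    errors: int
    p95_latency_ms: float

    @property
    def error_rate(self) -> float:
        return self.errors / self.requests if self.requests else 0.0


def promote_canary(baseline: ReleaseMetrics, canary: ReleaseMetrics,
                   max_error_delta: float = 0.005,
                   max_latency_ratio: float = 1.10) -> bool:
    """Return True if the canary is safe to promote to 100% traffic."""
    error_ok = canary.error_rate <= baseline.error_rate + max_error_delta
    latency_ok = canary.p95_latency_ms <= baseline.p95_latency_ms * max_latency_ratio
    return error_ok and latency_ok


baseline = ReleaseMetrics(requests=10_000, errors=20, p95_latency_ms=420.0)
print(promote_canary(baseline, ReleaseMetrics(500, 2, 440.0)))   # True
print(promote_canary(baseline, ReleaseMetrics(500, 12, 900.0)))  # False
```

In a real pipeline this gate would read metrics from the observability stack and run inside the CI/CD workflow rather than on hard-coded numbers.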

Strategy

  • Help evolve the cloud AI reference design (networking, security, data, serving, observability) for ML/GenAI workloads (batch, streaming, online) with HA/DR, multi-region patterns, and cost efficiency.
  • Work on standards and best practices for containerization, microservices, serverless, event-driven design, and API management for AI systems.

GenAI & LLMOps

  • Architect RAG systems (chunking, embeddings, vector stores, grounding, evaluation) and guardrail frameworks (prompt/content safety, PII redaction, jailbreak & injection defenses).
  • Lead model serving (LLMs and traditional ML) using performant runtimes (e.g., TensorRT-LLM, vLLM, Triton/KServe) and caching strategies; optimize token usage, throughput, and cost.
  • Guide fine-tuning/PEFT/LoRA strategies, evaluation frameworks (offline/online A/B), and safety/quality scorecards; standardize prompt libraries and prompt engineering patterns.
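The RAG bullet above (chunking, embeddings, vector stores, grounding) can be reduced to a toy retrieval core. In this sketch a bag-of-words vector stands in for a learned embedding model, and a list stands in for a vector database; a production system would use neither.

```python
# Toy sketch of the retrieval core of a RAG system: chunk documents,
# embed chunks, and return the top-k chunks for a query by cosine
# similarity. A bag-of-words Counter stands in for a real embedding
# model; a plain list stands in for a vector database.
import math
from collections import Counter


def chunk(text: str, size: int = 40) -> list[str]:
    """Naive fixed-size word chunking; real systems respect structure."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def embed(text: str) -> Counter:
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]


docs = [
    "Model registries track versions and lineage of trained models.",
    "Vector stores index embeddings for fast similarity search.",
    "GPU autoscaling keeps inference cost under control.",
]
print(retrieve("how do vector stores search embeddings", docs, k=1))
```

The retrieved chunks would then be injected into the prompt (grounding), with an evaluation harness scoring answer faithfulness against them.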

Security, Risk & Governance

  • Implement defense-in-depth: IAM least privilege, private networking, KMS/Key Vault, secrets mgmt, image signing/SBOM, policy-as-code (OPA/Azure Policy), and data sovereignty controls.
  • Embed Responsible AI: model documentation, lineage, explainability, fairness testing, and human-in-the-loop patterns; align to model risk management and audit needs.
  • Ensure regulatory and privacy compliance (e.g., PII handling, encryption in transit/at rest, approved data sources, retention & residency).
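The PII-redaction guardrail mentioned above can be sketched as a masking pass over text before it leaves a controlled boundary. The two patterns here are deliberately simplified assumptions; production pipelines typically rely on dedicated PII-detection services, not a pair of regexes.

```python
# Illustrative PII-redaction guardrail for prompts/outputs: mask email
# addresses and North American phone numbers before text leaves a
# controlled boundary. Patterns are simplified for illustration.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b(?:\+1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")


def redact_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text


print(redact_pii("Reach me at jane.doe@example.com or 416-555-0199."))
# → "Reach me at [EMAIL] or [PHONE]."
```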

Delivery & Operations

  • Lead complex discovery and solution design with stakeholders; build strong business cases (value, feasibility, ROI).
  • Oversee production readiness and operate platforms with SRE principles (SLOs, error budgets, incident response, chaos testing, playbooks).
  • Mentor engineers; multiply team impact via reusable components, templates, and inner-source.
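The SRE bullet above (SLOs, error budgets) rests on simple arithmetic: an availability SLO over a window implies a fixed downtime budget, and incidents burn fractions of it. A sketch with illustrative numbers:

```python
# Sketch of the SLO / error-budget arithmetic behind operating with
# SRE principles: an availability SLO over a window implies a fixed
# downtime budget; incidents consume fractions of that budget.
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total allowed downtime (minutes) for the window under the SLO."""
    return (1.0 - slo) * window_days * 24 * 60


def budget_consumed(slo: float, downtime_minutes: float,
                    window_days: int = 30) -> float:
    """Fraction of the error budget consumed by observed downtime."""
    return downtime_minutes / error_budget_minutes(slo, window_days)


# A 99.9% SLO over 30 days allows ~43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))  # 43.2
# A 20-minute incident burns ~46% of that budget.
print(round(budget_consumed(0.999, 20), 2))   # 0.46
```

When the consumed fraction approaches 1.0, SRE practice is to freeze risky releases until the budget recovers.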

Qualifications

Must Have

  • Bachelor’s, Master’s, or PhD in Computer Science, Engineering, Mathematics, or related field (or equivalent experience).
  • 7+ years building large-scale distributed cloud systems; 5+ years hands-on with cloud (Azure preferred; AWS/GCP nice to have).
  • Proven experience designing and operating production ML/GenAI systems (training, serving, monitoring) and shipping AI features at scale on cloud.
  • Strong software engineering in Python (and one of Go/Java/TypeScript); deep expertise with APIs, async patterns, and performance optimization.
  • Hands-on with MLOps/LLMOps: MLflow, KServe/Triton, Feast/feature stores, vector DBs (e.g., FAISS, Milvus, Pinecone, pgvector, Cosmos DB with vectors), orchestration (Airflow/Prefect), and CI/CD for ML (GitHub Actions/Azure DevOps).
  • Cloud-native stack: Kubernetes (AKS/EKS), containers, service mesh/ingress, serverless (Azure Functions/Lambda), IaC (Terraform/Bicep), secrets & key management, VNet/Private Link/peering.
  • GenAI production experience: RAG, evaluation, prompt engineering, fine-tuning/PEFT/LoRA, and integration with providers (e.g., Azure OpenAI/OpenAI, Anthropic, Google, open-source models via Hugging Face).
  • Excellent communication; ability to influence across engineering, product, security, and risk.

Nice to Have

  • GPU systems & inference optimization (CUDA/NCCL, TensorRT-LLM, vLLM, TGI); Ray/Spark/Databricks for distributed training/inference.
  • Observability: Prometheus/Grafana, OpenTelemetry, ML observability (e.g., WhyLabs, Arize), data quality (Great Expectations).
  • Event streaming and real-time systems (Kafka/Event Hubs), micro-batching, CQRS.
  • Search & knowledge systems (Elastic, OpenSearch, Knowledge Graphs).

Tech You’ll Use (Illustrative)

  • Cloud & Infra: Azure (AKS, Functions, App Service, Event Hubs, API Management, Key Vault, Private Link, Monitor), Terraform/Bicep, GitHub Actions/Azure DevOps.
  • AI/ML: Python, PyTorch, ONNX, MLflow, Hugging Face, LangChain/LangGraph, OpenAI/Azure OpenAI, Anthropic, vector DBs (FAISS/Milvus/Pinecone/pgvector/Cosmos DB vectors).
  • Serving & Ops: KServe/Triton, vLLM/TensorRT-LLM, Prometheus/Grafana, OpenTelemetry, Great Expectations, ArgoCD/GitOps, OPA/Azure Policy.
  • Data & Orchestration: Spark/Databricks, Ray, Airflow/Prefect, Kafka/Event Hubs, Feast/feature store patterns.

How You’ll Measure Success

  • Reliability & Performance: SLOs met for AI services (latency, availability, quality); scalable throughput and GPU/infra efficiency.
  • Security & Compliance: Zero critical findings; auditable lineage and model documentation; RAI controls consistently applied.
  • Developer Velocity: Time-to-first model and time-to-production reduced via reusable components and golden paths.
  • Business Impact: Clear ROI, adoption across lines of business, measurable customer/employee experience improvements.
  • Technical Leadership: Mentorship, architectural influence, and uplift across teams; strong cross-functional partnerships.

Notes

  • Additional responsibilities may be assigned based on your career growth ambitions and evolving enterprise needs.
  • This is a senior individual-contributor technical leadership role (Principal), driving impact through architecture, code, and influence rather than direct line management.

About Desjardins

Banking
10,000+

Desjardins Group is the largest cooperative financial group in North America and the fifth largest cooperative financial group in the world, with assets of $435.8 billion as at March 31, 2024. It was named one of Canada's Best Employers by Forbes magazine and by Mediacorp. To meet the diverse needs of its members and clients, Desjardins offers a full range of products and services to individuals and businesses through its extensive distribution network, online platforms and subsidiaries across Canada. Ranked among the world's strongest banks by The Banker magazine, and first by Bloomberg News, Desjardins has some of the highest capital ratios and credit ratings in the industry.