Jobs.ca

Senior DevOps / MLOps Engineer

Remote
Senior Level
Contract

About the role

  • Contract
  • Remote (within Canada)
  • Posted on March 27, 2026

Tri-global Solutions Group Inc.

SENIOR DEVOPS / MLOPS ENGINEER

**Requisition #:** R26-3474 (RQ00613)

**Location:** Remote (within Canada)

**Engagement Type:** Contract

**Number of Resources required:** 1

**Term:** 2026-04-13 to 2026-12-18 with possible extension

**Rate (Daily):** Up to $725.00 per diem (equivalent to $100.00/hour) / Commensurate with related experience and market competitiveness

**Hours per day:** 7.25

**Security Screening:** Standard (Criminal Record Check)

————————————————————————

Tri-global Solutions Group Inc. is seeking one (1) Senior DevOps / MLOps Engineer to join our talented Service Delivery team with Supply Ontario.

WORK MODEL: The successful candidate will work remotely. Due to network and data security policies, all work must be performed from within Canada. Applicants must be authorized to work in Canada (e.g., Canadian citizens, permanent residents).

Please review the project overview, description of services, and requirements below. If you meet the requirements and are interested in submitting for this role, please reply to this job posting.

If you know other consultants who may be interested in this opportunity kindly share this job posting.

Thank you.

Tri-global Solutions Group Inc.

Website: https://tri-global.com

————————————————————————

PROJECT OVERVIEW

We are seeking a highly skilled Senior DevOps / MLOps / Data Engineer to lead platform engineering, deployment automation, and AI/ML model deployment on Azure. This role emphasizes DevOps excellence, secure and cost-efficient deployments, and production-grade AI delivery, while supporting ML lifecycle management and scalable data engineering solutions.

**Experience:** 10+ years in IT, 5+ years in DevOps / Data Engineering / MLOps

DESCRIPTION OF SERVICES

DevOps, Deployment & MLOps (Primary Focus):

  • Design and implement robust CI/CD pipelines using Azure DevOps, Git, and YAML
  • Manage end-to-end deployment pipelines across Dev, QA, and Prod environments
  • Lead deployment of AI/ML models into production using automated pipelines
  • Deploy, version, and manage AI models (batch and real-time inference)
  • Implement model lifecycle management (training, validation, deployment, monitoring)
  • Automate infrastructure provisioning using ARM/Bicep/Terraform
  • Manage Azure resources via Azure Portal, CLI, and scripting
  • Oversee cluster management (Databricks clusters, scaling, performance tuning)
  • Implement release strategies, versioning, rollback, and environment parameterization
  • Develop scalable inference endpoints and API-based model serving
  • Integrate AI services such as Azure AI Search and Azure AI Foundry
  • Integrate and deploy services using REST APIs
  • Monitor model performance, drift, and retraining strategies
  • Ensure platform reliability, monitoring, and incident response
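As an illustration of the kind of YAML-based, multi-stage Azure DevOps pipeline described above, here is a minimal sketch. All names (stages, environments, scripts, artifact paths) are hypothetical; a real pipeline would add approvals, parameterization per environment, and rollback handling.

```yaml
# Minimal multi-stage Azure DevOps pipeline sketch: build/train, then
# promote the model artifact through Dev and Prod deployment stages.
# Stage, environment, and script names are illustrative only.
trigger:
  branches:
    include: [main]

stages:
  - stage: Build
    jobs:
      - job: TrainAndPackage
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: pip install -r requirements.txt && python train.py
            displayName: Train and validate model
          - publish: $(System.DefaultWorkingDirectory)/model
            artifact: model

  - stage: DeployDev
    dependsOn: Build
    jobs:
      - deployment: DeployModelDev
        environment: dev          # gates and approvals configured on the environment
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: model
                - script: python deploy.py --target dev
                  displayName: Deploy model to Dev endpoint

  - stage: DeployProd
    dependsOn: DeployDev
    condition: succeeded()
    jobs:
      - deployment: DeployModelProd
        environment: prod
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: model
                - script: python deploy.py --target prod
                  displayName: Deploy model to Prod endpoint
```

The deployment-job form (rather than plain jobs) is what lets environment-level checks, history, and rollback tracking apply to each promotion.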

Data Engineering:

  • Design and build data pipelines using Azure Data Factory, Databricks, and ADLS
  • Develop and optimize data models in Azure SQL, SQL Server, and Oracle
  • Implement ETL/ELT processes for large-scale data processing
  • Ensure data quality, governance, and performance optimization
  • Support medallion architecture (Bronze, Silver, Gold layers)
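The medallion layering mentioned in the last bullet can be sketched in plain Python. In practice this would be Databricks/PySpark jobs over ADLS Delta tables; the record shapes, field names, and rules below are illustrative assumptions, not the client's actual schema.

```python
# Plain-Python sketch of a medallion (Bronze -> Silver -> Gold) flow.
# Real implementations would use Databricks/PySpark over ADLS; the
# fields and cleansing rules here are hypothetical.

# Bronze: raw records exactly as ingested, kept for auditability.
bronze = [
    {"order_id": "A1", "amount": "19.99", "region": "on"},
    {"order_id": "A1", "amount": "19.99", "region": "on"},   # duplicate
    {"order_id": "B2", "amount": "bad",   "region": "QC"},   # malformed amount
    {"order_id": "C3", "amount": "42.50", "region": "bc"},
]

def to_silver(rows):
    """Silver: validated, deduplicated, standardized records."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])      # reject malformed amounts
        except ValueError:
            continue
        if r["order_id"] in seen:            # drop duplicate order ids
            continue
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"],
                    "amount": amount,
                    "region": r["region"].upper()})
    return out

def to_gold(rows):
    """Gold: business-level aggregate ready for reporting."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)
```

The point of the layering is that each stage is reproducible from the one below it: Bronze is never mutated, so Silver and Gold can be rebuilt when cleansing rules change.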

Security & Compliance:

  • Implement secure cloud architecture using RBAC, Managed Identities, and Azure Key Vault
  • Secure data pipelines, storage, and ML endpoints (encryption, network controls, private endpoints)
  • Ensure compliance with data protection standards (PII handling, auditability, governance)
  • Manage secrets, credentials, and access policies across environments

Cost Optimization:

  • Optimize cloud costs across Databricks, storage, and compute resources
  • Implement cluster right-sizing, auto-scaling, and job vs. all-purpose cluster strategies
  • Monitor usage and enforce cost governance across environments
  • Recommend architecture improvements for cost-performance balance

API & Integration:

  • Design and build scalable REST API layers for data access and model inference
  • Develop and manage API-based integration patterns across internal systems and external vendors
  • Enable real-time and batch integration with downstream applications
  • Implement API security (authentication, throttling, versioning)
  • Support integration with event-driven systems and messaging frameworks
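As one example of the API-security work above, throttling is commonly implemented as a token bucket in front of the endpoint. The sketch below is a minimal, assumption-laden illustration (capacity and refill rate are arbitrary; a production setup would more likely use API Management policies or a gateway-level limiter).

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter of the kind used to throttle an
    API endpoint. Capacity and refill rate here are illustrative."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # caller would respond 429 Too Many Requests

bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(5)]
print(results)   # burst of 3 allowed; later calls rejected until tokens refill
```

A per-client bucket (keyed by API key or caller identity) is the usual next step, so one noisy consumer cannot starve the others.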

MANDATORY REQUIREMENTS (Candidate must meet all requirements below)

– 10+ years of experience designing and implementing robust CI/CD pipelines using Azure DevOps, Git, and YAML
– 10+ years of experience leading deployment of AI/ML models into production using automated pipelines
– 10+ years of experience implementing model lifecycle management (training, validation, deployment, monitoring)
– 10+ years of experience designing and building data pipelines using Azure Data Factory, Databricks, and ADLS

DESIRABLE REQUIREMENTS (Nice to haves)

– Experience with Azure AI Search and Azure AI Foundry
– Experience with event-driven architecture (Event Grid, Service Bus)
– Exposure to streaming platforms (Kafka, Event Hubs)
– Knowledge of containerization (Docker, Kubernetes, AKS)
– Experience with LLMs / Generative AI pipelines and prompt orchestration
– Familiarity with data governance and medallion architecture

SKILL SET AND EVALUATION CRITERIA

Required Skills & Qualifications:
– 10+ years of IT experience with 5+ years in DevOps / MLOps / Data Engineering
– Strong expertise in:

  • Azure ecosystem (ADF, Databricks, ADLS, Azure SQL)
  • CI/CD pipelines, Git, and YAML-based deployments
  • Infrastructure as Code (ARM, Bicep, Terraform)
  • SQL and relational databases (Azure SQL, Oracle)
  • REST API development and integration

– Proven experience in:

  • Deploying AI/ML models to production environments
  • End-to-end deployment pipelines and release management
  • Cluster management and optimization

Nice to Have / Bonus:
– Experience with Azure AI Search and Azure AI Foundry
– Experience with event-driven architecture (Event Grid, Service Bus)
– Exposure to streaming platforms (Kafka, Event Hubs)
– Knowledge of containerization (Docker, Kubernetes, AKS)
– Experience with LLMs / Generative AI pipelines and prompt orchestration
– Familiarity with data governance and medallion architecture

Soft Skills:
– Strong problem-solving and troubleshooting abilities
– Ability to work across DevOps, Data, and ML teams
– Excellent communication and documentation skills
– Leadership and mentoring capabilities

NOT FOR YOU?

Check out our other opportunities at https://tri-global.com or follow us on LinkedIn. We thank all candidates in advance. Only candidates selected for an interview will be contacted.

WHY WORK WITH TRI-GLOBAL?

– Empower positive change by enabling our clients to revolutionize innovation and technology, elevating them to a higher level of excellence and efficiency.
– Join an exceptional and committed team that redefines the landscape, forging a distinctive path towards success.
– Engage in stimulating and captivating projects that push boundaries and keep you constantly motivated.

To apply for this job, email your details to application-intake+3474@tri-global.com

About Devacor Solutions Group

IT Services and IT Consulting
201-500 employees

Tri-global Solutions Group Inc. (previously Devacor Solutions Group) is a leading IT and Strategic Management Advisory firm, providing high-calibre resources and best-in-class practices to our clients across Canada and internationally. Our history of successful collaborations includes clients from the public and private sectors, from front-line staff to executive management teams.
