Senior Platform Engineer - Data Platform, Cloud
About the role
CanCap Group Inc. is part of a privately-owned Canadian national financial services company with multiple verticals across automotive, consumer, and merchant lending portfolios. We manage the entire lifecycle of the finance receivable, from credit adjudication through to contract administration, customer service, default management, and post charge-off recoveries. We are a company of innovators: we learn from each other, respect each other, and create together. When it comes to our customers, partners, and each other, we are always motivated by doing the “right thing”. We are always looking for the best people and the right methods to meet this goal, and we look to the future for growth.
What Your Day and Week Could Look Like
Reporting to the Head of Data Platform, the Senior Platform Engineer will be responsible for designing, implementing, and maintaining our Google Cloud-based Databricks environment to support the company’s AI, data analytics, and business intelligence needs.
A key focus of this role is managing Databricks permissions, user access, security policies, and workspace configurations to ensure a well-governed, scalable, and secure data environment on GCP. You will ensure the Databricks infrastructure is properly maintained to support business-critical data operations and our ETL pipelines built with dbt (Data Build Tool). The ideal candidate will own the deployment, configuration, and administration of cloud-native platforms and AI tool integrations, ensuring secure, scalable, and efficient environments for data and analytics workloads.
Key Responsibilities
- Provision and manage Databricks workspaces across development, test, and production environments.
- Set up and configure GCP services, including networking, IAM, and Cloud Storage.
- Implement and maintain IAM policies and role-based access controls across GCP and Databricks environments.
- Support applications and projects with infrastructure design decisions and monitoring solutions.
- Perform complex application programming activities, including coding, testing, debugging, documenting, maintaining, and modifying applications.
- Collaborate with data engineering and security teams to ensure compliance with cloud security best practices and organizational standards.
- Contribute to building AI/ML infrastructure, including feature stores, model training pipelines, and model deployment frameworks.
- Build automation around infrastructure provisioning, CI/CD pipelines, observability, and cost management.
- Ensure platform security, availability, and compliance through robust access control, auditing, and data governance practices.
- Monitor platform usage, optimize performance, and ensure high availability and reliability of cloud environments.
- Automate platform operations using Infrastructure as Code (IaC) tools like Terraform.
- Conduct security reviews, manage secrets, and monitor audit logs for anomalous activity.
- Assist in incident response and troubleshooting across platform layers.
- Maintain documentation related to platform architecture, procedures, standards, configurations, permission structures, and access control policies.
- Develop and enforce best practices for Databricks security, governance, and resource optimization.
- Design and implement scalable Databricks workspaces and clusters, optimizing cost efficiency and performance.
- Participate in design reviews with peers and stakeholders to determine the best Databricks configurations and integrations.
- Monitor and optimize Databricks notebooks, jobs, and workflows to enhance efficiency and reliability.
- Maintain and contribute to documentation and best practices for Databricks environments.
- Troubleshoot and resolve Databricks platform issues, identifying root causes of system performance bottlenecks.
- Create and review technical design documents, understand how the design will be used in the code development process, and facilitate meetings to design, troubleshoot, and execute projects.
What You Bring
- 6+ years of experience in platform administration, engineering, or cloud roles.
- Strong hands-on experience with Databricks setup, provisioning, and workspace administration.
- Solid understanding of Google Cloud Platform (GCP) services and architecture.
- Proven experience in managing IAM (preferably in GCP and Databricks).
- Strong knowledge of cloud security principles, encryption, and compliance standards.
- Expertise in Terraform, CI/CD pipelines, and DevOps practices.
- Proficient in scripting with Python and shell languages such as Bash.
- Excellent problem-solving, communication, and documentation skills.
- Foundational knowledge of common ETL tools.
- Strong expertise in Databricks or other data lakehouse platforms.
- Experience managing user access, permissions, security settings, and governance policies within Databricks.
- Experience working with large-scale data processing, data structures, and algorithms.
- Strong platform engineering background, including experience implementing scalable distributed systems.
- Experience building and maintaining DevOps pipelines with tools such as Jenkins or GitHub Actions.
- Experience with MLOps orchestration tools such as Airflow, Kubeflow, Dagster, Flyte, or Metaflow.
- Experience implementing monitoring solutions to identify system bottlenecks and production issues.
- Hands-on experience building and deploying hybrid environments spanning on-prem and major cloud platforms such as GCP and AWS.
- Strong experience implementing Infrastructure as Code with Terraform, CloudFormation, or Google Cloud Deployment Manager templates.
- Experience working with Databricks, especially Databricks on GCP.
Preferred Qualifications
- GCP certification (e.g., Cloud DevOps Engineer, Cloud Security Engineer).
- Databricks certification.
- Experience with multi-cloud environments.
- Exposure to data engineering pipelines and ML platform management.
- Understanding of networking concepts (VPC, firewall rules, peering, etc.) in GCP.
- Hands-on experience in MLOps, DataOps, or platform SRE practices.
- Knowledge of data governance, lineage, and privacy frameworks (e.g., GDPR, HIPAA).
Nice to Have
- Master's degree in Computer Science or a related technical field.
- Proficiency in performance tuning, large-scale data analysis, and debugging.
What You Can Expect From Us
Our Employee Experience is aimed at supporting and inspiring our talented team through:
- A passionate team dedicated to supporting and empowering others.
- An environment where creative, innovative thinking is encouraged.
- Health and Dental Benefits.
Work Location & Remote Flexibility
- This role follows a hybrid model: employees work from the office 50% of the time, with flexibility to work remotely on the remaining days.
- The company has two office locations:
- Downtown Toronto (Church Street) – The tech team is primarily based here.
- Mississauga – Another office location, but less frequently used by the tech team.
CanCap is an equal opportunity employer and values diversity. We are committed to building and evolving a team reflecting a variety of backgrounds, perspectives, and skills. To be considered for employment, you will need to successfully pass a criminal background check and validate your work experience.
Next Steps
Adding to our team is an important step in our business. We’ve taken time to be purposeful and thoughtful with this job posting, and we encourage you to do the same with your application. Help us understand how your experience aligns with this role.
About CanCap Group Inc.
We manage the entire lifecycle of the finance receivable from credit adjudication through to contract administration, customer service, default management and post charge-off recoveries. We are a company of innovators: we learn from each other, respect each other, and create together. We strive to inspire our customers by continually understanding them, meeting their needs, and keeping them happily surprised. And we always do so with integrity.