Databricks Developer
About the role
Job Title: Databricks Developer
Full-time opportunity | On-site in Vancouver
Must hold a valid Canadian Work Visa
Our client, a global leader in asset management and infrastructure services within the shipping sector, is seeking a skilled Databricks Developer to join their Data Development team. This role is responsible for implementing robust data pipelines, performing advanced data transformations, and enabling scalable data products to support enterprise analytics and reporting needs.
The successful candidate will be a hands-on developer with expertise in Apache Spark on Databricks, Azure Data Factory (ADF), and Delta Lake, with strong skills in SQL and Python. This position offers the opportunity to work in a modern cloud-based environment with both real-time and batch data solutions.
Responsibilities:
Data Pipeline Development
- Design, build, and maintain scalable data pipelines and workflows using Databricks (SQL, PySpark, Delta Lake).
- Develop efficient ETL/ELT processes for structured and semi-structured data using ADF and Databricks notebooks/jobs.
- Integrate and transform large-scale datasets from multiple sources into analytics-ready outputs.
- Implement real-time and batch processing pipelines for streaming data sources such as MQTT, Kafka, and Event Hub.
Performance Optimization & Data Management
- Optimize Spark jobs and Delta Lake performance (partitioning, Z-ordering, broadcast joins, caching).
- Implement ingestion pipelines for RESTful APIs, transforming JSON responses into Spark tables.
- Perform data validation, profiling, and quality checks.
- Apply best practices in data modeling, data warehousing, and lakehouse architecture.
Collaboration & Governance
- Work closely with Data Scientists, Analysts, Business Analysts, and Architects to deliver high-quality, trusted datasets.
- Contribute to metadata documentation and data governance initiatives.
- Use Git and CI/CD pipelines to manage code deployments and workflow automation.
- Support operational continuity by documenting solutions and adhering to data quality standards.
Qualifications:
Required
- Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience).
- 5+ years of experience in data engineering or big data development.
- Strong hands-on experience with Databricks and Apache Spark (PySpark/SQL).
- Proven experience with Azure Data Factory, Azure Data Lake, and related Azure services.
- Proficiency in SQL and Python for transformation, validation, and automation.
- Experience with API integration (e.g., Python libraries such as requests or http.client).
- Deep understanding of Delta Lake architecture, including tuning and advanced features.
- Familiarity with Git, DevOps pipelines, and Agile delivery practices.
- Knowledge of real-time and batch processing design patterns.
Preferred
- Experience with dbt, Azure Synapse, or Microsoft Fabric.
- Familiarity with Databricks Unity Catalog.
- Certifications such as Microsoft Certified: Azure Data Engineer Associate or Databricks Certified Data Engineer.
- Understanding of predictive modeling, anomaly detection, or machine learning (particularly with IoT datasets).
Compensation & Benefits:
- Salary range: $100,000 – $120,000 CAD per annum, commensurate with experience and skills.
- Competitive total compensation program aligned with a pay-for-performance philosophy.
- Opportunity to work in a high-performance, global environment with exposure to large-scale data initiatives.
Additional Information:
- As this is a global organization, occasional work outside of regular office hours may be required.
NOTE: Interested candidates who meet the above qualifications are encouraged to apply directly. Due to the volume of applications, only those shortlisted will be contacted.
About Benchmark Recruitment
Unleash the full potential of your workforce with Benchmark Recruitment - the boutique agency that's shaking up the industry! With over 100 years of combined experience, our leadership team is fueled by a passion for delivering outstanding results for both clients and candidates.
We've seen it all, worked with the biggest names in the game, and listened to what truly matters to you. That's why we created Benchmark Recruitment - to put the focus back on the customer experience and simplify the hiring process. From sourcing top talent to retaining them, we partner with you every step of the way.
From high-growth startups to established enterprises, we have the expertise and network to meet your staffing needs, be it permanent, contract, or contract-to-hire. We're not just talking the talk, we walk the walk too. Our team is equipped with the latest technology and empowered with a work-life balance that their families deserve.
Join the next generation of recruitment with Benchmark Recruitment - where exceptional service meets exceptional results.