Cloud Data Specialist
About the role
Seaspan teams are goal-driven and share a high-performance culture, focusing on building services offerings to become a leading asset manager. Seaspan provides many of the world's major shipping lines with alternatives to vessel ownership by offering long-term leases on large, modern containerships and pure car and truck carriers (PCTCs), combined with industry-leading ship management services. Seaspan's fleet has evolved over time to meet the varying needs of our customer base, with vessels ranging in size from 2,500 TEU to 24,000 TEU. As a wholly owned subsidiary of Atlas Corp, Seaspan delivers on the company's core strategy as a leading asset management and core infrastructure company.
Position description: We are seeking a highly skilled and versatile Cloud Data Specialist to join our Data Operations team. Reporting to the Team Lead, Data Operations, the Cloud Data Specialist plays a key role in the development, administration, and support of our Azure-based data platform, with a particular focus on Databricks, data pipeline orchestration using tools like Azure Data Factory (ADF), and environment management using Unity Catalog. A strong foundation in data engineering, cloud data administration, and data governance is essential. Development experience using SQL and Python is required. Knowledge or experience with APIM is nice to have.
Primary responsibilities: Data Engineering and Platform Management:
- Design, develop, and optimize scalable data pipelines using Azure Databricks and ADF.
- Administer Databricks environments, including user access, clusters, and Unity Catalog for data lineage, governance, and security.
- Support the deployment, scheduling, and monitoring of data workflows and jobs in Databricks and ADF.
- Implement best practices for CI/CD, version control, and operational monitoring for pipeline deployments.
- Implement and manage Delta Lake to ensure reliable, performant, and ACID-compliant data operations.
Data Modeling and Integration:
- Collaborate with business and data engineering teams to design data models that support analytics and reporting use cases.
- Support integration of data from multiple sources into the enterprise data lake and data warehouse.
- Configure API calls to utilize our Azure APIM platform.
- Maintain and enhance data quality, structure, and performance within the Lakehouse and warehouse architecture.
Collaboration and Stakeholder Engagement:
- Work cross-functionally with business units, data scientists, BI analysts, and other stakeholders to understand data requirements.
- Translate technical solutions into business-friendly language and deliver clear documentation and training when required.
Required Technical Expertise: Apache Spark (on Databricks)
- Proficient in PySpark and Spark SQL
- Spark optimization techniques (caching, partitioning, broadcast joins)
- Writing and scheduling notebooks/jobs in Databricks
- Understanding of Delta Lake architecture and features
- Working with Databricks Workflows (pipelines and job orchestration)
SQL/Python Programming
- Handling JSON, XML, and other semi-structured formats
- Experience with API integration using Python libraries such as requests or http.client
- Error handling and logging
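To illustrate the error handling and logging expected here, a minimal sketch of a retrying JSON fetch, using only the standard library (the endpoint, retry count, and helper name are hypothetical, not Seaspan specifics):

```python
import json
import logging
import urllib.request
from urllib.error import URLError

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

def fetch_json(url: str, retries: int = 3, opener=urllib.request.urlopen):
    """Fetch a JSON payload, retrying transient failures and logging each attempt."""
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            with opener(url, timeout=10) as resp:
                return json.loads(resp.read().decode("utf-8"))
        except (URLError, TimeoutError) as err:
            last_err = err
            # Log-and-retry rather than fail fast on transient network errors.
            log.warning("attempt %d/%d failed for %s: %s", attempt, retries, url, err)
    raise RuntimeError(f"giving up on {url}") from last_err
```

The injectable `opener` keeps the function unit-testable without live network calls, which is the same pattern that makes pipeline code verifiable in CI.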
API Ingestion
- Designing and implementing ingestion pipelines for RESTful APIs
- Transforming and loading JSON responses to Spark tables
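A sketch of the transform step above: flattening a nested JSON API response into flat row dicts that `spark.createDataFrame` could consume (the field names `items`, `id`, and `attributes` are illustrative assumptions about the payload shape):

```python
def flatten(response: dict) -> list[dict]:
    """Flatten each item's nested 'attributes' into top-level columns."""
    rows = []
    for item in response.get("items", []):
        row = {"id": item["id"]}
        row.update(item.get("attributes", {}))  # promote nested fields to columns
        rows.append(row)
    return rows

sample = {"items": [{"id": 1, "attributes": {"vessel": "A", "teu": 2500}},
                    {"id": 2, "attributes": {"vessel": "B", "teu": 24000}}]}
print(flatten(sample))
# → [{'id': 1, 'vessel': 'A', 'teu': 2500}, {'id': 2, 'vessel': 'B', 'teu': 24000}]
```

In a Databricks notebook the resulting list would typically be passed to `spark.createDataFrame(...)` and written out as a Delta table.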
Cloud & Data Platform Skills
- Databricks on Azure
- Cluster configuration and management
- Unity Catalog features (nice to have)
Azure Data Factory
- Creating and managing pipelines for orchestration
- Linked services and datasets for ADLS, Databricks, SQL Server
- Parameterized and dynamic ADF pipelines
- Triggering Databricks notebooks from ADF
Data Engineering Foundations
- Data modeling and warehousing concepts
- ETL/ELT design patterns
- Data validation and quality checks
- Working with structured and semi-structured data (JSON, Parquet, Avro)
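The data validation and quality checks mentioned above often reduce to splitting incoming rows into accepted and rejected sets before loading. A hypothetical stdlib-only sketch (the rule set shown checks only required non-null fields):

```python
def validate(rows: list[dict], required: list[str]) -> tuple[list[dict], list[dict]]:
    """Split rows into (valid, rejected) based on required non-null fields."""
    valid, rejected = [], []
    for row in rows:
        if all(row.get(col) is not None for col in required):
            valid.append(row)
        else:
            rejected.append(row)  # quarantined for review rather than silently dropped
    return valid, rejected
```

Rejected rows would normally be written to a quarantine table with a reason code so data-quality trends can be monitored over time.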
DevOps & CI/CD
- Git/GitHub for version control
- CI/CD using Azure DevOps or GitHub Actions for Databricks jobs
- Infrastructure-as-code (Terraform for Databricks or ADF)
Additional Requirements:
- Bachelor's degree in computer science, information systems, or a related field.
- 4+ years of experience in a cloud data engineering, data platform, or analytics engineering role.
- Familiarity with data governance, security principles, and data quality best practices.
- Excellent analytical thinking and problem-solving skills.
- Strong communication skills and ability to work collaboratively with technical and non-technical stakeholders.
- Microsoft certifications in Azure Data Engineer, Power Platform, or a related field are desired.
- Experience with Azure APIM is nice to have.
- Knowledge of enterprise data architecture and data warehouse principles (e.g., dimensional modeling) is an asset.
Job Demands and/or Physical Requirements:
- As Seaspan is a global company, occasional work outside of regular office hours may be required.
Compensation and Benefits package: Seaspan’s total compensation is based on our pay-for-performance philosophy that rewards team members who deliver on and demonstrate our high-performance culture. The hiring range for this position is $87,000 - $104,000 CAD per annum. The exact base salary offered will be commensurate with the incumbent’s experience, job-related skills and knowledge, and internal pay equity.
Seaspan Corporation is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, race, color, religion, gender, sexual orientation, gender identity, national origin, disability, or protected Veteran status. We thank all applicants in advance. If your application is shortlisted to be included in the interview process, one of our team will be in contact with you.
Please note that while this position is open in both Vancouver and Mumbai, it represents a single headcount. The role will be filled in one of the two locations based on candidate availability and suitability, determined by the hiring team.
About Seaspan Corporation
Seaspan Corporation, a wholly-owned subsidiary of Atlas Corp., is the world’s largest independent containership lessor, providing safe, reliable, and economical operations. With an owned and managed fleet of over 140 vessels, we strive to be the global containership provider of choice, offering our stakeholders the best platform for success.
The foundation of our company’s success lies with our people. Our multinational team of container shipping professionals is more than 5,600 members strong, with offices in Canada, India, and Hong Kong, site teams in Korea, China, and Taiwan, and seafarers worldwide.
For more information on our company, please visit our website at http://www.seaspancorp.com/