About the role
Title: Senior Data Engineer
Salary: Based on experience + Benefits + Vacation
Location: Thornhill, Canada (Remote)
Length: Permanent, full-time
About Us:
At WellnessLiving, we empower thousands of health and wellness business owners to turn their entrepreneurial dreams into reality. Our mission-critical software fuels their vision, supporting millions of clients around the world in their wellness journeys. With a deep commitment to putting our customers first, we foster a culture that values high performance, adaptability, and accountability. If you are a skilled professional who thrives in a fast-paced, customer-focused environment and are passionate about making a meaningful impact on the health and wellness industry, we would love to connect with you.
Our team is driven by four core values that shape everything we do. If you share these values and meet the qualifications outlined for this role, we encourage you to apply – we’d love to learn more about you!
Customer First – We approach every challenge with a customer-focused lens, driven by an obsession with our customers’ happiness and success.
Excellence – We approach every task, whether big or small, with a steadfast commitment to exceptional execution and the pursuit of greatness.
Accountability – We take full ownership of our decisions, actions, and outcomes – both successes and failures.
Adaptability – We recognize that sustained success demands that we be malleable and purposefully evolve, acknowledging that the world is dynamic and constantly changing.
About You:
We’re seeking a seasoned Senior Data Engineer to help lead the modernization of our data infrastructure as we transition from a tightly coupled monolithic system to a scalable, microservices-based architecture. This role is central to decoupling legacy database structures, enabling domain-driven service ownership, and powering real-time analytics, operational intelligence, and AI initiatives across our platform. You will work closely with solution architects and domain owners to design resilient pipelines and data models that reflect business context and support scalable, secure, and auditable data access for internal and external consumers.
Key Responsibilities:
- Monolith-to-Microservices Data Transition: Lead the decomposition of monolithic database structures into domain-aligned schemas that enable service independence and ownership.
- Pipeline Development & Migration: Build and optimize ETL/ELT workflows using Python, PySpark/Spark, AWS Glue, and dbt, including schema/data mapping and transformation from on-prem and cloud legacy systems into data lake and warehouse environments.
- Domain Data Modeling: Define logical and physical domain-driven data models (star/snowflake schemas, data marts) to serve cross-functional needs across BI, operations, streaming, and ML.
- Legacy Systems Integration: Design strategies for extracting, validating, and restructuring data from legacy systems with embedded logic and incomplete normalization.
- Database Management: Administer, optimize, and scale SQL (MySQL, Aurora, Redshift) and NoSQL (MongoDB) platforms to meet high-availability and low-latency needs.
- Cloud & Serverless ETL: Leverage AWS Glue Catalog, Crawlers, Lambda, and S3 to manage and orchestrate modern, cost-efficient data pipelines.
- Data Governance & Compliance: Enforce best practices around cataloging, lineage, retention, access control, and security, ensuring compliance with GDPR, CCPA, PIPEDA, and internal standards.
- Monitoring & Optimization: Implement observability (CloudWatch, logs, metrics) and performance tuning across Spark, Glue, and Redshift workloads.
- Stakeholder Collaboration: Work with architects, analysts, product managers, and data scientists to define, validate, and prioritize requirements.
- Documentation & Mentorship: Maintain technical documentation (data dictionaries, migration guides, schema specs) and mentor junior engineers in engineering standards.
Required Qualifications:
- Experience: 5+ years in data engineering with a proven record in modernizing legacy data systems and driving large-scale migration initiatives.
- Cloud ETL Expertise: Proficient in AWS Glue, Apache Spark/PySpark, and modular transformation frameworks like dbt.
- Data Modeling: Strong grasp of domain-driven design, bounded contexts, and BI-friendly modeling approaches (star/snowflake/data vault).
- Data Migration: Experience with full lifecycle migrations including schema/data mapping, reconciliation, and exception handling.
- Databases: SQL (MySQL, Aurora, Redshift) and NoSQL (MongoDB, DocumentDB)
- Programming: Strong Python skills for data wrangling, pipeline automation, and API interactions.
- Data Architecture: Hands-on with data lakes, warehousing strategies, and hybrid cloud data ecosystems.
- Compliance & Security: Track record implementing governance, data cataloging, encryption, retention, lineage, and RBAC.
- DevOps Practices: Git, CI/CD pipelines, Docker, and test automation for data pipelines.
Preferred Qualifications:
- Experience with streaming data platforms like Kafka, Kinesis, or CDC tools such as Debezium
- Familiarity with orchestration platforms like Airflow or Prefect
- Background in analytics, data modeling for AI/ML pipelines, or ML-ready data preparation
- Understanding of cloud-native data services (AWS Glue, Redshift, Snowflake, BigQuery, etc.)
- Degree in Computer Science, Engineering, or equivalent field
- Strong written and verbal communication skills
- Self-starter with ability to navigate ambiguity and legacy system complexity
- Exposure to generative AI, LLM fine-tuning, or feature store design is a plus
Please note that only those selected for an interview will be contacted.
We appreciate you taking the time and look forward to reviewing your application!
WellnessLiving is proud to be an equal opportunity employer. We base employment decisions solely on qualifications, experience, and business needs. We do not tolerate discrimination or harassment of any kind. All qualified applicants will receive consideration without regard to race, color, religion, creed, gender, gender identity or expression, sexual orientation, national origin, disability, age, genetic information, veteran status, marital or family status, or any other status protected by applicable laws.
We utilize AI to generate summaries of interview notes as part of our candidate evaluation process. This helps ensure a fair and consistent review while maintaining a human-centered hiring approach.
About WellnessLiving
Our powerful business management software is trusted by thousands of spas, salons, and fitness and yoga studios across North America. Our cloud-based software features real-time appointment and class scheduling, point of sale, email and SMS marketing, customer review management, a rewards program, and a client mobile app experience that is second to none. Our mission is to provide the wellness industry with the most comprehensive tool set available to better manage their existing clients while helping them grow their business.