About the role
Who you are
- 5+ years of experience in a Data Engineering position
- Strong experience with Python and SQL
- Hands-on experience with Apache Airflow
- Experience working with Databricks
- Expertise using dbt for transformations and analytics modeling
- Experience building streaming data pipelines with Kafka
- Experience with data ingestion tools such as Fivetran
- Working knowledge of Apache Iceberg and modern Lakehouse architectures
- Experience implementing data quality checks, testing frameworks, and pipeline observability
- Familiarity with AWS services including Athena, EC2, and cloud-based data platforms
- Strong understanding of data modeling, analytics, and semantic layer design
- Experience enabling AI or GenAI use cases on top of analytics platforms (e.g., Databricks Genie)
- Experience delivering self-service BI solutions (e.g., ThoughtSpot)
- Knowledge of data governance, metadata management, and data catalogs
- Experience supporting SaaS or multi-product platforms
- Familiarity with privacy, compliance, and secure data access patterns
What the job involves
- We are seeking a Data Engineer II to help design and scale our modern data platform, supporting analytics, self-service BI, real-time use cases, and AI-powered insights
- You’ll work closely with analytics engineers, product teams, and business stakeholders to deliver reliable, high-quality data that drives measurable business outcomes
- This role is hands-on and impact-focused, ideal for someone who enjoys building Lakehouse-based platforms, enabling streaming data, and supporting AI and GenAI use cases in production
- Design, build, and operate scalable batch and streaming data pipelines
- Develop and orchestrate workflows using Apache Airflow
- Implement transformations and analytics-ready datasets using dbt
- Build and maintain real-time pipelines using Kafka
- Leverage Databricks for data processing, analytics, and AI enablement
- Support AI and GenAI use cases, including enabling high-quality data access for tools like Databricks Genie
- Design and optimize data storage using Apache Iceberg and Lakehouse architecture
- Ingest and manage data from diverse internal and external sources using Fivetran
- Handle a wide variety of data structures (structured, semi-structured, and event-based data)
- Build and maintain a semantic layer that enables trusted reporting and self-service analytics
- Implement data quality frameworks, monitoring, and unit test automation to ensure reliability at scale
- Partner with BI, product, and engineering teams to deliver data that is intuitive, trusted, and actionable
- Optimize performance, scalability, and cost across AWS services such as Athena, EC2, and related tooling
- Contribute to data platform standards, documentation, and best practices
Benefits
- Flexibility to work where/how you want – in-office, remote, or hybrid
- Robust health and wellness benefits, including an annual wellness stipend
- Continued investment in your professional development through Udemy
- 401(k) with company match
- Flexible and generous paid time off
- Employee Stock Purchase Program
About EverCommerce
EverCommerce is a leading service commerce platform, providing vertically-tailored, integrated SaaS solutions that help more than 690,000 global service-based businesses accelerate growth, streamline operations, and increase retention. Its modern digital and mobile applications create predictable, informed, and convenient experiences between customers and their service professionals. Specializing in Home & Field Services, Health Services, and Fitness & Wellness industries, EverCommerce solutions include end-to-end business management software, integrated payment acceptance, marketing technology, and customer engagement applications.