Who you are
- Problem-solving skills, with the ability and confidence to tackle complex data and platform challenges
- Ability to prioritize and meet deadlines in a dynamic environment
- Attention to detail and solid written and verbal English communication skills
- Willingness and enthusiasm to work within existing processes and methodologies, while driving improvements where needed
- This is a senior technical role centered on the development of our SaaS products, suited to a highly focused, ownership-driven engineer
- You’ll have focused professional experience as a data engineer, preferably in cloud-based SaaS products. Ideally, you’ll have at least five years of experience, but we focus on skill and ability, not tenure
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field (or equivalent professional experience)
- Proven experience with Snowflake (native Snowflake application development is essential)
- Proficiency in Python for data engineering tasks and application development
- Experience deploying and managing containerized applications using Kubernetes (preferably on Azure Kubernetes Service)
- Understanding of event-driven architectures and hands-on experience with event buses (e.g., Kafka, RabbitMQ)
- Familiarity with data orchestration and choreography concepts, including scheduling/orchestration tools (e.g., Airflow, Prefect) and eventual-consistency/distributed-systems patterns that avoid centralized orchestration at the platform level
- Hands-on experience with cloud platforms (Azure preferred) for building and operating data pipelines
- Solid knowledge of SQL and database fundamentals
- Strong ability to work in a collaborative environment, including with cross-functional teams across DevOps, software engineering, and analytics
- Master’s degree in a relevant technical field
- Certifications in Azure, Snowflake, or Databricks (e.g., Microsoft Certified: Azure Data Engineer Associate, SnowPro, Databricks Certified Data Engineer)
- Experience implementing CI/CD pipelines for data-related projects
- Working knowledge of infrastructure-as-code tools (e.g., Terraform, ARM templates)
- Exposure to real-time data processing frameworks (e.g., Spark Streaming, Flink)
- Familiarity with data governance and security best practices (e.g., RBAC, data masking, encryption)
- Demonstrated leadership in data engineering best practices or architecture-level design
What the job involves
- You’ll work as a senior voice within the data platform team to build and evolve the core tools, infrastructure, and processes that empower other domain teams within our data mesh ecosystem to develop and maintain data products
- You'll ensure our data solutions (including Kubernetes-based deployments, Snowflake application development, and event-driven architectures) are reusable and standardized, and that they enable self-service for domain teams
- You will contribute to the technical design, implementation, testing, deployment, and ongoing support and maintenance of our data platform on Snowflake and Azure
- This role includes peer code reviews to maintain quality, reliability, and security
- You'll contribute to modern big data architecture design, encompassing both data orchestration and choreography
- You will regularly leverage Python, Kubernetes, and Snowflake for both data and application development
- Development is a part of the role, but you’ll also be expected to contribute to all areas of our engineering work, including product and feature design, leading and mentoring peers, and helping us to continually improve
- You'll plan, design, and evolve data platform solutions within a Data Mesh architecture, ensuring decentralized data ownership and scalable, domain-oriented data pipelines
- Apply Domain-Driven Design (DDD) principles to model data, services, and pipelines around business domains, promoting clear boundaries and alignment with domain-specific requirements
- Collaborate with stakeholders to translate business needs into robust, sustainable data architecture patterns
- Develop and maintain production-level applications primarily using Python (Pandas, PySpark, Snowpark), with the option to leverage other languages (e.g., C#) as needed
- Implement and optimize DevOps workflows, including Git/GitHub, CI/CD pipelines, and infrastructure-as-code (Terraform), to streamline development and delivery processes
- Containerize and deploy data and application workloads on Kubernetes, leveraging KEDA for event-driven autoscaling and ensuring reliability, efficiency, and high availability
- Handle enterprise-scale data pipelines and transformations, with a strong focus on Snowflake or comparable technologies such as Databricks or BigQuery
- Optimize data ingestion, storage, and processing performance to ensure high-throughput and fault-tolerant systems
- Manage and optimize SQL/NoSQL databases, Blob storage, Delta Lake, and other large-scale data store solutions
- Evaluate, recommend, and implement the most appropriate storage technologies based on performance, cost, and scalability requirements
- Build and orchestrate data pipelines across multiple technologies (e.g., dbt, Spark), employing tools like Airflow, Prefect, or Azure Data Factory for macro-level scheduling and dependency management
- Design and integrate event-driven architectures (e.g., Kafka, RabbitMQ) to enable real-time and asynchronous data processing across the enterprise
- Leverage Kubernetes & KEDA to orchestrate containerized jobs in response to events, ensuring scalable, automated operations for data processing tasks
- Participate fully in Scrum ceremonies, leveraging tools like JIRA and Confluence to track progress and collaborate with the team
- Provide input on sprint planning, refinement, and retrospectives to continuously improve team efficiency and product quality
- Deploy and monitor data solutions in Azure, leveraging its native services for data and analytics
- Foster a team-oriented environment by mentoring peers, offering constructive code reviews, and sharing knowledge across the organization
- Communicate proactively with technical and non-technical stakeholders, ensuring transparency around progress, risks, and opportunities
- Take ownership of deliverables, driving tasks to completion and proactively suggesting improvements to existing processes
- Analyze complex data challenges, propose innovative solutions, and drive them through implementation
- Maintain high-quality standards in coding, documentation, and testing to minimize defects and maintain reliability
- Exhibit resilience under pressure by troubleshooting critical issues and delivering results within tight deadlines
- This position may lead project-based teams or mentor junior data engineers, but typically does not include direct, ongoing management of staff
- Collaboration with stakeholders (Data Architects, DevOps engineers, Data Product Managers) to set technical direction and ensure high-quality deliverables
- Once hired, this person will have the job title Senior Data Engineer II
Benefits
- Hybrid working environment
- Work-life balance
- Free food and drink
- Generous vacation time
- Colleague bonus plan
- Cycle to work scheme
- Parental leave
- Wellness program
- Regular social events
- Equity plan
- Bring your pets to work
- Electric vehicle scheme
About Enable
Enable helps manufacturers, distributors, and retailers turn rebates into a strategic growth engine. Enable's easy-to-use, collaborative, scalable rebate management platform lets you take control of your rebates, showing the influence and impact strategic rebate programs have on your company's growth, returns, and opportunities.
Our goal is to make a rebate management platform that is fully:
• Comprehensive: Effectively manage every deal type while tracking, analyzing, and optimizing the entire rebate management process.
• Collaborative: Create, negotiate, and execute deals together, then track progress in real time in one trusted location to promote better alignment.
• Controlled: Share the data you want to share, both internally and externally, while configuring workflows, approval processes, and audit trails to maintain transparency and compliance.