Senior Data Engineer

Textlayer · 1 day ago
Remote
CA$200,000 - CA$220,000/year
Senior Level
Contract

About the role

Ready to solve the AI implementation gap that 85% of enterprises face? Join us!

About Textlayer

TextLayer helps enterprises and funded startups deploy advanced AI systems without rewriting their infrastructure. We work with organizations across fintech, healthtech, and other sectors to bridge the gap between AI potential and practical implementation.

Our approach combines deep technical expertise with proven frameworks like TextLayer Core to accelerate development and ensure production-ready results. From bespoke AI workflows to agentic systems, we help clients adopt AI that actually works in their existing tech stacks.

We're on a mission to help address the implementation gap that over 85% of enterprise clients experience in adding AI to their operations and products. We're looking for sharp, curious people who want to meaningfully shape how we build, operate, and deliver.

If you're excited to work on foundational AI infrastructure, solve complex problems for diverse clients, and help define what agentic software looks like in practice, we'd love to meet you.

The Role

The Senior Data Engineer plays a critical role in our team, working on the backend architecture and orchestration layer for our data systems. You'll build production-grade data pipelines, develop sophisticated data processing workflows, and create robust integrations that power our customer-facing applications with reliable, scalable data infrastructure.

Key Responsibilities

  • Architect and maintain Python-based services using Flask and modern data frameworks for pipeline and workflow implementations
  • Build and scale secure, well-structured API endpoints that interface with data stores, processing engines, and downstream applications (a minimal sketch of one such endpoint follows this list)
  • Implement advanced data orchestration logic, ETL/ELT strategies, and tool chaining for complex data workflows
  • Design and optimize data pipelines, including data loaders, transformation strategies, and integration with search systems like OpenSearch
  • Develop and maintain ML data processing pipelines for ingesting, transforming, and serving data across various storage systems
  • Containerize data services using Docker and implement scalable deployment strategies with Kubernetes
  • Collaborate with engineering teams to productionize data models and processing workflows
  • Optimize data processing techniques for improved performance, reliability, and cost efficiency
  • Set up robust test coverage, monitoring, and CI/CD pipelines for data-powered backend services
  • Stay current with emerging trends in data engineering, pipeline architectures, agent architectures, and data systems
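
For a concrete flavor of the stack these responsibilities describe, here is a minimal sketch of a Flask endpoint backed by OpenSearch. It is illustrative only; the host, index name, and query shape are assumptions, not Textlayer's actual API.

    # Minimal illustrative sketch: a Flask endpoint backed by OpenSearch.
    # The host, index name, and query shape are assumptions, not Textlayer's API.
    from flask import Flask, jsonify, request
    from opensearchpy import OpenSearch

    app = Flask(__name__)
    client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

    @app.get("/search")
    def search():
        # Full-text match against a hypothetical "documents" index.
        query = request.args.get("q", "")
        result = client.search(
            index="documents",
            body={"query": {"match": {"content": query}}, "size": 10},
        )
        return jsonify([hit["_source"] for hit in result["hits"]["hits"]])

    if __name__ == "__main__":
        app.run(port=8080)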

What You Will Bring

To succeed in this role, you'll need deep full-stack development expertise, hands-on experience building data pipelines in production, and a strong understanding of modern data processing patterns. You should be passionate about building scalable data infrastructure and optimizing data workflows.

Required Qualifications

  • 3+ years of experience as a full-stack engineer with strong Python expertise
  • Hands-on experience building data pipelines and processing architectures in production
  • Proficiency with data orchestration frameworks and ETL/ELT tools
  • Experience with databases, data modeling, and search implementations
  • Strong knowledge of data processing optimization and performance tuning
  • Experience with cloud platforms (AWS/GCP/Azure) for data workload deployment
  • Proficiency with Docker and Kubernetes for containerizing and orchestrating applications
  • Comfortable with modern data tooling and monitoring systems
  • Track record of building end-to-end data systems at scale

Bonus Points

  • Experience with the Golang (Go) and Java programming languages
  • Experience with OpenTelemetry (OTel) for observability and monitoring (see the sketch after this list)
  • Contributions to open-source data engineering projects
  • Experience with advanced data processing and pipeline optimization
  • Familiarity with multiple database and storage solutions
  • Background in implementing data processing frameworks at scale
  • Published research or blog posts on data engineering, pipelines, or data systems
  • Experience with data observability and monitoring tools
  • Experience with stream processing and real-time data systems
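
Since OpenTelemetry comes up above, here is a minimal sketch of tracing one pipeline step with the OTel Python SDK. The span and attribute names are hypothetical, chosen only to show the pattern.

    # Minimal illustrative sketch: tracing a pipeline step with OpenTelemetry.
    # Span and attribute names here are hypothetical.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)
    tracer = trace.get_tracer("pipeline")

    def transform(records):
        # Record input/output counts so throughput shows up in traces.
        with tracer.start_as_current_span("transform") as span:
            span.set_attribute("records.in", len(records))
            cleaned = [r.strip().lower() for r in records if r.strip()]
            span.set_attribute("records.out", len(cleaned))
            return cleaned

    transform(["  Foo ", "", "Bar"])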

Employment Type: Contract-to-hire (starts as contract, converts to permanent)

Location: Remote - Canada

Compensation: $200,000 - $220,000 CAD base salary

Start Date: Flexible; immediate start preferred

How to Apply: Apply through LinkedIn or directly via our portal: https://jobs.ashbyhq.com/textlayer/e5dc51ed-3987-45e0-ae4f-3a27edb88838

About Textlayer

Company size: 1-10 employees

TextLayer helps enterprises and ambitious teams build, deploy, and scale advanced AI systems—without rewriting their infrastructure.

We provide engineering teams with a modular, stable foundation so they can adopt AI without betting on the wrong tech. Our flagship stack, TextLayer Core, is maintainable, tailored to the environment, and deployed with Terraform and standardized APIs.

We work closely with platform teams and technical leaders to integrate LLMs, retrieval-augmented generation (RAG) pipelines, and agentic workflows directly into production environments.

From internal copilots to customer-facing features, TextLayer delivers fast, reliable implementation without compromising long-term maintainability.

We’re a small, fast-moving team on a mission to power enterprise clients with serious AI infrastructure. Modular. Scalable. Battle-tested.