Senior AI Engineer

TextLayer
15 days ago
Senior Level
Contract

About the role

About TextLayer

TextLayer helps enterprises and funded startups build, deploy, and scale advanced AI systems, without rewriting their infrastructure.

We provide engineering teams with a modular, stable foundation, so they can adopt AI without betting on the wrong tech. Our flagship stack, TextLayer Core, is maintainable, tailored to the environment, and deployed with Terraform and standardized APIs.

We’re a team on a mission to close the implementation gap that over 85% of enterprise clients run into when adding AI to their operations and products. We’re looking for sharp, curious people who want to meaningfully shape how we build, operate, and deliver.

If you're excited to work on foundational AI infrastructure, ship production-grade systems quickly, and help define what agentic software looks like in practice, we’d love to meet you.

The Role

The AI Engineer plays a critical role on our team, working across the frontend, the backend architecture, and the orchestration layer for our AI systems. You'll build production-grade RAG (Retrieval-Augmented Generation) pipelines, develop sophisticated AI Agent workflows, and create robust LLM integrations that power our customer-facing applications.

Key Responsibilities

  • Architect and maintain Python-based services using Flask and modern AI frameworks for RAG and Agent implementations
  • Build and scale secure, well-structured API endpoints that interface with LLMs (OpenAI, Hugging Face models), vector databases, and agentic tools
  • Implement advanced Agent orchestration logic, prompt engineering strategies, and tool chaining for complex AI workflows
  • Design and optimize RAG pipelines, including data loaders, chunking strategies, vector embeddings, and search integration with Elasticsearch/OpenSearch (a minimal retrieve-then-generate sketch follows this list)
  • Develop and maintain ML pipelines for processing, indexing, and retrieving data in vector stores
  • Build seamless frontend experiences using Next.js, Vercel AI SDK, and modern React patterns for streaming LLM responses
  • Containerize AI services using Docker and implement scalable deployment strategies with AWS ECS/Lambda
  • Collaborate with AI research teams to productionize PyTorch models and Hugging Face transformers
  • Optimize prompt engineering techniques for improved LLM performance and reliability, using Langfuse for observability
  • Set up robust test coverage, monitoring, and CI/CD pipelines for AI-powered services using GitHub Actions
  • Stay current with emerging trends in AI engineering, Agent architectures, and RAG systems
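
To make the RAG responsibilities above concrete, here is a minimal retrieve-then-generate sketch of the kind of Flask endpoint involved: keyword retrieval from Elasticsearch feeding an OpenAI chat completion. The route, index name, and model are illustrative assumptions, not TextLayer Core's actual API.

```python
# Minimal retrieve-then-generate sketch (illustrative only; the route,
# "docs" index, and model name are assumptions, not TextLayer Core's API).
from flask import Flask, request, jsonify
from elasticsearch import Elasticsearch
from openai import OpenAI

app = Flask(__name__)
es = Elasticsearch("http://localhost:9200")
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.post("/ask")
def ask():
    question = request.get_json()["question"]

    # Retrieve: simple keyword match against a hypothetical "docs" index;
    # production pipelines typically combine this with vector search.
    hits = es.search(index="docs", query={"match": {"text": question}}, size=3)
    context = "\n\n".join(h["_source"]["text"] for h in hits["hits"]["hits"])

    # Generate: ground the model's answer in the retrieved context.
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return jsonify({"answer": reply.choices[0].message.content})
```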

What You Will Bring

To succeed in this role, you'll need deep full-stack development expertise, hands-on experience with LLM and RAG implementations, and a strong understanding of modern AI Agent patterns. You should be passionate about prompt engineering and building scalable AI pipelines.

Required Qualifications

  • 3+ years of experience as a full-stack engineer with strong Python expertise
  • Hands-on experience building RAG systems and AI Agent architectures in production
  • Proficiency with LLM orchestration frameworks and AI development tools
  • Experience with vector databases, embeddings, and vector search implementations
  • Strong knowledge of prompt engineering principles and LLM optimization techniques
  • Experience integrating OpenAI APIs, Hugging Face models, or similar LLM providers via LiteLLM (see the sketch after this list)
  • Proficiency with Docker for containerizing AI applications
  • Experience building ML/data pipelines for AI systems
  • Comfortable with modern AI tooling and search technologies like Elasticsearch/OpenSearch
  • Track record of building end-to-end Agent systems with RAG capabilities
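
As a hedged illustration of the LiteLLM-style integration above: LiteLLM exposes a single completion() call that is routed by model string, so the same code path can hit OpenAI or Hugging Face models. The model names below are examples only, not a statement of which providers we use.

```python
# Provider-agnostic LLM calls via LiteLLM. Model strings are examples only,
# and provider credentials (OPENAI_API_KEY, HUGGINGFACE_API_KEY) are assumed
# to be set in the environment.
from litellm import completion

messages = [{"role": "user", "content": "Summarize retrieval-augmented generation in one sentence."}]

# The call shape stays the same across providers; only the model string changes.
openai_reply = completion(model="gpt-4o-mini", messages=messages)
hf_reply = completion(model="huggingface/HuggingFaceH4/zephyr-7b-beta", messages=messages)

print(openai_reply.choices[0].message.content)
print(hf_reply.choices[0].message.content)
```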

Bonus Points

  • Experience with PyTorch for model deployment and optimization
  • Contributions to open-source AI/LLM projects
  • Experience with advanced prompt engineering and LLM fine-tuning
  • Familiarity with multiple vector database solutions
  • Background in implementing AI applications at scale
  • Experience with Hugging Face ecosystem and model deployment
  • Frontend development experience with Next.js, React, and Vercel AI SDK for streaming interfaces
  • Published research or blog posts on RAG, Agents, or LLM applications
  • AWS experience with ECS/Lambda for AI workload deployment
  • Experience with Langfuse for LLM observability and tracing
  • Document processing pipeline experience: ingestion from diverse sources (PDFs, documents, web content), text extraction, and chunking strategies (a minimal chunking sketch follows this list)
  • Infrastructure experience with Terraform, GitHub Actions, and production monitoring
  • Data engineering background: experience with orchestration tools for ML/AI workloads
  • Experience with async workflows and scalable data processing patterns
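
For the document-processing and chunking experience mentioned above, here is a minimal sketch of fixed-size chunking with overlap. The sizes are arbitrary illustrations; production pipelines often split on structural boundaries (headings, paragraphs) instead.

```python
# Naive fixed-size chunking with overlap. Sizes are illustrative only; the
# resulting chunks would then be embedded and indexed into a vector store.
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():
            chunks.append(chunk)
    return chunks

# Example usage on a dummy document.
sample = "TextLayer Core " * 500
print(len(chunk_text(sample)), "chunks")
```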

About TextLayer

1-10 employees

We work closely with platform teams and technical leaders to integrate LLMs, retrieval-augmented generation (RAG) pipelines, and agentic workflows directly into production environments.

From internal copilots to customer-facing features, TextLayer delivers fast, reliable implementation without compromising long-term maintainability.

We’re a small, fast-moving team on a mission to power enterprise clients with serious AI infrastructure. Modular. Scalable. Battle-tested.