About the role
eDNA Explorer is expanding through a partnership with Dr. Caren Helbing’s laboratory at the University of Victoria to create eDNA Explorer Canada! We are building a cutting-edge platform for processing and analyzing environmental DNA (eDNA) data. Our system processes biological samples to identify species based on their genetic material, integrates environmental data, and provides insights into biodiversity and ecological patterns. We're using modern cloud-native data engineering principles to build robust, scalable pipelines for scientific data analysis.
We're seeking a Full-stack Engineer to enhance and maintain our comprehensive eDNA Explorer Canada platform, which includes both cutting-edge web applications and scientific data processing systems. This role involves building sophisticated data visualization components, implementing complex user workflows, developing type-safe APIs, and maintaining Python-based data processing pipelines and report generation services.
The ideal candidate will have strong React/TypeScript experience with a passion for creating intuitive interfaces for complex scientific data, combined with solid Python backend development skills for data-intensive applications.
Our platform consists of:
· Front-end Web Applications: Modern React-based interfaces for scientific data analysis and research collaboration
· Python Data Processing Services: Flask-based APIs and report generation systems handling large-scale scientific datasets
· Data Pipeline Infrastructure: Dagster-based workflows for processing genomic and environmental data
Requirements
Core Experience (Required)
· 4+ years of full-stack web development experience
· Strong experience with React 18+ and TypeScript
· Solid understanding of Next.js (App Router and Pages Router)
· Experience with Python web development using Flask or FastAPI
· Knowledge of modern database technologies (PostgreSQL, SQLAlchemy)
· Experience with tRPC for type-safe APIs
· Familiarity with modern testing frameworks (Vitest, Playwright, React Testing Library, pytest)
Preferred Experience
· Component-driven development and design systems
· Understanding of monorepo architecture, including Turborepo (TypeScript) and Poetry (Python)
· Knowledge of cloud services and deployment pipelines (Google Cloud Platform preferred)
· Experience with data visualization libraries and scientific applications
· Background in Redis/RQ for job queuing systems
· Experience with scientific data processing or bioinformatics applications
· Knowledge of containerization (Docker) and orchestration (Kubernetes)
· Experience with AI-powered development tools like Claude Code, GitHub Copilot, or similar agentic coding assistants
· Familiarity with AI frameworks such as Google AI SDK or PydanticAI (a plus)
Technology Stack
Front-end Technologies
· React & Next.js: React 19 with functional components and hooks, Next.js 15 with both App Router and Pages Router patterns
· TypeScript: Comprehensive type safety across the entire application
· React 19 compatibility: with React Compiler integration
· UI & Styling: Custom component library (@cal-edna/ui) with Storybook documentation, Tailwind CSS for utility-first styling
· State Management: Zustand for client state, tRPC for server state management
· Data Fetching: tRPC for type-safe API calls with automatic TypeScript generation
· Forms: React Hook Form with Zod validation for type-safe form handling
· Testing: Vitest for unit testing, Playwright for E2E testing, React Testing Library for component testing
Back-end Technologies
· Python Web Frameworks: Flask 3.0+ for API services, with potential FastAPI integration
· Database: PostgreSQL with SQLAlchemy 2.0+ ORM for robust data modeling
· Job Processing: Redis with RQ (Redis Queue) for background job processing
· Authentication: JWT-based authentication
· Cloud Services: Google Cloud Platform (BigQuery, Cloud Storage, Secret Manager)
· Data Visualization: Plotly for interactive scientific visualizations
· Containerization: Docker with Kubernetes deployment
· Data Processing: Polars for scientific data manipulation
· Scientific Computing: scipy, scikit-bio, scikit-learn for data analysis
Development & Infrastructure
· Monorepo Architecture: Turborepo for efficient builds and dependency management
· Package Management: Yarn for frontend, Poetry for Python
· Version Control: Git with conventional commits
· CI/CD: GitHub Actions with automated testing and deployment
· Code Quality: ESLint, Prettier, Ruff (Python), pre-commit hooks
· Documentation: Storybook for component documentation, comprehensive API documentation
Data Processing Pipeline
· Workflow Orchestration: Dagster for data pipeline management
· Data Storage: Google Cloud Storage, BigQuery for large-scale data analytics
· Data Formats: Support for scientific data formats (FASTA, TSV, compressed formats)
· Performance Optimization: Polars for high-performance data processing
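For the FASTA format mentioned above, a minimal parsing sketch could look like the following; our actual pipelines would typically use a dedicated library such as scikit-bio, and the sample records here are made up.

```python
from typing import Iterator

def parse_fasta(text: str) -> Iterator[tuple[str, str]]:
    """Yield (record_id, sequence) pairs from FASTA-formatted text.

    Header lines start with '>'; a sequence may span multiple lines.
    """
    record_id, chunks = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            if record_id is not None:
                yield record_id, "".join(chunks)
            # the record id is the first whitespace-delimited token after '>'
            record_id, chunks = line[1:].split()[0], []
        else:
            chunks.append(line)
    if record_id is not None:
        yield record_id, "".join(chunks)

sample = """>seq1 example locus
ACGTAC
GTACGT
>seq2
TTGGCC
"""
records = dict(parse_fasta(sample))  # {"seq1": "ACGTACGTACGT", "seq2": "TTGGCC"}
```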
Key Responsibilities
Front-end Development
· Build and maintain React applications for scientific data visualization and analysis
· Develop reusable UI components following design system principles
· Implement complex data visualization dashboards using modern charting libraries
· Create intuitive user workflows for researchers and scientists
· Ensure type safety across the entire frontend application stack
· Optimize application performance for large scientific datasets
Back-end Development
· Design and implement Flask APIs for data processing and report generation
· Manage database operations using SQLAlchemy for complex scientific data models
· Develop background job processing systems using Redis and RQ
· Build report generation services that process large-scale genomic and environmental data
· Integrate with Google Cloud services for scalable data processing
· Implement robust authentication and authorization systems
System Integration
· Connect frontend applications with Python backend services via tRPC
· Maintain data consistency across web applications and processing pipelines
· Optimize system performance for handling large scientific datasets
· Implement monitoring and logging for both web and data processing components
· Ensure security best practices across the entire platform
Data & Analytics
· Work with scientific datasets including genomic sequences, environmental data, and biodiversity information
· Implement data validation and quality assurance processes
· Build interactive dashboards for scientific data exploration
· Create data export and download functionality for researchers
What You'll Build
Web Applications
· Interactive data visualization dashboards for biodiversity analysis
· Real-time data processing interfaces with progress tracking
· Complex form systems for scientific metadata collection
· Responsive data tables with advanced filtering and sorting
· Map-based visualizations for geographic species distribution
Backend Services
· Report generation APIs that process terabytes of scientific data
· Background job systems for long-running data processing tasks
· Data validation services for scientific metadata
· Authentication and user management systems
· File processing and storage services for scientific datasets
Integration Features
· Real-time updates between web interfaces and data processing jobs
· Type-safe API contracts between frontend and backend systems
· Scalable file upload and processing workflows
· Advanced search and filtering across scientific datasets
Technical Challenges
· Performance optimization for applications handling large scientific datasets
· Complex state management across multiple interconnected applications
· Real-time updates for long-running scientific computations
· Type safety across full-stack applications with complex data models
· Scientific data visualization with interactive and responsive charts
· Scalable architecture supporting growing research community
Team & Culture
You'll join a collaborative international team of scientists, engineers, and researchers working on meaningful environmental and biodiversity research. Our development culture emphasizes:
· AI-native development leveraging modern coding assistants and tools for enhanced productivity
· Code quality and testing with comprehensive test coverage
· Type safety and robust error handling across all systems
· Performance and scalability for scientific computing workloads
· Documentation and knowledge sharing for complex scientific processes
· Collaborative problem-solving with domain experts and researchers
· Continuous learning and adoption of cutting-edge development tools and practices
Growth Opportunities
· Scientific domain expertise in environmental biology and genomics
· Advanced data engineering and pipeline optimization
· Cloud architecture and distributed systems design
· Open-source contributions to scientific computing tools
· Research collaboration with academic institutions and environmental organizations
Benefits
This is a grant-funded position, with the possibility of being hired as a permanent employee at the end of the grant.
eDNA Explorer Canada is committed to building a diverse team. We encourage applications from candidates of all backgrounds.
This position is remote within Canada, with preference for candidates who can occasionally visit our offices at the University of Victoria on Vancouver Island in beautiful British Columbia. Applicants must be Canadian citizens or hold a valid permit to work in Canada.
The Helbing lab is situated in the Department of Biochemistry & Microbiology at the University of Victoria. The eDNA Explorer platform can be viewed here: https://www.ednaexplorer.org.
We're looking for engineers who are excited about building tools that enable groundbreaking environmental research with the potential to truly change the world. If you're passionate about creating robust, scalable applications that help scientists understand and protect biodiversity, we'd love to hear from you.
This role offers the unique opportunity to work at the intersection of modern web development and cutting-edge environmental science, building tools that have real impact on our understanding of the natural world.