About the role
Key Responsibilities:
Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data.
Implement scalable data architectures using cloud platforms (AWS, GCP, Azure).
Develop and optimize machine learning pipelines for training, validation, and deployment.
Work closely with data scientists to productionize ML models using MLflow, TensorFlow, PyTorch, or Scikit-learn.
Implement MLOps best practices, including CI/CD pipelines for model deployment and monitoring.
Optimize data storage and retrieval using data lakes, warehouses (Snowflake, Redshift, BigQuery), and NoSQL databases.
Develop real-time data streaming solutions using Kafka, Kinesis, or Apache Flink.
Ensure data quality, governance, and compliance with industry standards.
Automate workflows using Airflow, Prefect, or Dagster.
Monitor model performance and ensure retraining pipelines are in place.
Required Skills & Qualifications:
3-7 years of experience in Data Engineering, ML Engineering, or a related field.
Strong programming skills in Python and SQL (experience with Scala or Java is a plus).
Hands-on experience with big data processing frameworks (Spark, Hadoop, Dask).
Expertise in machine learning frameworks (TensorFlow, PyTorch, Scikit-learn).
Experience with containerization (Docker, Kubernetes) for model deployment.
Familiarity with feature engineering, feature stores (e.g., Feast), and vector databases (e.g., Pinecone).
Strong understanding of data pipelines, including both batch and streaming data processing.
Knowledge of MLOps tools (Kubeflow, MLflow, SageMaker, Vertex AI).
Proficiency in cloud services such as AWS (S3, Lambda, SageMaker), GCP (BigQuery, Vertex AI), or Azure (Synapse, ML Studio).
Experience with monitoring and logging tools (Prometheus, Grafana, ELK Stack).
Nice to Have:
Experience working in Retail, Finance, Healthcare, or E-commerce domains.
Exposure to A/B testing, recommendation systems, or NLP applications.
Understanding of data privacy regulations (GDPR, CCPA).
About BayOne Solutions
BayOne is a minority-owned Technology and Talent Solutions Partner with a global footprint, headquartered in the San Francisco Bay Area. We excel at bridging talent and technology gaps, building strong teams in Project & Program Management, Cloud Computing, IT Infrastructure, Big Data, Software Engineering, User Experience Design, and more. Our commitment to customer success is matched by our passion for championing diversity in tech. We believe in sustainable practices and are dedicated to making a positive impact on the communities we serve. At BayOne, we’re more than a technology and talent partner—we’re a trusted ally in driving innovation and success. We are passionate about diversity in the tech industry and are dedicated to #MakeTechPurple.
Our team will only email you via @bayone.com or @bayonesolutions.com email domains.