Top Benefits
Work from home opportunities
Health & Wellness
Financial Benefits
About the role
Who you are
- At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we’re looking for lifelong learners and people who can make us better with their unique experiences
- Engineering Experience: 3+ years of software engineering experience with a strong focus on backend systems, distributed computing, or cloud infrastructure (AWS/GCP/Azure)
- ML Infrastructure: Experience building and scaling ML platforms using technologies like Kubernetes, Ray, Kubeflow, or similar orchestration tools
- Production Mindset: You treat infrastructure as code. You have a track record of implementing CI/CD pipelines, automated testing, and observability for Machine Learning workloads
- GenAI Fluency: Familiarity with the modern LLM stack (e.g., LangChain, vector databases, RAG patterns) and an interest in emerging standards like Model Context Protocol (MCP) and Agentic workflows
- Language Proficiency: Expert-level proficiency in Python and familiarity with Go, Java, or C++
- Collaborative Spirit: Experience working directly with Data Scientists or Applied AI teams to translate research needs into robust production systems
- Experience with Java and/or Java EE web applications
- Familiarity with authentication protocols, access management, or cloud security best practices
- Exposure to prompt engineering, LLM ecosystems (Hugging Face, LangChain, vector databases), and identity and access management or security focused ML applications
What the job involves
- The Intelligence Accelerator team in the Data Platform Group is the engine behind Okta’s AI/ML evolution. We are responsible for building the foundational AI/ML services and systems that fast-track AI/ML for Okta and deliver differentiated value to our users
- Working hand-in-glove with the Data Platform team, Data Scientists, Product Managers, and SREs, you will bridge the gap between data, research, and platform offerings, playing a critical role in enabling the technical adoption of ML and Generative AI across the company
- We are assembling an elite unit designed to be fast, creative, and flexible. We practice extreme ownership. We expect great things from our engineers and reward them with stimulating challenges, novel technologies, and the chance to hold equity. Join the Intelligence Accelerator and help us change the cloud security landscape forever
- As a Machine Learning Engineer within the Intelligence Accelerator team, you will contribute to the development of our next-generation AI/ML platform. You will work with the team to build the "paved road" that empowers Okta’s Applied AI teams to rapidly build and deploy intelligent features that leverage classical models, LLMs, and autonomous agents
- You will join a group that prioritizes engineering rigor—designing for scale, rigorous code reviews, automated testing, and CI/CD for ML (MLOps). In this role, you will:
- Contribute to the Foundation: Help design and maintain scalable infrastructure for core machine learning lifecycles, including distributed training clusters, real-time inference serving, and high-performance feature stores
- Implement Agentic Infrastructure: Work on the bleeding-edge stack for Agentic AI, helping to implement standards like the Model Context Protocol (MCP) to securely connect LLMs with internal tools, data, and APIs
- Enable Applied AI: Build internal developer tooling that allows our product engineering teams to leverage Generative AI workflows, RAG pipelines, and vector databases without worrying about the underlying infrastructure
- This is your opportunity to build the systems that power Okta’s future, working with emerging tech stacks in a fast, flexible, and elite environment
- Architect & Deploy ML Infrastructure: Design, build, and maintain robust, scalable ML infrastructure using modern tools (Airflow, MLflow, feature stores, vector databases) while implementing automated CI/CD workflows for model training, validation, and deployment
- Drive Innovation & Security: Evaluate and adopt new ML technologies while ensuring data security, privacy, and compliance within the ML infrastructure
- Build Scalable Data Pipelines: Design and maintain pipelines to ingest, process, and transform data from various sources and data warehouses, ensuring the quality, security, and compliance of data feeding into ML models and offline model artifact deployment
- Optimize & Monitor Systems: Deploy and monitor ML systems alongside data scientists and engineers, troubleshoot infrastructure issues, and optimize system performance
- Collaborate & Evangelize: Work with cross-functional teams to align AI/ML platform strategies with company objectives, acting as an AI/ML evangelist and providing mentorship to analysts and stakeholders
Benefits
- Work from home opportunities
- Health + Wellness
- Financial Benefits
- Pay + Incentives
- Time Off
- Everyday Living
- Resources