AWS Data Engineer
Job Summary
The AWS Data Engineer is responsible for designing, building, and optimizing scalable data pipelines and cloud-native data solutions on Amazon Web Services (AWS). This role requires strong expertise in AWS data services, ETL/ELT development, SQL, Python, and data modeling. You will collaborate with cross-functional teams to deliver high-quality, reliable, and secure data solutions that support analytics, reporting, and data-driven decision-making.
Key Responsibilities
Data Engineering & Pipeline Development
- Design, develop, and maintain scalable ETL/ELT pipelines using AWS services such as Glue, Lambda, EMR, Step Functions, and Data Pipeline (a minimal Glue job sketch follows this list).
- Build and optimize data ingestion frameworks for structured and unstructured data.
- Implement data transformation logic using Python, PySpark, and SQL.
- Develop and maintain conceptual, logical, and physical data models to support analytics and reporting.
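To give a flavor of this pipeline work, here is a minimal sketch of an AWS Glue PySpark job that reads raw CSV from S3, applies light cleansing, and writes partitioned Parquet. This is illustrative only; the bucket names, paths, and column names are hypothetical.

    # Minimal AWS Glue (PySpark) ETL sketch -- illustrative only.
    # Bucket names, paths, and columns below are hypothetical.
    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from pyspark.sql import functions as F

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    spark = glue_context.spark_session
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Ingest raw CSV dropped into the landing bucket.
    raw = spark.read.option("header", "true").csv("s3://example-landing/orders/")

    # Light cleansing: deduplicate, type the timestamp, derive a partition key.
    clean = (
        raw.dropDuplicates(["order_id"])
           .filter(F.col("order_id").isNotNull())
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Write query-friendly Parquet, partitioned for Athena or Redshift Spectrum.
    clean.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-curated/orders/"
    )

    job.commit()

In practice, a job like this would typically be orchestrated through Step Functions or Glue workflows and triggered when new data lands in S3.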
Cloud Architecture & AWS Services
- Work with AWS services including S3, Redshift, Glue Catalog, Athena, DynamoDB, Kinesis, and IAM (see the Athena example after this list).
- Optimize data storage, retrieval, and compute performance across AWS environments.
- Implement best practices for scalability, cost optimization, security, and reliability.
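As one concrete example of working with these services, here is a sketch of running an Athena query over curated data through boto3. The database, query, and result-bucket names are assumptions, not part of any specific project.

    # Sketch: run an Athena query with boto3 and poll for completion.
    # Database, table, and result bucket names are hypothetical.
    import time

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    query = athena.start_query_execution(
        QueryString="SELECT order_date, COUNT(*) AS orders "
                    "FROM orders GROUP BY order_date ORDER BY order_date",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = query["QueryExecutionId"]

    # Athena is asynchronous: poll until the query reaches a terminal state.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        result = athena.get_query_results(QueryExecutionId=query_id)
        for row in result["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])

Athena also writes full result sets to the configured S3 output location, which is usually the integration point for downstream reporting jobs.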
Data Mapping & Governance
- Perform Source-to-Target data mapping to ensure accurate data lineage, traceability, and alignment with business requirements.
- Execute data validation, profiling, and quality checks to ensure accuracy and completeness (a validation sketch follows this list).
- Support data governance initiatives through metadata management, standards enforcement, and documentation.
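To make the mapping and validation work concrete, here is a minimal PySpark sketch that applies a Source-to-Target column mapping and runs basic completeness checks. The mapping, paths, and datasets are hypothetical.

    # Sketch: apply a Source-to-Target mapping and basic quality checks.
    # The mapping, paths, and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("s2t-validation").getOrCreate()

    # Source column -> target column, as defined in the mapping spec.
    MAPPING = {"cust_id": "customer_id", "fname": "first_name", "lname": "last_name"}

    source = spark.read.parquet("s3://example-landing/customers/")
    target = source.select([F.col(src).alias(tgt) for src, tgt in MAPPING.items()])

    # Completeness: row counts must match after a pure column mapping.
    assert source.count() == target.count(), "row count drift between source and target"

    # Profile null rates on the mapped columns.
    total = target.count()
    for column in target.columns:
        nulls = target.filter(F.col(column).isNull()).count()
        print(f"{column}: {nulls / max(total, 1):.2%} null")

    target.write.mode("overwrite").parquet("s3://example-curated/customers/")

In production, checks like these usually live in a dedicated framework (tools such as Glue Data Quality or Deequ are common choices) rather than in ad hoc scripts.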
Collaboration & Stakeholder Engagement
- Collaborate with data analysts, data scientists, architects, and business stakeholders to understand data needs.
- Translate business requirements into technical specifications and scalable data solutions.
- Prepare clear documentation and presentations to communicate technical concepts and project status.
Continuous Improvement & Innovation
- Identify opportunities to automate data engineering workflows and improve operational efficiency.
- Stay current with AWS innovations, big data technologies, and industry best practices.
- Contribute to architectural discussions and long-term data platform strategy.
Required Qualifications
- Strong experience with AWS data services such as Glue, Redshift, S3, Lambda, EMR, Athena, and Kinesis.
- Proficiency in Python, SQL, and PySpark.
- Hands-on experience with ETL/ELT development and orchestration.
- Strong understanding of data warehousing concepts and dimensional data modeling.
- Experience with version control (Git) and CI/CD tooling such as AWS CodePipeline and AWS CodeBuild.
- Knowledge of data governance, metadata management, and Source-to-Target mapping.
- Experience working in Agile/Scrum environments.
Preferred Qualifications
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Exposure to streaming technologies such as Apache Kafka or Amazon Kinesis.
- Knowledge of container technologies and orchestration services such as Docker, Amazon ECS, or Amazon EKS.
- Experience with BI and visualization tools such as Power BI or Amazon QuickSight.
Certifications (Recommended)
- AWS Certified Data Engineer – Associate
- AWS Certified Solutions Architect – Associate
- AWS Certified Developer – Associate
About Cognizant
Cognizant (Nasdaq-100: CTSH) engineers modern businesses. We help our clients modernize technology, reimagine processes and transform experiences so they can stay ahead in our fast-changing world. Together, we’re improving everyday life. See how at www.cognizant.com or @cognizant.