About the role
HM Note: This hybrid contract role is five (5) days in office. Candidates' resumes must include first and last name.
Description
Responsibilities
- Designing and developing data pipelines from source to end user
- Optimizing data pipelines
General Skills
- Experience with Cloud data platforms, data management and data exchange tools and technologies
- Experience with commercial and open-source data and database development and management, specializing in data storage and in setting up and managing cloud Data as a Service (DaaS), application Database as a Service (DBaaS), Data Warehouse as a Service (DWaaS), and other storage platforms (both in the cloud and on-premises).
- Experience with data pipeline and workflow development, orchestration, deployment, and automation, specializing in programming pipelines to create and manage the flow and movement of data.
- Familiarity and hands-on experience with different programming languages, with the ability to integrate with many different platforms to create data pipelines, automate tasks, and write scripts.
- Experience in DataOps principles, best practices, and implementation, as well as Agile project development and deployment
- Experience in Continuous Integration/Continuous Delivery/Deployment (CI/CD) and data provisioning automation
- Experience with digital products, data analysis, data exchange, data provisioning, and data security
- Extensive expert experience in designing, developing, and implementing data conversion and migration of very large data (VLD) from online analytical processing (OLAP) and online transaction processing (OLTP) environments to cloud Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) environments.
- Experience in the design, development, and implementation of fact/dimension models, data mapping, data warehouses, data lakes, and data lakehouses for the enterprise
- Experience managing cloud data services for project delivery, including storage, repositories, data lakes, data lakehouses, key vaults, virtual machines, disks, etc.
- Experience with structured, semi-structured, and unstructured data collection, ingestion, provisioning, and exchange in the development and operational support of enterprise data warehouse, data lake, and data lakehouse solutions.
- Experience with DataOps performance monitoring and tuning
- Excellent analytical, problem-solving and decision-making skills; verbal and written communication skills; presentation skills; interpersonal and negotiation skills
- A team player with a track record for meeting deadlines
Skills
Experience and Skill Set Requirements
Responsibilities:
- Designing and developing data pipelines from source to end user
- Optimizing data pipelines
- Review business requirements, become familiar with and understand the business rules and the transactional source data model
- Review performance of Extract Load Transform (ELT) pipelines with developers and suggest improvements
- Create end-to-end integration tests for data pipelines and improve pipelines accordingly (see the sketch after this list)
- Translate requirements into clear design specifications, and present solutions for team review, incorporating feedback and direction from team lead and team members in a collaborative development environment.
- Conduct Knowledge Transfer and training sessions, ensuring staff receive the required knowledge to support and improve upon the system. Develop learning activities using a review-watch-do methodology and demonstrate the ability to prepare and present.
- Develop documentation and materials as part of a review and knowledge transfer to other team members
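For illustration only, here is a minimal sketch of the kind of end-to-end integration test described above, assuming pytest and PySpark; the run_pipeline() entry point, module, and column names are invented and would differ on a real project.

import pytest
from pyspark.sql import SparkSession

# Hypothetical pipeline entry point; a real project would expose its own.
from my_project.pipeline import run_pipeline

@pytest.fixture(scope="session")
def spark():
    # Local Spark session so the test can run without a cluster.
    return SparkSession.builder.master("local[2]").appName("pipeline-it").getOrCreate()

def test_pipeline_end_to_end(spark, tmp_path):
    # Arrange: write a small, known input dataset to a temporary source location.
    source_path = str(tmp_path / "source")
    target_path = str(tmp_path / "target")
    spark.createDataFrame(
        [(1, "2024-01-01", 10.0), (2, "2024-01-02", 20.0)],
        ["order_id", "order_date", "amount"],
    ).write.parquet(source_path)

    # Act: run the pipeline from source to target.
    run_pipeline(spark, source_path=source_path, target_path=target_path)

    # Assert: the output exists, the row count matches, and amounts are preserved.
    result = spark.read.parquet(target_path)
    assert result.count() == 2
    assert result.agg({"amount": "sum"}).collect()[0][0] == 30.0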
General Skills:
Technical Experience (50%)
- Proficiency in SQL and Python, with hands-on experience using Databricks and Spark SQL for data modeling and transformation tasks; an illustrative sketch follows this list.
- Demonstrated fluency in Python, with knowledge of its best practices, coding conventions, and application in building robust, scalable data pipelines.
- Experience with data pipeline and workflow development, orchestration, deployment, and automation, specializing in programming pipelines to create and manage the flow and movement of data.
- Experience with different programming languages and the ability to integrate with many different platforms to create data pipelines, automate tasks, and write scripts.
- Experience working with open file formats and optimizing pipelines with storage formats such as Parquet, Delta, and Iceberg.
- Strong understanding of data quality principles, with the ability to design and implement automated data quality checks using tools such as Python and SQL and/or via frameworks such as Great Expectations or Soda, ensuring data integrity across pipelines and models.
- Experience with performance monitoring and tuning for data pipelines and data stores.
- Experience in the design, development, and implementation of fact/dimension models, data mapping, data warehouses, data lakes, and data lakehouses for the enterprise.
- Experience with structured, semi-structured, and unstructured data collection, ingestion, provisioning, and exchange in the development and operational support of enterprise data warehouse, data lake, and data lakehouse solutions.
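As a rough, hypothetical sketch of the Databricks/Spark SQL modeling, data quality, and open-format work listed above (the raw.orders source table, column names, and target table are assumptions; on Databricks the SparkSession is provided as spark):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Spark SQL transformation: derive a simple daily sales fact from a raw orders table.
daily_sales = spark.sql("""
    SELECT order_date,
           customer_id,
           SUM(amount) AS total_amount,
           COUNT(*)    AS order_count
    FROM raw.orders
    GROUP BY order_date, customer_id
""")

# Lightweight data quality checks in plain Python/SQL before publishing;
# frameworks such as Great Expectations or Soda formalize the same idea.
null_keys = daily_sales.filter(F.col("customer_id").isNull()).count()
negative_totals = daily_sales.filter(F.col("total_amount") < 0).count()
if null_keys or negative_totals:
    raise ValueError(f"Data quality failure: {null_keys} null keys, {negative_totals} negative totals")

# Persist in an open storage format (Delta here; Parquet or Iceberg follow the same pattern).
(daily_sales.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.fact_daily_sales"))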
Cloud Knowledge and Experience (25%)
- Experience with Cloud data platforms, data management and data exchange tools and technologies.
- Experience with commercial and open-source data and database development and management, specializing in data storage and in setting up and managing cloud Data as a Service (DaaS), application Database as a Service (DBaaS), Data Warehouse as a Service (DWaaS), and other storage platforms (both in the cloud and on-premises).
- Experience managing cloud data services for project delivery, including storage, repositories, data lakes, data lakehouses, key vaults, virtual machines, disks, etc.
Agile Product Development (25%)
- Experience working in an agile, sprint-based development environment
- Understanding and working knowledge of iterative product development cycles (Discovery, Agile, Beta, Live)
- Experience collaborating and sharing tasks with multiple developers on complex data product deliveries
- Experience contributing to version-controlled, shared codebases using git (Azure DevOps, GitHub, Bitbucket) and participating in pull request code reviews.
- Experience in Continuous Integration/Continuous Delivery/Deployment (CI/CD) and data provisioning automation.
- Demonstrated experience in defining and executing tests across the development lifecycle (unit testing, system testing, user acceptance testing) and using results to refine database/store design.
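A minimal, hypothetical illustration of the unit-testing practice referenced in the last point, exercising a pure transformation function in isolation (the function and fields are assumptions, not part of this role's codebase):

def standardize_amount(record: dict) -> dict:
    # Pure transformation: strip currency formatting and cast the amount to float.
    raw = str(record["amount"]).replace("$", "").replace(",", "").strip()
    return {**record, "amount": float(raw)}

def test_standardize_amount():
    # Unit test run with pytest; failures here surface design issues before system testing.
    assert standardize_amount({"id": 1, "amount": "$1,200.50"})["amount"] == 1200.50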
Must Have:
- Demonstrated fluency in Python, with knowledge of its best practices, coding conventions, and application in building robust, scalable data pipelines.
- Experience with data pipeline and workflow development, orchestration, deployment, and automation, specializing in programming pipelines to create and manage the flow and movement of data.
- Experience with Cloud data platforms, data management and data exchange tools and technologies.
- Experience with commercial and open-source data and database development and management, specializing in data storage and in setting up and managing cloud Data as a Service (DaaS), application Database as a Service (DBaaS), Data Warehouse as a Service (DWaaS), and other storage platforms (both in the cloud and on-premises).
- Understanding and working knowledge of iterative product development cycles (Discovery, Agile, Beta, Live)
- Experience contributing to version-controlled, shared codebases using git (Azure DevOps, GitHub, Bitbucket) and participating in pull request code reviews.
- Experience in Continuous Integration/Continuous Delivery/Deployment (CI/CD) and data provisioning automation.
- Proficiency in SQL and Python, with hands-on experience using Databricks and Spark SQL for data modeling and transformation tasks.
About Foilcon
At Foilcon, we are focused on delivering results for our clients and on being their go-to partner for technical services, application development, integration, and training. This leads us to our goals of being a great partner and being the good guys.
With our global resources, we bring the rest of the world within reach to our customers.
Our nimble, experienced team moves from ideas to execution rapidly.
Our motto: There is always a way.