About the role
Description
We are seeking a highly skilled Senior Data Architect / Platform Data Engineer to support the design and implementation of a secure, scalable Integrated Data Hub (IDH) leveraging Databricks and the Medallion Architecture. This role will focus on designing security and access controls, data modeling, metadata management, and high-volume data processing across the Bronze, Silver, and Gold layers. Experience with FHIR data standards at the Gold layer is a strong asset.
Skills
Experience and Skill Set Requirements
Must Haves:
- Hands-on experience with Databricks (including Unity Catalog, Delta Lake, Auto Loader, and PySpark)
- Knowledge of Medallion Architecture patterns in Databricks, including designing and supporting data pipelines across the Bronze/Silver/Gold layers
- Experience conducting data profiling to identify structure, completeness, and data quality issues
- Experience in Azure cloud data architecture
- Extensive experience designing and managing ETL pipelines, including Change Data Capture (CDC)
- Experience implementing role-based access control (RBAC)
- Demonstrated ability to lead data platform initiatives from requirements gathering through design, development, and deployment
Technical Knowledge (60%)
- Expert knowledge of data warehouse design methodologies, including Delta Lake and Medallion Architecture, with deep understanding of Delta Lake optimizations.
- Proficient in Azure Data Lake, Delta Lake, Azure DevOps, Git, and API testing tools like Postman.
- Strong proficiency in relational databases with expertise in writing, tuning, and debugging complex SQL queries.
- Experienced in integrating and managing REST APIs for downstream systems like MDM and FHIR services.
- Skilled in designing and optimizing ETL/ELT pipelines in Databricks using PySpark, SQL, and Delta Live Tables, including implementing Change Data Capture (batch and streaming).
- Experienced in metadata-driven ingestion and transformation pipelines with Python and PySpark.
- Familiar with Unity Catalog structure and management, including configuring fine-grained permissions and workspace ACLs for secure data governance.
- Ability to lead logical and physical data modeling across lakehouse layers (Bronze, Silver, Gold) and define business and technical metadata.
- Experienced with Databricks job and all-purpose cluster configuration, optimization, and DevOps practices such as notebook versioning and environment management.
- Proficient in assessing and profiling large volumes of data to ensure data quality and support business rules.
- Able to collaborate effectively with ETL developers and business analysts to translate user stories into technical pipeline logic.
General Skills (40%)
- 5+ years in data engineering, ideally in cloud data lake environments
- Ability to translate business requirements into scalable data architectures, data models, and governance frameworks
- Able to serve as a technical advisor during sprint planning and backlog grooming
- Skilled in conducting data discovery, profiling, and quality assessments to guide architecture and modeling decisions
- Capable of conducting performance diagnostics and root cause analysis across multiple layers (DB, ETL, infrastructure)
- Strong communication skills for working with business stakeholders, developers, and executives
- Passion for mentoring, training, and establishing reusable frameworks and best practices
- Experience with agile practices, including sprints, user stories, and iterative development, in an agile data delivery environment
- Experience assembling and grooming requirements into coherent user stories and use cases, managing and refining Product Backlog Items, and communicating changes to the project manager/Team Lead
- Ability to analyze current and future data needs, data flows, and data governance practices to support enterprise data strategies
- Ability to lead data discovery efforts and participate in the design of data models and data integration solutions
About VLink Inc
VLink Inc. is a global software engineering and tech talent partner, delivering next-gen software development, DevOps & AI-powered solutions, and expert software professionals to businesses across the globe. Founded in 2006, VLink helps organizations—from Fortune 500s to SMBs—accelerate digital transformation, enhance operational efficiency & drive innovation in an ever-evolving tech landscape by leveraging the latest technologies and the best IT talent.
We don’t just build software and apps; we spark conversations! Tune into VLink’s award-winning mini tech podcast, Tech Talk with VLink, for insights from industry leaders and VLink experts on AI, cybersecurity, digital transformation, legacy modernization, and emerging tech trends: https://vlinkinfo.com/podcast/
Why Choose VLink?
- 100% Customer Retention
- 7+ Delivery Locations with Onshore/Offshore/Nearshore IT Talent
- 48-hour turnaround (TAT) on candidate profiles for new hiring requests
- 2 Decades of Software Engineering Excellence
- 250+ global Fortune 500, large enterprise & SMB clients served
Our core competencies and global service offerings include: DIGITAL TRANSFORMATION & CONSULTING (Data & Analytics | DevOps & Automation | Blockchain, AI/ML, IoT | Re-engineering Legacy Apps | Enterprise Portal Upgrades & Migrations)
CUSTOM SOFTWARE DEVELOPMENT & MAINTENANCE (Web & Mobile Apps | Product Engineering | Maintenance & Support | QA & Testing)
DEDICATED PROJECT TEAMS & SCALE-UP (Delivery Centers Onshore, Offshore & Nearshore | Short & Long-term Assignments | 24x7 Time Zone Alignment)
AWARD-WINNING WORKFORCE
- Inc Magazine 500/5000 Fastest Growing Privately Owned Companies
- Named one of Connecticut’s 40 fastest-growing tech companies by the Connecticut Technology Council’s Marcum Tech Top 40
- Consistent Top Five Winner of Hartford Business Journal’s Best Places to Work in CT (2023)
- Great Place to Work® Certified™ 2024 and 2023
OUR IPs
- SimplyEDI
- VLinkCare