Job Description
Designation: Data Architect
Job Location: New York (Hybrid)
Experience: 9+ years
Position Summary
We are looking for an experienced Data Architect to design and manage scalable, secure, and high-performance data solutions. The candidate will drive enterprise data architecture, define data models, and build end-to-end frameworks for data integration, governance, and analytics across multiple platforms.
Responsibilities:
- Own enterprise data architecture across conceptual, logical, and physical models, including data standards, naming conventions, and best practices.
- Design reusable architecture patterns (CDC ingestion, SCD handling, lakehouse/warehouse layers, medallion architecture) and architect scalable ETL/ELT + ingestion frameworks for end-to-end integration and transformations.
- Build and optimize warehouse/datamart architectures for analytics and reporting with strong focus on performance tuning (partitioning, clustering, indexing, query optimization).
- Ensure data governance, quality, lineage, metadata management, and secure access controls (RBAC, encryption, masking).
- Drive architecture reviews, technical decisions, and create/maintain documentation (architecture diagrams, data flows, ERDs, integration specs) ensuring alignment with enterprise standards.
- Collaborate with business stakeholders and with data engineering, cloud, and BI teams to translate requirements into validated solutions.
- This is a client-facing role with end-to-end ownership of data architecture and delivery in a high-stakes, high-impact environment.
Skills & Qualifications:
- 8+ years of experience in data architecture and/or data engineering with strong architecture ownership
- Strong knowledge of data modeling (3NF, Star Schema, Snowflake Schema, Dimensional Modeling)
- Expertise in SQL and database design principles
- Strong experience with data warehousing concepts and best practices
- Hands-on experience with at least one modern cloud data platform (Warehouse/Lakehouse): BigQuery / Databricks / Snowflake / Redshift
- Experience with ETL/ELT tools and orchestration frameworks
- Knowledge of data governance tools (Collibra, Alation, Purview) is a plus
- Exposure to API-based ingestion and streaming (Kafka / PubSub) is a plus
- Strong communication and stakeholder management skills
Education:
- Bachelor's/Master's degree in Computer Science, IT, or Engineering, or equivalent experience