Lead Data Engineer
Alpharetta, GA | March 24th, 2026
Please note this role is not able to offer visa transfer or sponsorship now or in the future.

About the role

As a Lead Data Engineer, you will make an impact by designing, building, and operating scalable, cloud-native data platforms that support batch and streaming use cases, with a strong focus on governance, performance, and reliability. You will be a valued member of the Data Engineering team and will work collaboratively with cross-functional engineering, cloud, and architecture stakeholders.

In this role, you will:

- Design, build, and operate scalable, cloud-native data platforms supporting batch and streaming workloads with strong governance, performance, and reliability.
- Develop and operate data systems on AWS, Azure, and GCP, designing cloud-native, scalable, and cost-efficient data solutions.
- Build modern data architectures, including data lakes, data lakehouses, and data hubs, drawing on a strong understanding of ingestion patterns, data governance, data modeling, observability, and platform best practices.
- Develop data ingestion and collection pipelines using Kafka and AWS Glue; work with modern storage formats such as Apache Iceberg and Parquet.
- Design and develop real-time streaming pipelines using Kafka, Flink, or similar streaming frameworks, with an understanding of event-driven architectures and low-latency data processing.
- Perform data transformation and modeling using SQL-based frameworks and orchestration tools such as dbt, AWS Glue, and Airflow, including Slowly Changing Dimensions (SCD) and schema evolution.
- Use Apache Spark extensively for large-scale data transformations across batch and streaming workloads.

Work model

We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role's business requirements, this is a hybrid position requiring 4 days a week in a client or Cognizant office in Atlanta, GA.
Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.

The working arrangements for this role are accurate as of the date of posting. This may change based on the project you're engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.

What you need to have to be considered

- Hands-on experience developing and operating data systems on AWS, Azure, and GCP.
- Proven ability to design cloud-native, scalable, and cost-efficient data solutions.
- Experience building data lakes, data lakehouses, and data hubs, with a strong understanding of ingestion patterns, governance, modeling, observability, and platform best practices.
- Expertise in data ingestion and collection using Kafka and AWS Glue, with experience in Apache Iceberg and Parquet.
- Strong experience designing and developing real-time streaming pipelines using Kafka, Flink, or similar streaming frameworks.
- Deep expertise in data transformation and modeling using SQL-based frameworks and orchestration tools, including dbt, AWS Glue, and Airflow, with knowledge of SCD and schema evolution.
- Extensive experience using Apache Spark for large-scale batch and streaming data transformations.

These will help you stand out

- Experience with event-driven architectures and low-latency data processing.
- Strong understanding of schema evolution, SCD modeling, and modern data modeling concepts.
- Experience with Apache Iceberg, Parquet, and modern ingestion/storage patterns.
- Strong knowledge of observability, governance, and platform best practices.
- Ability to partner effectively with cloud, architecture, and engineering teams.

Salary and Other Compensation:

Applications will be accepted until March 17, 2025.

The annual salary for this position is between $81,000 and $135,000, depending on experience and other qualifications of the successful candidate.

This position is also eligible for Cognizant's discretionary annual incentive program, based on performance and subject to the terms of Cognizant's applicable plans.

Benefits: Cognizant offers the following benefits for this position, subject to applicable eligibility requirements:

- Medical/Dental/Vision/Life Insurance
- Paid holidays plus Paid Time Off
- 401(k) plan and contributions
- Long-term/Short-term Disability
- Paid Parental Leave
- Employee Stock Purchase Plan

Disclaimer: The salary, other compensation, and benefits information is accurate as of the date of this posting. Cognizant reserves the right to modify this information at any time, subject to applicable law.