Staff Data Engineer
$130,000 - $180,000 a year
Full-time
- Architect and optimize data pipelines for performance and cost-efficiency within the Databricks environment.
- Design and implement ETL processes using Databricks jobs to process and transform raw data into a usable format for analysis.
- 4+ years of experience working with Spark (RDDs / DataFrames / Dataset API) using Scala/Python to build and maintain complex ETL pipelines.
- Deep understanding of Apache Spark, Delta Lake, and related big data technologies; proficiency in modern programming languages such as Python and Scala.
- 3+ years of experience working with AWS (Kinesis / Kafka / S3 / Redshift) or Azure.