Your expertise in ETL, Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks will be essential in ensuring efficient data processing and analysis. A high level of proficiency in ETL processes and demonstrated, hands-on experience with these technologies are required.
Full-time · Updated 7 days ago
Your deep expertise in ETL, Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks will be the cornerstone of architecting a robust data mesh that ensures data is accessible and reliable and provides actionable insights across the organization.
Updated 5 days ago
This architecture includes Databricks, Microsoft Azure platform tools (including Data Lake and Synapse), Apache platform tools (including Hadoop, Hive, Impala, Spark, Sedona, and Airflow), and data pipeline/ETL development tools (including StreamSets, Apache NiFi, and Azure Data Factory).
Updated 7 days ago
Experience in implementing data pipelines for both streaming and batch integrations using tools/frameworks like Glue ETL, Lambda, Google Cloud DataFlow, Azure Data Factory, Spark, Spark Streaming, etc.
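Postings like this one distinguish batch from streaming integration. As a rough, stack-agnostic illustration (the `transform` step and field names are hypothetical), a minimal pure-Python sketch of the two patterns — the micro-batch loop mirrors, in spirit, how frameworks such as Spark Structured Streaming window an unbounded stream:

```python
from typing import Iterable, Iterator, List

def transform(record: dict) -> dict:
    # Hypothetical cleanup step: coerce an "amount" field to a number.
    return {**record, "amount": float(record.get("amount", 0))}

def batch_load(records: Iterable[dict]) -> List[dict]:
    """Batch integration: collect the full dataset, transform it once, load once."""
    return [transform(r) for r in records]

def micro_batches(stream: Iterable[dict], size: int) -> Iterator[List[dict]]:
    """Streaming integration: process records in small windows as they arrive."""
    buf: List[dict] = []
    for record in stream:
        buf.append(transform(record))
        if len(buf) == size:
            yield buf  # hand a completed micro-batch to the loader
            buf = []
    if buf:
        yield buf  # flush the final partial window
```

The batch path optimizes for throughput over a bounded dataset; the micro-batch path trades some throughput for bounded latency on unbounded input.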
Full-time · Updated 7 days ago
Advanced knowledge of Extract/Transform/Load (ETL) or Extract/Load/Transform (ELT) tools, including both batch and real-time data transmission applications such as SSIS, Informatica, Kafka, Spark, MuleSoft, or equivalent software.
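The ETL/ELT distinction in this posting comes down to where the transform runs. A minimal sketch using SQLite as a stand-in for a warehouse (table and column names are hypothetical): ETL converts rows in application code before loading, while ELT loads raw data and transforms it with SQL inside the store.

```python
import sqlite3

ROWS = [("a", "10"), ("b", "20")]  # hypothetical extracted source rows

def etl_load(rows):
    """ETL: transform in the application, then load the clean result."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (key TEXT, amount INTEGER)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [(k, int(v)) for k, v in rows])  # transform before load
    return conn

def elt_load(rows):
    """ELT: load raw data first, then transform with SQL inside the store."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_sales (key TEXT, amount TEXT)")
    conn.executemany("INSERT INTO raw_sales VALUES (?, ?)", rows)  # load raw
    conn.execute("CREATE TABLE sales AS "
                 "SELECT key, CAST(amount AS INTEGER) AS amount "
                 "FROM raw_sales")  # transform in SQL
    return conn
```

Both paths end with the same `sales` table; ELT is often preferred when the target engine (Synapse, Databricks SQL, etc.) can transform at scale more cheaply than the application tier.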
Updated 11 days ago
You will support the National Geospatial-Intelligence Agency (NGA) to process ETL on new and varied data sets, and work with Big Data at scale, using Spark/PySpark to handle petabytes of data.
Full-time · Updated 5 days ago
Your expertise in ETL, Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks will be essential in ensuring efficient data processing and analysis. Job Responsibilities: Design, develop, and implement end-to-end data pipelines, utilizing ETL processes and technologies such as Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks.
Updated 5 days ago
Design, develop, and implement end-to-end data pipelines, utilizing ETL processes and technologies such as Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks. Knowledge of Qlik/Qlik Sense, QVD/QlikView, and Qlik Production Application Standards (QPAS) is a significant plus.
Full-time · Remote · Updated 9 days ago
Familiarity with Databricks and Spark for big data processing and analytics. Your proficiency in Azure, PowerBI, DevSecOps practices, Python scripting, ETL, SQL, and related technologies will be instrumental in architecting robust and scalable cloud infrastructures.
Full-time · Updated 7 days ago
5+ years of experience implementing Apache data products such as Spark, Sedona, Airflow, Atlas, NiFi, Hive, and Impala. 5+ years of experience with ETL, data processing, and analytics using languages such as Python, Java, or R.
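Several of these postings pair ETL experience with an orchestrator such as Airflow. The core idea — tasks run in dependency order — can be sketched with the standard library's `graphlib`; the task names and DAG here are hypothetical, not from any specific posting:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how an Airflow DAG sequences extract/transform/load steps.
DAG = {
    "extract": set(),
    "clean": {"extract"},
    "aggregate": {"clean"},
    "load": {"aggregate"},
    "report": {"load"},
}

def run_order(dag):
    """Return a valid execution order: every task after all its dependencies."""
    return list(TopologicalSorter(dag).static_order())
```

Real orchestrators add scheduling, retries, and parallel execution of independent tasks on top of this ordering, but the dependency-resolution step is essentially a topological sort.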
Updated 8 days ago
Tagged as: Big Data, Google Cloud, Hadoop, Hive, Informatica, SAP MDG/MDM, Spark, Sqoop. Hands-on development experience using open-source big data components such as Hadoop, Hive, Pig, Spark, HBase, HAWQ, Oozie, Mahout, Flume, Kafka, ZooKeeper, Sqoop, etc.
Updated 5 days ago
Location & Travel: The Data Engineer position will be an on-site role based at the Pentagon in Northern VA. Security Clearance: The Data Engineer must be a United States Citizen and hold a current DoD TS/SCI security clearance.
Updated 2 days ago
This role will provide expertise to support the development of a Big Data / Data Lake system architecture that supports enterprise data operations for the District of Columbia government, including the Internet of Things (IoT) / Smart City projects, enterprise data warehouse, the open data portal, and data science applications.
$90 an hour · Updated 12 days ago
Job Title: Big Data Architect IT Consultant Master (ONSITE). The platform will be designed for District-wide use and integration with other client Enterprise Data tools such as Esri, Tableau, MicroStrategy, API Gateways, and Oracle databases and integration tools.
Updated 6 days ago
Knowledge of Qlik/Qlik Sense, QVD/QlikView, and Qlik Production Application Standards (QPAS) is a huge plus. Understanding of data modeling/visualization, database design principles, and data governance practices.
Full-time · Updated 10 days ago
Spark ETL jobs in Alexandria, VA
FEATURED BLOG POSTS
5 Common Interview Mistakes
Every interview process is unique in some form or fashion. Like most, your interview process is crafted so you can get the most information out of your candidates, increase hiring confidence, and make the right hiring decisions. However, there are often small problems in interview processes that can ultimately affect the success of those decisions.
How to Ask Someone to be a Reference + Email Templates
One part of the job-hunting process that frequently gets overlooked is putting together a list of good references. Most of the time we focus on creating the perfect resume, writing an awesome cover letter, and getting our hands on letters of recommendation. We think about what outfit we’ll wear to the job interview, how we’ll answer those tricky questions, and what our career plan looks like. But, in fact, having multiple references lined up who will speak favorably about you to a potential employer is critical to landing a job. This aspect of job searching really can’t be ignored.
Job Rejection Email Response with Examples
Glassdoor estimates that, on average, there are about 250 applicants for every job vacancy out there. If you’ve ever applied for a job, the odds are that you’ve received the dreaded job rejection email.
Structured vs Unstructured Interviews
The goal of an interview is to evaluate candidates based on their skills, personality, and knowledge. You want to choose the BEST candidate from your candidate pool, so the interview is something you can't mess up. As you begin planning your interview process, one of the major decisions you'll face is whether the interview should be a structured vs unstructured interview. So let's take a dive into the differences and sort out which circumstances warrant which interview process.
How to Describe Your Personality with Examples
Imagine you’re in an elevator with the CEO of your dream company and you get to talking. The conversation is going well and you start to imagine yourself working for their company when the CEO turns around and asks you “tell me a bit about yourself.” Would this catch you off guard or would you be able to give a clear and succinct description of who you are?
4 Ways to Make Your Job Posting More Inclusive
According to a Glassdoor survey,
To ATS or not to ATS
As hiring is becoming more analytical and data-driven, companies have found ways to incorporate technology to help hire and recruit more efficiently. ATS, also known as an applicant tracking system, has become one of the most widely adopted technological recruiting tools to date. In fact, according to data from Capterra: