Technical Proficiency: Expertise in SQL and programming languages such as Python; familiarity with big data technologies like Apache Hadoop, Spark, and Kafka. Professional Experience: At least 5 years of experience in data engineering, with a strong focus on data integration and pipeline construction in cloud environments and enterprise data warehouses like Snowflake.
$134,500 - $215,000 | Full-time
Big Data Technologies: Proficient with big data technologies such as Databricks, Hadoop, Spark, and Kafka. Our client, a leading provider of Artificial Intelligence Integration Services and innovative data platforms for government and defense clients, is recruiting a full-time Cyber Security Engineer.
Remote
Experience with big data technologies such as Apache Hadoop, Spark, and Kafka. Leverage AWS cloud services like S3, Kinesis, Redshift, EMR, Glue, and Lambda in combination with a data lakehouse platform and Apache Spark integration for advanced data processing and analytics.
Full-time
Experience with big data tools (Cloudera, Hadoop, Spark, MapReduce, NiFi, Sqoop, YARN). Working knowledge of distributed event streaming platforms (TIBCO, Kafka).
Full-time
Big data technologies such as Apache Hadoop, Apache Spark, or Apache Kafka. Familiarity with the Microsoft Azure cloud platform, including Azure Data Lake Storage, Azure Databricks, Azure Data Factory, and Azure DevOps.
$69 - $78 an hour
Optimize jobs to use Kafka, Hadoop, Presto, Spark, and Kubernetes resources as efficiently as possible. Create consumers for data in Kafka using Spark Streaming for near-real-time aggregation.
Remote
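The Spark Streaming task described in that listing boils down to windowed aggregation over a Kafka event stream. As a minimal, self-contained sketch of that aggregation logic (plain Python standing in for Spark Streaming, with hypothetical event data; a real consumer would read the `(timestamp, key)` pairs from a Kafka topic):

```python
from collections import defaultdict


def tumbling_window_counts(events, window_seconds=60):
    """Count events per (window_start, key) over tumbling time windows.

    `events` is an iterable of (epoch_seconds, key) pairs, mimicking the
    near-real-time aggregation a Spark Streaming Kafka consumer would
    perform at scale.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Assign each event to the start of its tumbling window.
        window_start = ts - (ts % window_seconds)
        counts[(window_start, key)] += 1
    return dict(counts)


# Hypothetical events: window [0, 60) holds two "a" events;
# window [60, 120) holds one "a" and one "b".
result = tumbling_window_counts([(0, "a"), (30, "a"), (61, "a"), (75, "b")])
```

In Spark Structured Streaming the same grouping would be expressed with `groupBy(window(col("timestamp"), "1 minute"), col("key"))` over a `readStream` source of format `"kafka"`.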
At least 4 years of data engineering experience with big data technologies: Apache Spark, PySpark, Hadoop, and Databricks with Delta Lake. Experience with Apache Airflow, Hive, Snowflake, Kafka, and Python.
Full-time
Develop big data and cloud data acquisition, integration, and analytics while adhering to corporate strategies, policies, best practices, IT General Controls (ITGC), and SOX compliance where applicable.
Computer Science graduate, ideally with a specialization in Data Engineering or Machine Learning. Experience with Hadoop-ecosystem technologies (MapReduce, Oozie, Pig, Hive, Spark, Kafka, HBase, Storm).
Full-time | Remote
Programming in R and Python. Data integration and data security on the Hadoop ecosystem. Experience with big data analytics, business intelligence, and industry-standard tools integrated with the Hadoop ecosystem.
1+ years’ experience with distributed big data ecosystems (Hadoop, Spark, Unity Catalog, and Delta Lake). 1+ years’ experience with distributed big data platforms such as Databricks, AWS EMR, and AWS Glue.
Full-time
Experience with big data tools and architectures such as Cloudera Hadoop, HDFS, Hive, and Spark.
YOUR SKILLS AND EXPERTISE: Bachelor’s degree in computer science, data science, or a related field, with five (5) or more years working as a data engineer, ETL developer, and/or data warehouse DBA.
STANDOUT QUALIFICATIONS: Experience with cloud-based data services and solutions (Azure Synapse / Data Lake, AWS Redshift, Snowflake, GCP BigQuery). Experience partnering with Analytics and Data Science teams in building out production-grade GenAI/ML solutions.
Full-time
Experience with some of the following: Kubernetes, Docker, Helm Charts, Hadoop, Cloudera, Hortonworks, Apache Storm, Apache Spark, HBase, YARN, map-reduce, big-data analytics, semantic-web (RDF, OWL), Hive, Elastic Search, MS SQL Server Big Data, and graph-databases.
Full-time
Experience with event-driven big data streaming infrastructure and ETL/ELT frameworks (e.g., Spark Streaming, Flink, Kafka, Hive, Hadoop, Airflow). Use programming languages like Python and SQL, NoSQL databases, infrastructure and container tooling including Terraform, Docker, and Kubernetes, and a variety of Azure tools and services to build an event-driven big data streaming platform for an ELT data pipeline.
Remote
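The distinction that listing draws between ETL and ELT is that in ELT raw data is landed first and transformed inside the store. A minimal sketch of that ordering, with `sqlite3` standing in for a cloud warehouse and a plain list standing in for a Kafka topic (the table names and event fields are hypothetical):

```python
import json
import sqlite3


def run_elt(raw_events):
    """Land raw JSON events first (Load), then transform inside the store."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE raw_events (payload TEXT)")
    # Load: persist events untransformed, exactly as received.
    db.executemany(
        "INSERT INTO raw_events VALUES (?)",
        [(json.dumps(e),) for e in raw_events],
    )
    # Transform: derive an aggregate table from the raw landing zone
    # using the warehouse's own SQL engine.
    db.execute(
        """
        CREATE TABLE user_totals AS
        SELECT json_extract(payload, '$.user') AS user,
               SUM(json_extract(payload, '$.amount')) AS total
        FROM raw_events
        GROUP BY user
        """
    )
    return dict(db.execute("SELECT user, total FROM user_totals"))


totals = run_elt([
    {"user": "u1", "amount": 5},
    {"user": "u1", "amount": 7},
    {"user": "u2", "amount": 3},
])
```

Keeping the raw landing table untouched is the design point: transforms can be re-run or revised later without re-ingesting from the stream.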
Experience with Angular, PySpark, Hadoop, Spark, Hive, Kafka, HBase, Elasticsearch, OpenSearch, Apache, and programming languages such as Java and Python. Exposure to other big data frameworks such as MapReduce, HDFS, Hive/HBase, and Cassandra.
$125,000 - $160,000 | Full-time
Hadoop / Kafka big data integration jobs | Company: Global Relay
FEATURED BLOG POSTS
How to Prepare to Be Fired - What You Need to Do
If you’re reading this, let me be the first to tell you how sorry I am. Getting fired feels crappy, disheartening, hurtful, and all the other bad, sad words. But here’s what I want you to do. First, let yourself fumble for a minute. Then, pick your head up — sometimes getting fired is a blessing in disguise. If you think termination is around the corner, we’ll teach you how to prepare to be fired and what to do next so you land somewhere even better.
How to Find a Job That Makes You Happy - 11 Concerning Facts
Do you ever feel like your life is like one of those rom-com movie scene openers? You know, the ones where the main character rolls out of bed, awakened by a casually upbeat theme song, sulks their way to the coffee pot, and then trudges toward their computer to begin yet another boring day at work?
How to Decline a Job Offer You Already Accepted
When you think about it, turning down a job offer is not the worst position you could be in. If you’ve been lucky enough to consider multiple job offers, well, then you’re lucky enough.
How to Practice Fair Chance Hiring for People With Criminal Records
Usually when you think of your dream hire, you think of someone who is respectful, trustworthy, reliable, and has sound judgment, right? As you envision your ideal candidate with these qualities, the last person you think of is someone with a criminal record.
6 Common Mistakes to Avoid When Employer Branding
Currently, job searchers are putting extra effort into researching employers. The information they find plays a major role in whether they will pursue an opportunity with you or look for jobs elsewhere. That is why it is now more important than ever to be proactive and intentional when showcasing your workforce and workplace culture. Having a well-crafted employer branding strategy can help you influence your potential candidates so they see your business in the best light. But in order to do that, you should be aware of some of the most common mistakes that employers make.
What to Say When Terminating an Employee
Terminating an employee is an inevitable part of doing business. Whether you’re restructuring your department or you’ve identified a few employees who aren’t living up to your expectations, letting people go is necessary for keeping your workforce healthy and thriving.
How to Utilize Keywords for Your Job Ads
Before we give you the scoop on how to utilize keywords in job ads, it would be helpful if we defined what keywords are and why they are important. In simple terms,