Must be proficient in SQL and understand different data modeling techniques using Big Data frameworks such as Presto, Snowflake, or Databricks. The FreeWheel Data Processing and Data Warehouse team is responsible for creating and managing the frameworks for large-scale ETL using modern tools such as DBT and Spark.
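As a hypothetical illustration of the SQL-centric data modeling such roles describe (the tables and columns below are invented, and stdlib sqlite3 stands in for a real warehouse such as Snowflake or Presto), a dbt-style staging-to-mart transform might look like:

```python
import sqlite3

# Toy warehouse: raw events land in a staging table, then a dbt-style
# SQL transform materializes a daily aggregate "mart" table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_events (user_id INT, event_date TEXT, revenue REAL);
    INSERT INTO stg_events VALUES
        (1, '2024-01-01', 9.99),
        (2, '2024-01-01', 4.50),
        (1, '2024-01-02', 19.99);

    -- The "model": aggregate staging rows into a reporting table.
    CREATE TABLE mart_daily_revenue AS
    SELECT event_date,
           COUNT(DISTINCT user_id) AS active_users,
           ROUND(SUM(revenue), 2)  AS total_revenue
    FROM stg_events
    GROUP BY event_date;
""")

rows = conn.execute(
    "SELECT * FROM mart_daily_revenue ORDER BY event_date"
).fetchall()
print(rows)  # [('2024-01-01', 2, 14.49), ('2024-01-02', 1, 19.99)]
```

In dbt this transform would live in a `.sql` model file and be materialized by `dbt run`; the staging/mart layering is the same.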
Strong experience with cloud-based data warehouses, e.g., Snowflake, BigQuery, Synapse, Redshift, etc. Identify data needs for business and product teams, understand their specific requirements for metrics and analysis, and build efficient and scalable data pipelines to enable data-driven decisions across DICK'S.
$83,000 - $138,200 a year, Full-time, Remote
Cloud-based Big Data/MPP analytics platforms like Snowflake, AWS Redshift, Google BigQuery, Azure Data Warehouse. Workflow management engines like Dagster, Airflow, Google Cloud Composer, AWS Step Functions, Azure Data Factory or similar.
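All of the workflow engines listed above express a job as a dependency DAG of tasks. As an engine-agnostic sketch (the task names are invented; a real Airflow or Dagster pipeline would declare these as operators/ops rather than dict entries), the core scheduling idea can be shown with Python's stdlib graphlib:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on -- the same
# dependency declaration Airflow, Dagster, or Step Functions express.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"transform"},
    "publish": {"quality_check", "load"},
}

# static_order() yields a valid execution order: every task appears
# only after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

A real engine adds scheduling, retries, and parallel execution of independent tasks (here, `quality_check` and `load`), but the dependency model is the same.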
Design, development, implementation, operations, and support of the Big Data real-time streaming pipelines in a secure and performant manner using Apache Flink, Apache Spark and Apache Kafka.
Experience with pipeline orchestration software (e.g., Airflow, Dagster, Prefect) and data streaming technologies (e.g., Airbyte, Datastream, Artie, Fivetran, or your own flavor), especially in a cloud data environment such as AWS, Azure, or GCP, is essential for this role.
8+ years of experience architecting data warehouses such as BigQuery, Redshift, or Databricks. Develop ETL batch data pipelines with BigQuery or similar and orchestrate them with modern orchestration tools (Airflow, Puppet).
Extensive experience working with various data technologies and tools such as Airflow, Snowflake, Meltano, Fivetran, DBT, AWS, and Looker. By establishing HackerOne's one source of truth and developing data products and solutions on this foundation, we help our Hackers, Customers, and Hackeronies gain insights and make better decisions, thereby empowering the world to build a safer internet.
10+ years of experience in data engineering with data warehouse technologies. As a member of our data engineering team, you'll be setting standards for data engineering solutions that have organizational impact.
Create, maintain, and monitor data flows in Spark, Hive, SQL, and Presto for consistency, accuracy, and lag time. Presto serves as a fast parallel data warehouse and data federation layer. Create different consumers for data in Kafka using Spark Streaming for near-real-time aggregation.
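To illustrate the near-real-time aggregation pattern this snippet describes without a running Kafka or Spark cluster, here is a stdlib sketch of a tumbling-window count over a simulated event stream (the window size and event data are invented):

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # tumbling window size (illustrative)

# Simulated Kafka records: (epoch_seconds, page). In Spark Structured
# Streaming these would arrive via readStream.format("kafka").
events = [
    (0, "home"), (15, "search"), (59, "home"),
    (61, "home"), (90, "checkout"),
]

# Assign each event to the start of its tumbling window and count,
# mirroring a groupBy(window(...), "page").count() aggregation.
counts = defaultdict(int)
for ts, page in events:
    window_start = ts - ts % WINDOW_SECONDS
    counts[(window_start, page)] += 1

print(dict(counts))
# {(0, 'home'): 2, (0, 'search'): 1, (60, 'home'): 1, (60, 'checkout'): 1}
```

A streaming engine additionally handles late data, watermarks, and incremental state; the windowed grouping itself is the same idea.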
Hands-on experience with big data technologies such as Hadoop, Spark, or similar frameworks. Knowledge of data warehousing concepts and experience with tools like Redshift, BigQuery, or Snowflake.
2-4 years direct experience in Data Engineering with experience in tools such as: Big data tools: Hadoop, Spark, Kafka, etc. Experience with message queuing, stream processing, and highly scalable 'big data' data stores.
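The message-queuing pattern this listing refers to, a producer decoupled from a consumer by a broker such as Kafka, can be sketched in miniature with Python's thread-safe stdlib queue (the broker, topic, and message schema here are all simulated):

```python
import queue
import threading

broker = queue.Queue()  # stands in for a Kafka topic/partition
SENTINEL = None         # signals end-of-stream to the consumer

def producer():
    # Publish five toy order events, then signal completion.
    for i in range(5):
        broker.put({"order_id": i, "amount": 10 * i})
    broker.put(SENTINEL)

consumed = []

def consumer():
    # Pull messages until the sentinel arrives; in a real system this
    # loop would poll the broker and commit offsets.
    while True:
        msg = broker.get()
        if msg is SENTINEL:
            break
        consumed.append(msg["amount"])

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(sum(consumed))  # 100
```

The key property shown is decoupling: the producer never blocks on the consumer's pace, which is what makes brokered pipelines scale.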
In this role, you will help architect and implement our ML strategy and data processing pipelines using Redshift, Apache Airflow, Apache Spark, and AWS SageMaker. Leverage Apache Spark for large-scale data processing and analytics.
Design, develop, and maintain scalable data platforms using AWS, Snowflake, and Databricks. The Data Engineering team is focused on building platforms and capabilities that are utilized across the organization by sales, marketing, engineering, finance, product, and operations.
We process billions of mail messages using cutting-edge algorithms in areas including, but not limited to, natural language processing, GenAI, large language models, machine learning techniques, and big data processing on the order of petabytes, in order to extract information, build mail content and user knowledge, and interconnect different sources to identify, highlight, and amplify what matters.
In-depth understanding of Azure data engineering stacks, such as SQLDB, Azure Functions, Synapse, Data Factory, Data Lake Storage, Purview, and CosmosDB (Databricks and Snowflake will be considered highly advantageous).
Big data jobs, Company: AT&T, in Pittsburgh, PA