Sr. Data Engineer - Data Activation & Sharing

Company:  Dice
Location: Seattle
Closing Date: 01/11/2024
Salary: $87.00 - $95.40 Per Hour
Hours: Full Time
Type: Contract
Job Requirements / Description

Dice is the leading career destination for tech experts at every stage of their careers. Our client, INSPYR Solutions, is seeking the following. Apply via Dice today!

Title: Sr. Data Engineer - Data Activation & Sharing

Location: Seattle, WA (Hybrid 2-3 days a week)

Duration: 12+ month contract

Compensation: $87.00-$95.40/hr

Work Requirements: Must be authorized to work in the U.S.

We are looking for a Sr. Data Engineer to join the Data Activation team, someone who can operate across the enterprise to ensure on-time, high-quality delivery of products and features that directly drive our business. The Data Activation team, within the Product & Data Engineering business unit, is responsible for sharing data with internal customers and external third-party partners in a secure and reliable fashion, ensuring that the chain of custody of our sensitive data is maintained at every step of the process.

Responsibilities:

  • Interfacing with key stakeholders to understand the business requirements
  • Designing and developing the necessary data models, ETLs, reports, etc. per requirements
  • Working with big data technologies such as Spark and cloud database technologies such as Snowflake
  • Writing test scripts and automating where possible (using tools such as, but not limited to, PySpark, bash scripting, and Python)
  • Participating in code reviews
  • Ensuring code is checked in per company guidelines
  • Participating in weekly scrum meetings and daily stand-up meetings
  • Contributing to keeping burn-down charts accurate
  • Ensuring QA sign-off is obtained and fixing any bugs discovered
  • Supporting the product sign-off process and fixing any issues or bugs discovered
  • Providing post go-live support to address any P1 issues

Preferred Qualifications:

  • 5+ years of data engineering experience developing large data pipelines
  • Strong SQL skills and ability to create queries to extract data and build performant datasets
  • Hands-on experience with distributed systems such as Spark (via Databricks using Scala or Python) to query and process data.
  • Experience with at least one major MPP platform (e.g., Elastic MapReduce) or cloud database technology (Snowflake, Redshift, BigQuery); Snowflake experience is strongly preferred
  • Experience with AWS Cloud technologies (S3 at a minimum)
  • Solid experience with data integration toolsets (e.g., Airflow)

Required Education:

  • BS in a STEM field plus 5+ years of relevant experience