Senior Data Engineer III

Company:  Chewy
Location: Boston
Closing Date: 08/11/2024
Salary: £125 - £150 Per Annum
Hours: Full Time
Type: Permanent
Job Requirements / Description

Our Opportunity:

Chewy is seeking a Senior Data Engineer III in Boston or Minneapolis. This person will be part of the Customer Interaction Datastore team, responsible for building a customer data platform in support of critical Enterprise Priorities. The ideal candidate will have an interest in building data pipelines, an eye for data quality, and a curiosity about, or experience with, building APIs. Additionally, the candidate will bring a strong customer-first mindset, embody a curious, think-big approach to their work to help drive innovation within the team, and be an engaged and respectful team member.

What You’ll Do:

  • Develop and maintain complex data ingestion pipelines and transformations for data originating from multiple data sources (structured/unstructured)
  • Assist in crafting proof of concepts and advise, consult, mentor, and coach other data engineering and analytics professionals on data standards.
  • Catalog and document data sources
  • Monitor data pipelines for accuracy, missing data, enhancements, changes, and billing volumes to ensure all data is assembled and processed accurately and on time
  • Build containerized applications with microservices architecture
  • Reconcile data issues and alerts between various systems, finding opportunities to innovate and drive improvements.
  • Work with multi-functional partners in defining and documenting requirements for building high-quality and impactful data products.
  • Create operational reports using visualization/business intelligence tools.

What You’ll Need:

  • 8+ years of proven experience in Data Engineering or Business Analytics roles working with ETL, Data Modeling, and Data Architecture, developing modern data pipelines and applications
  • Expertise designing and implementing enterprise data pipelines using data engineering approaches and tools including, but not limited to: Spark, PySpark, Scala, Docker, Databricks, Glue, cloud-native EDWs (Snowflake, Redshift), Kafka, and Athena
  • Strong dimensional data modeling (Star, Snowflake) and ER modeling skills
  • Proficiency building and maintaining infrastructure-as-code, preferably with Terraform and the AWS ecosystem
  • Proficiency in Java, Python, SQL
  • Experience with writing and reviewing version-controlled code (GitHub)
  • Experience effectively presenting insights and summarizing complex data for diverse audiences through visualizations
  • Self-starter with the ability to take initiative and drive projects forward independently
  • Experience working with, and delivering to, partners from multiple parts of the company

Bonus:

  • Some experience with API development
  • Some experience with technologies like GraphQL, graph databases
