Sr. Data Integration Engineer

Company: The Judge Group
Location: Chicago
Closing Date: 08/11/2024
Hours: Full Time
Type: Permanent
Job Requirements / Description

WORK TO BE PERFORMED:

  • Responsible for the design, development, implementation, and maintenance of new and existing Data Integration processes within the Strategic Data Platform and other data solution implementations.
  • Design, develop, maintain, enhance, and monitor Data Integration processes that source data from various structured and semi-structured data sources, transform it according to provided business rules or data mappings, and load it into target databases and services, including DB2, AWS S3, and Apache Iceberg.
  • Serve as a subject matter expert on multiple data integration projects, ensuring consistent design and efficient data processing.
  • Play a leading or supporting role in the Build, Test, and Implementation phases of assigned projects, working with a diverse development team spread across multiple locations.
  • Provide Level 2 and Level 3 technical support to mission-critical applications.
  • Perform process improvement and re-engineering with an understanding of technical data problems and solutions as they relate to the current and future business environment.
  • Analyze complex distributed production deployments and make recommendations to optimize performance.
  • Design and develop innovative data solutions for demanding business situations.
  • Develop and document Data Integration standards and procedures.
  • Design reusable and portable Data Integration components.
  • Transform flat delimited files into multiple formats, including Parquet, JSON, and Protobuf (a short sketch follows this list).
  • Investigate production Data Integration issues and work to determine the root cause of problems.
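
For illustration, one such flat-file transformation could be sketched in Python roughly as follows; the file names, the pipe delimiter, the example "status" column, and the use of pandas with the pyarrow engine are assumptions for the sketch, not requirements of the role.

```python
# Minimal sketch of one such transformation, assuming pandas with the
# pyarrow engine is available. The file names, the pipe delimiter, and
# the example "status" column are illustrative, not part of the posting.
import pandas as pd

# Read the delimited source extract as strings (hypothetical layout).
df = pd.read_csv("accounts_extract.txt", sep="|", dtype=str)

# Apply simple business rules from a data mapping (illustrative only):
# trim whitespace everywhere and standardize an example status column.
df = df.apply(lambda col: col.str.strip())
if "status" in df.columns:
    df["status"] = df["status"].str.upper()

# Write the same records in two of the target formats named above.
df.to_parquet("accounts_extract.parquet", index=False)
df.to_json("accounts_extract.json", orient="records", lines=True)
```

A production process would additionally cover validation, error handling, monitoring, and Protobuf serialization against an agreed schema.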



SKILL AND EXPERIENCE REQUIRED:

  • 5+ years in IT application development, with at least 5 years in Data Integration (ETL) development using tools such as Python, SQL, and Unix Shell Scripting.
  • Experienced practitioner with a proven track record of implementing mid-scale to large-scale Data Integration (ETL) solutions, performing data analysis and profiling, interacting with client management, and designing and architecting dynamic ETL solutions.
  • Working knowledge of data types and databases, including relational, dimensional, vertical, and NoSQL databases.
  • Hands-on experience with data cleansing and data visualization, and familiarity with the Hadoop ecosystem.
  • Experience with platforms such as Hive and Trino/Starburst.
  • Strong conceptual knowledge of file formats such as JSON, Parquet and Protobuf.
  • Capable of writing and optimizing SQL queries and stored procedures.
  • Exposure to AWS services including but not limited to S3, PostgreSQL, Apache Iceberg, and Kafka (a brief sketch follows this list).
  • The ideal candidate will have worked on a Scrum team and be familiar with tools such as Jira and Git.
  • Experience developing containerized Data Integration solutions using Kubernetes and Rancher, and building CI/CD pipelines with Harness, Jenkins, and Artifactory, is a plus.
  • Experience working with scheduling tools such as UC4, Control-M, or Autosys is a plus.
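
As a small illustration of the S3 exposure mentioned above, landing the Parquet output from the earlier sketch in a staging area might look like the following; the bucket name, key prefix, and credentials setup are assumptions, not details from the posting.

```python
# Minimal sketch, assuming boto3 is installed and AWS credentials are
# already configured; the bucket name and key prefix are hypothetical.
import boto3

s3 = boto3.client("s3")

# Land the Parquet output from the earlier sketch in an S3 staging prefix.
s3.upload_file(
    Filename="accounts_extract.parquet",
    Bucket="example-data-platform-staging",
    Key="landing/accounts/accounts_extract.parquet",
)
```
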
Apply Now