AWS Data Engineer

Company:  Brooksource
Location: Charlotte
Closing Date: 04/11/2024
Hours: Full Time
Type: Permanent
Job Requirements / Description

Location : Charlotte, NC (Hybrid 2-3 days a week on site)

3-year contract with opportunity for extension or full-time hire

W-2 Only, No Corp-to-Corp or 1099


Brooksource is searching for an AWS Data Engineer with experience in data warehousing using AWS Redshift to join our Fortune 500 Energy & Utilities client in Charlotte, NC.


RESPONSIBILITIES:

  • Provides technical direction, guides the team on key technical aspects, and is responsible for product tech delivery
  • Leads the design, build, test, and deployment of components, where applicable in collaboration with Lead Developers (Data Engineer, Software Engineer, Data Scientist, Technical Test Lead)
  • Understands requirements and use cases to outline technical scope and leads delivery of the technical solution
  • Confirms the developers and skill sets required for the product
  • Provides leadership, direction, peer review and accountability to developers on the product (key responsibility)
  • Works closely with the Product Owner to align on delivery goals and timing
  • Assists Product Owner with prioritizing and managing team backlog
  • Collaborates with Data and Solution Architects on key technical decisions, including the architecture and design needed to deliver the required functionality
  • Skilled in developing data pipelines, focusing on long-term reliability and maintaining high data quality
  • Designs data warehousing solutions with the end-user in mind, ensuring ease of use without compromising on performance
  • Manages and resolves issues in production data warehouse environments on AWS


TECHNICAL REQUIREMENTS:

  • 5+ years of AWS experience, specifically including AWS Redshift
  • AWS services - S3, EMR, Glue Jobs, Lambda, Athena, CloudTrail, SNS, SQS, CloudWatch, Step Functions, QuickSight
  • Experience with Kafka/messaging, preferably Confluent Kafka
  • Experience with AWS data stores and catalogs such as Glue Data Catalog, Lake Formation, Redshift, DynamoDB, and Aurora
  • Experience with AWS data warehousing tools such as Amazon Redshift and Amazon Athena
  • Proven track record in the design and implementation of data warehouse solutions using AWS
  • Skilled in data modeling and executing ETL processes tailored for data warehousing
  • Competence in developing and refining data pipelines within AWS
  • Proficient in handling both real-time and batch data processing tasks
  • Extensive understanding of database management fundamentals
  • Expertise in creating alerts and automated solutions for handling production problems
  • Tools and Languages – Python, Spark, PySpark and Pandas
  • Infrastructure as Code technology – Terraform/CloudFormation
  • Experience with a secrets management platform such as HashiCorp Vault or AWS Secrets Manager
  • Experience with event-driven architecture
  • DevOps pipeline (CI/CD); Bitbucket; Concourse
  • Experience with RDBMS platforms and strong proficiency with SQL
  • Experience with REST APIs and Amazon API Gateway
  • Deep knowledge of IAM roles and Policies
  • Experience using AWS monitoring services such as CloudWatch, CloudTrail, and CloudWatch Events
  • Deep understanding of networking (DNS, TCP/IP, and VPN)
  • Experience with AWS workflow orchestration tools such as Airflow or Step Functions


PREFERRED SKILLS

  • Experience with native AWS technologies for data and analytics, such as Kinesis and OpenSearch
  • Databases - DocumentDB, MongoDB
  • Hadoop platform (Hive; HBase; Druid)
  • Java, Scala, Node JS
  • Workflow Automation
  • Experience transitioning on-premises big data platforms to cloud-based platforms such as AWS
  • Strong background in Kubernetes, distributed systems, microservice architecture, and containers


ADDITIONAL REQUIREMENTS

  • Ability to perform hands-on development and peer review for certain components/tech stacks on the product
  • Stands up development instances and migration paths (with required security and access/roles)
  • Develops components and related processes (e.g., data pipelines and associated ETL processes, workflows)
  • Leads implementation of an integrated data quality framework
  • Ensures optimal framework design and load testing scope to optimize performance (specifically for Big Data)
  • Supports data scientists with testing and validation of models
  • Performs impact analysis and identifies risks arising from design changes
  • Ability to build new data pipelines, identify existing data gaps and provide automated solutions to deliver analytical capabilities and enriched data to applications
  • Ability to implement data pipelines with the right attention to durability and data quality
  • Implements data warehousing products with the end user's experience in mind (ease of use with the right performance)
  • Ensures test-driven development
  • 5+ years of experience leading teams to deliver complex products
  • Strong technical and communication skills
  • Strong skills in business stakeholder interactions
  • Strong solutioning and architecture skills
  • 5+ years of experience building real-time, event-driven data ingestion streams
  • Ensures data security and permission solutions, including data encryption, user access controls, and logging
Apply Now