A bachelor's degree in Computer Science, Data Science, Software/Computer Engineering, or a related field.
Proven experience as a data engineer or in a similar role, with a track record of manipulating, processing, and extracting value from large disconnected data sets.
Demonstrated technical proficiency with data architecture, databases, and processing large data sets.
Proficiency with Oracle databases and a comprehensive understanding of ETL processes, including designing and implementing custom ETL pipelines.
Experience with cloud services (AWS, Azure) and an understanding of distributed systems such as Hadoop/MapReduce, Spark, or equivalent technologies.
Knowledge of Kafka, Kinesis, OCI Data Integration, Azure Service Bus or similar technologies for real-time data processing and streaming.
Experience designing, building, and maintaining data processing systems, as well as experience working with either a MapReduce or an MPP system.
Strong organizational, critical-thinking, and problem-solving skills, with a clear understanding of high-performance algorithms and Python scripting.
Experience with machine learning toolkits, data ingestion technologies, data preparation technologies, and data visualization tools is a plus.
Excellent communication and collaboration abilities, with the capacity to work in a dynamic, team-oriented setting and adapt quickly to change in a fast-paced environment.
Data-driven mindset, with the ability to translate business requirements into data solutions.
Experience with version control systems (e.g., Git) and with agile methodologies.
Certifications in a related field would be an added advantage (e.g., Google Certified Professional Data Engineer, AWS Certified Big Data – Specialty).