SS&C is a global provider of investment, financial services, and software for the financial services and healthcare industries. Named to the Fortune 1000 list as a top U.S. company based on revenue, SS&C is headquartered in Windsor, Connecticut, and has 20,000+ employees across more than 90 offices in 35 countries. Some 18,000 financial services and healthcare organizations, from the world's largest institutions to local firms, manage and account for their investments using SS&C's products and services.
Job Description
Sr. Data Engineer
Location: San Francisco, CA (hybrid)
Get To Know The Team:
Our mission and team are expanding, and we are looking for motivated people who want to be part of an exciting growth opportunity in the Fintech industry.
Why You Will Love It Here!
- Flexibility: Hybrid Work Model & a Business Casual Dress Code, including jeans
- Your Future: 401k Matching Program, Professional Development Reimbursement
- Work/Life Balance: Flexible Personal/Vacation Time Off, Sick Leave, Paid Holidays
- Your Wellbeing: Medical, Dental, Vision, Employee Assistance Program, Parental Leave
- Diversity & Inclusion: Committed to Welcoming, Celebrating and Thriving on Diversity
- Training: Hands-On, Team-Customized, including SS&C University
- Extra Perks: Discounts on fitness clubs, travel and more!
- Join a team that creates a leading cloud platform with the latest big data, cloud-native and machine learning technologies
What You Will Get To Do:
- Build and implement models in a production environment for a revenue-driving product.
- Create and maintain optimal data infrastructure.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Manage data migrations/conversions and troubleshoot data processing issues.
- Create the pipeline for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources.
- Build machine learning functions to help with data automation and data preparation.
- Work closely with our product team and data warehouse engineers to build analytics tools and dashboards that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with other data experts to strive for greater functionality in our data systems.
- Discover innovative machine learning solutions that can drive business value.
What You Will Bring:
- Passion for big data and building platforms.
- Proficient coding skills and a strong foundation in algorithms and data structures.
- Knowledge of one or more of the following tools: Hadoop, Spark, Presto, Hive, Kafka, or Airflow.
- Skills in transforming data, developing data structures, building a metadata store, and setting up data pipelines or data workflows.
- Experience retrieving data through APIs with Python or other programming languages such as JavaScript, Java, C#, or Scala.
- Strong problem-solving skills with a can-do attitude.
- Strong software engineering practices, including Git and version control workflows.
- Solid communication skills to explain complex data pipelines to both technical and non-technical people.
- Excellent writing and documenting skills.
- Self-motivated and proactive.
- Bachelor's Degree in Computer Science, Information Systems, or another STEM field.
- 5+ years of experience in data engineering or a related field required.
- Academic or work experience in machine learning.
- Knowledge of Kubernetes or Docker.
- Experience with AWS Glue, Azure Data Factory, or Databricks.
Salary is determined by various factors including, but not limited to, relevant work experience, job-related knowledge, skills, abilities, business needs, and geographic region.
California: Salary range for the position: $140,000 to $160,000 USD.