Tatari is on a mission to revolutionize TV advertising. We work with some of your favorite disruptor brands—like Calm, Fiverr, and RocketMoney—to grow their business through linear and streaming TV. We combine a sophisticated media buying platform with proprietary analytics to turn TV advertising into an automated, digital-like experience.
Named one of the Hottest Ad Tech Companies by Business Insider and a Best Place to Work by Inc. Magazine, our team includes founders and leaders from Google, Microsoft, Stripe, Shazam, and Facebook. We are growing rapidly as we accelerate our mission to automate the complex landscape of managing and measuring television advertising. Our long-term goal is to make TV marketing available to businesses of any size.
The Measurement Calculation team is responsible for providing accurate and reliable brand-focused metrics, measurement methodologies, and systems to our clients and Client Service team (CS) so they can create marketing strategies via informed audience targeting.
This includes designing, building, and maintaining the robust data pipelines that serve as the cornerstone of Tatari's business operations. We develop and support the intricate calculations and algorithms required for both linear and streaming TV platforms, with a focus on scalable, efficient solutions that enable Tatari to leverage data-driven insights and drive success in the dynamic TV advertising landscape.
Responsibilities:
- Building, managing, and optimizing data infrastructure; designing and developing data pipelines; and ensuring the reliability and scalability of data systems.
- Data Infrastructure Design: Designing and implementing scalable, efficient, and reliable data infrastructure, including data storage, processing, and retrieval systems.
- Data Pipeline Development: Developing and maintaining robust and efficient data pipelines to ingest, transform, and deliver data from various sources to data storage and analytical systems.
- Data Modeling and Architecture: Designing and implementing data models and database schemas that support efficient data storage, retrieval, and analysis.
- ETL (Extract, Transform, Load) Processes: Building and maintaining ETL processes to extract data from different sources, transform it into a suitable format, and load it into data storage systems.
- Performance Optimization: Identifying and resolving performance bottlenecks in data pipelines and database systems. Tuning and optimizing queries, indexes, and data storage configurations to improve overall system performance.
- Collaboration and Leadership: Collaborating with cross-functional teams, including data scientists, analysts, and software engineers, to understand their data requirements and provide them with the necessary infrastructure and tools. Mentoring and providing technical guidance to junior data engineers.
- Monitoring and Troubleshooting: Implementing monitoring systems and practices to ensure the availability and reliability of data systems. Proactively identifying and resolving issues and investigating data-related incidents or anomalies.
- Technology Evaluation and Implementation: Keeping up with the latest trends and technologies in the data engineering field. Evaluating and recommending new tools, frameworks, and technologies to improve data engineering processes and efficiency.
Qualifications:
- 6+ years of experience working in data architecture, data modeling, and building data pipelines & distributed systems at scale.
- Recent accomplishments working with relational and NoSQL data stores and with data modeling methods and approaches (star schema, dimensional modeling).
- 2+ years of experience with a modern data stack (Kafka, Spark, Airflow, lakehouse architectures, real-time databases, dbt, etc.) and cloud data warehouses such as Redshift and Snowflake.
- Cloud Computing Platforms: Familiarity with cloud computing platforms like Amazon Web Services (AWS) and proficiency in leveraging cloud-based services for data storage, processing, and analytics, such as Amazon S3, EC2, and Lambda.
- Strong Technical Background: Proficiency in programming languages commonly used in data engineering, such as Python, Java, Scala, or SQL. Experience with data processing frameworks and tools like Apache Spark (including Databricks) and Hadoop, and knowledge of relational databases (e.g., MySQL, PostgreSQL).
- Problem-Solving and Analytical Thinking: Ability to identify and troubleshoot data-related issues, optimize systems, and propose innovative solutions.
- Communication and Collaboration: Excellent communication skills for effective collaboration with cross-functional teams, stakeholders, and business users, along with the ability to explain technical concepts to non-technical audiences and translate business requirements into technical solutions.
- Leadership and Mentoring: Experience providing technical guidance, mentoring junior data engineers, and leading data engineering initiatives, with the ability to drive projects, prioritize tasks, and manage timelines.
Benefits:
- Competitive salary ($170K-$210K annually)
- Equity compensation
- 100% health insurance premium coverage for you and your dependents
- Unlimited PTO and sick days
- Snacks, drinks, and catered lunches at the office
- Team building events
- $1000 annual continued education benefit
- $500 WFH reimbursement
- $125 pre-tax monthly stipend to spend on whatever you want
- Annual mental health awareness app reimbursement
- FSA and commuter benefits
- Monthly Company Wellness Day Off
- Hybrid RTO (currently 2 days per week in office); this is an in-office position.
At Tatari, we believe in the importance of cultivating teams with diverse backgrounds and offering equal opportunities to all. We strive to create a welcoming, inclusive environment where every team member feels valued and diversity is celebrated.