(Full Time) Data Engineer at BlueCargo (United States)
Data Engineer
BlueCargo United States
Date Posted: 31 Oct, 2022
Work Location: Los Angeles, CA, United States
Salary Offered: $120 – $150 yearly
Job Type: Full Time
Experience Required: 3+ years
Remote Work: No
Stock Options: No
Vacancies: 1 available
BLUECARGO
BlueCargo is a fast-growing startup based in Los Angeles. We are building software to manage the transportation of containers from the ports to the first warehouses by truck, also called first-mile delivery. We are bringing the Freight Tech revolution to the industry.
The startup was founded by two female entrepreneurs, graduated from Y Combinator (2018 batch), and has raised a $4 million seed round. We are at the beginning of an exciting growth phase: we have already achieved product-market fit, built a working platform, and reached hundreds of daily active users.
We are looking for Data Engineers to design and implement our data pipelines and visualization platform. If working to bring technology to the logistics industry sounds exciting, then we’d like to connect with you!
Our Current Tech
- Technical stack: Python, AWS, PostgreSQL, DynamoDB, Airflow, Docker, Kafka, CircleCI, and other continuous-integration tools
- Technical team: 4 (objective: 2x in one year)
- Location: Los Angeles
OPPORTUNITY OVERVIEW
Mission/Responsibilities
- Design, build and operate BlueCargo’s data pipelines with a focus on performance and reliability
- Participate in new feature development for our container-tracking and data-visualization platform
- Propose and evaluate storage technologies and methodologies with an eye toward scalability and performance
- Design and implement data pipelines that handle high-volume streaming data
- Lead the data ingestion strategy (web scraping, APIs, or any other protocols)
- Maintain a culture of data accuracy and data-driven decisions
- Define the database infrastructure that will become the new norm in the freight industry
Qualifications
In addition to the following technical skills, we are looking for PROBLEM SOLVERS with an entrepreneurial mindset:
- 3+ years of programming experience in Python (Java, Kotlin, or Scala is fine as well!)
- 3+ years architecting with both SQL and NoSQL data stores
- Experience designing schemas and maintaining representations for low-latency, request-cycle queries
- Experience with streaming platforms (PubSub, Kafka, Kinesis) and near-real-time data pipelines
- Working knowledge of statistics and experimental design
- Comfortable building and maintaining data infrastructure in the cloud (AWS preferred)
- Experience with data sourcing technologies (external APIs, web scraping, EDI files, etc.) and building data management platforms
- Autonomous in your work and proactive when collaborating with cross-functional teams to build creative solutions
- Prior experience in a high-velocity startup environment preferred
- Living in Los Angeles or willing to relocate to LA
Classification
- Perks: medical benefits + unlimited PTO
- Fun perks: flexible and international environment
- Support/Community: enjoy being a member of a Y Combinator company!