Data Platform Engineer
Weekday AI
All India, Chennai • 1 month ago
Experience: 3 to 12 Yrs
Job Description
Role Overview:
You will be responsible for designing and managing the foundational data infrastructure for one of Weekday's clients. This role goes beyond simply maintaining pipelines: you will lead the design and management of the data lakehouse, real-time stream processing frameworks, OLAP stores, self-service ETL and query frameworks, data movement APIs, reverse-ETL pipelines, and job orchestration layers.
Key Responsibilities:
- Take full ownership of the data lakehouse, including its architecture, ingestion from CDC sources, scalability, and reliability
- Develop and manage real-time stream processing frameworks for applications like anomaly detection, customer 360 views, and live supply chain signals
- Design and scale OLAP stores to support real-time and batch processing for internal analytics and AI/ML pipelines
- Create self-service ETL and query frameworks for data consumers
- Implement cost observability measures to reduce compute, storage, and query expenses
- Build data movement APIs and reverse-ETL pipelines for efficient data delivery at scale
- Establish job orchestration layers that remain stable and consistent at scale
Qualifications Required:
- 3-12 years of experience in data engineering, including 1-7 years focused on building or managing a data platform
- Hands-on expertise with tools like Spark, Hudi/Delta Lake, Kafka, Airflow, Debezium, Presto/Trino, dbt, and Airbyte
- Comfortable working with the AWS data ecosystem
- Experience managing daily processing of terabytes of data and billions of events
- Demonstrated ability to reduce infrastructure costs and provide impact metrics
- Proficiency in Java, Python, or Scala
- Experience with OLAP engines like Pinot, Druid, or ClickHouse (Bonus)
- Experience with data movement or reverse-ETL APIs (Bonus)
- Familiarity with feature stores or data catalog tools (Bonus)
Additional Company Details:
The data platform you will be working on powers AI for the supply chain decisions of Fortune 500 companies. You will see the real business impact of your work firsthand and join a small team with high ownership and the challenge of working at true scale.
Posted on: March 30, 2026