Lead Data Platform Engineer
Celebal Technologies
All India, Noida • 2 months ago
Experience: 3 to 8 Yrs
Job Description
Role Overview:
As a Data Engineer, you will collaborate with multiple teams to deliver solutions on Azure using core cloud data warehouse tools such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and related Big Data technologies. You will build next-generation application data platforms and/or enhance recent implementations. Experience with Databricks and Spark on other cloud platforms such as AWS and GCP is also relevant.
Key Responsibilities:
- Define, design, develop, and test software components/applications using Microsoft Azure tools such as Azure Databricks, Azure Data Factory, Azure Data Lake Storage (ADLS), Logic Apps, Azure SQL Database, and Azure Key Vault.
- Demonstrate strong SQL skills with practical experience.
- Handle both structured and unstructured datasets effectively.
- Utilize expertise in Data Modeling and Advanced SQL techniques.
- Implement Azure Data Factory, Airflow, AWS Glue, or any other data orchestration tool using the latest technologies and techniques.
- Have exposure in Application Development.
- Work independently with minimal supervision.
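The orchestration responsibility above (Azure Data Factory, Airflow, or a similar tool) amounts to running dependent pipeline steps in a valid order. A minimal, tool-agnostic sketch of that dependency ordering using only the Python standard library (the task names are hypothetical, not from this posting):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how ADF activities or Airflow operators declare dependencies.
pipeline = {
    "ingest_raw": set(),
    "clean": {"ingest_raw"},
    "model_dims": {"clean"},
    "model_facts": {"clean"},
    "publish": {"model_dims", "model_facts"},
}

def run_order(dag):
    """Return one valid execution order for the pipeline DAG."""
    return list(TopologicalSorter(dag).static_order())

order = run_order(pipeline)
```

Real orchestrators add scheduling, retries, and parallelism on top, but the dependency graph is the core abstraction an ADF or Airflow engineer works with daily.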
Qualifications Required:
Must Have:
- Hands-on experience with distributed computing frameworks such as Databricks and the Spark ecosystem (Spark Core, PySpark, Spark Streaming, Spark SQL).
- Willingness to collaborate with product teams to optimize product features/functions.
- Experience handling batch workloads and real-time streaming at high data volume and frequency.
- Proficiency in performance optimization on Spark workloads.
- Experience with environment setup, user management, authentication, and cluster management on Databricks.
- Professional curiosity and adaptability to new technologies and tasks.
- Good understanding of SQL, relational databases, and analytical database management theory and practice.
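The SQL depth asked for above typically includes window functions, for example deduplicating a staging table to the latest record per key. A self-contained sketch using SQLite (table and column names are illustrative, not taken from this posting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (customer_id INT, loaded_at TEXT, status TEXT);
INSERT INTO events VALUES
  (1, '2024-01-01', 'new'),
  (1, '2024-02-01', 'active'),
  (2, '2024-01-15', 'new');
""")

# Keep only the most recent row per customer -- a common
# "latest record" pattern in warehouse staging layers.
latest = conn.execute("""
SELECT customer_id, status FROM (
  SELECT *,
         ROW_NUMBER() OVER (
           PARTITION BY customer_id
           ORDER BY loaded_at DESC
         ) AS rn
  FROM events
)
WHERE rn = 1
ORDER BY customer_id
""").fetchall()
```

The same `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)` idiom carries over directly to Spark SQL and Synapse.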
Good To Have:
- Hands-on experience with distributed computing frameworks such as Databricks.
- Experience in Databricks migration from On-premise to Cloud or Cloud to Cloud.
- Migration of ETL workloads from Apache Spark implementations to Databricks.
- Familiarity with Databricks ML will be a plus.
- Experience in migration from Spark 2.0 to Spark 3.5.
Key Skills:
- Proficiency in Python, SQL, and PySpark.
- Knowledge of the Big Data ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase).
- Expertise in Spark Ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks.
- Familiarity with Azure services (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse).
- Understanding of AWS (Lambda, AWS Glue, S3, Redshift).
- Proficiency in Data Modelling and ETL Methodology.
Kindly share your CVs at [Provided Email Address].
Posted on: March 5, 2026