Data Engineer
Antal International
All India • 1 month ago
Experience: 2 to 6 Yrs
Job Description
As a Data Engineer or Data Scientist with a minimum of 5 years of experience, you will focus on data engineering and ETL work. Your expertise should include a deep understanding of Data Warehousing, Data Modelling, and Data Analysis, along with more than 2 years of hands-on experience building pipelines and performing ETL on Redshift using industry-standard best practices. The role also involves troubleshooting and resolving performance issues in data ingestion, data processing, and query execution on Redshift. Proficiency with orchestration tools such as Airflow is essential, as are strong coding skills in Python and SQL and a solid background in distributed systems such as Spark.
Key Responsibilities:
- Utilize your experience with AWS Data and ML Technologies such as AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, and Lambda.
- Implement various data extraction techniques like Change Data Capture (CDC) or time/batch-based extraction using tools like Debezium, AWS DMS, Kafka Connect, etc., for both near real-time and batch data extraction.
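To illustrate the batch-based extraction mentioned above: the sketch below shows a minimal incremental (watermark-based) pull, the simplest alternative to log-based CDC tools like Debezium or AWS DMS. It uses Python's built-in sqlite3 as a stand-in for the source database; the `orders` table, column names, and watermark values are hypothetical, not from the job description.

```python
import sqlite3

# Stand-in source database; in practice this would be the upstream OLTP system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders (id, updated_at) VALUES (?, ?)",
    [(1, "2024-01-01"), (2, "2024-01-02"), (3, "2024-01-03")],
)

def extract_incremental(conn, watermark):
    """Batch extraction: pull only rows changed since the last run's watermark."""
    rows = conn.execute(
        "SELECT id, updated_at FROM orders WHERE updated_at > ? ORDER BY updated_at",
        (watermark,),
    ).fetchall()
    # Advance the watermark to the newest row seen, so the next run skips these.
    new_watermark = rows[-1][1] if rows else watermark
    return rows, new_watermark

rows, wm = extract_incremental(conn, "2024-01-01")
print(len(rows), wm)  # 2 rows newer than the watermark; new watermark is 2024-01-03
```

Log-based CDC tools replace this polling query by reading the database's change log directly, which captures deletes and avoids missing rows updated between runs.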
Qualifications Required:
- Minimum 5 years of experience in the field of data engineering or data science, with a focus on data engineering and ETL jobs.
- Proficiency in Data Warehousing, Data Modelling, and Data Analysis.
- More than 2 years of experience in building pipelines and performing ETL tasks on Redshift.
- Strong understanding of orchestration tools like Airflow.
- Excellent coding skills in Python and SQL.
- Hands-on experience with distributed systems like Spark.
- Familiarity with AWS Data and ML Technologies.
- Experience in implementing data extraction techniques like CDC or time/batch-based extraction.
Kindly Note: No additional details about the company were provided in the job description.
Posted on: March 18, 2026