AWS Data Engineer
EPIQ INDIA SUPPORT SERVICES LLP
All India, Gurugram • 1 month ago
Experience: 3 to 7 Yrs
Job Description
As an AWS Data Engineer at our company in Gurgaon/Gurugram, you will play a crucial role in designing, building, and maintaining scalable, reliable data pipelines on the AWS cloud platform. Collaborating with data scientists, analysts, and other engineers, you will deliver high-quality data solutions that drive business insights and improve decision-making. Your work will directly deepen our understanding of customer behavior, help optimize our products, and enhance the overall customer experience.
**Key Responsibilities:**
- Design and implement robust and scalable data pipelines using AWS services like Glue, Lambda, and Redshift for ingesting, processing, and storing large datasets.
- Develop and maintain data models and schemas in Redshift to ensure data quality, consistency, and accessibility for reporting and analysis.
- Write and optimize Python and PySpark code for data transformation and cleansing to ensure efficient processing and accurate results.
- Automate data ingestion and processing workflows using AWS Lambda and other serverless technologies for improved efficiency.
- Monitor and troubleshoot data pipeline performance to ensure data availability and reliability.
- Collaborate with data scientists and analysts to understand their data requirements and provide necessary data infrastructure and tools.
- Implement data governance and security policies to ensure data privacy and compliance with regulations.
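To give a flavor of the transformation and cleansing work described above, here is a minimal sketch of a serverless cleansing step. All names, field rules, and the event shape are illustrative assumptions, not details from this posting; a real pipeline would read from and write to S3 or Redshift via AWS services such as Glue or Lambda triggers.

```python
from datetime import datetime
from typing import Optional

def cleanse_record(raw: dict) -> Optional[dict]:
    """Normalise one raw event; return None to drop invalid rows.
    The field names and rules here are hypothetical examples."""
    # Drop rows missing a customer id -- an illustrative quality rule.
    cid = str(raw.get("customer_id", "")).strip()
    if not cid:
        return None
    # Coerce the event timestamp to ISO 8601; bad values become None.
    try:
        ts = datetime.strptime(raw["event_ts"], "%Y-%m-%d %H:%M:%S").isoformat()
    except (KeyError, ValueError):
        ts = None
    return {
        "customer_id": cid,
        "event_ts": ts,
        "amount": float(raw.get("amount") or 0),
    }

def lambda_handler(event, context):
    """Hypothetical AWS Lambda entry point: cleanse a batch of records
    delivered in the event payload and return the valid ones."""
    cleaned = [cleanse_record(r) for r in event.get("records", [])]
    return {"records": [r for r in cleaned if r is not None]}
```

In a production pipeline this kind of function would typically be attached to an S3 event notification or a Kinesis/Firehose stream, with the cleansed output landed in S3 for a Glue crawler or loaded into Redshift.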
**Required Skillset:**
- Experience designing, building, and maintaining data pipelines on AWS using services such as Glue, Lambda, Redshift, and S3.
- Expertise in Python and PySpark for data transformation, cleaning, and analysis.
- Understanding of data modeling principles and experience with relational databases, especially Redshift.
- Ability to automate data workflows and implement data governance policies.
- Excellent communication and collaboration skills for effective teamwork.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with experience on enterprise-scale platforms.
(Note: Additional details of the company were not included in the provided job description)
Posted on: March 1, 2026