ETL Data Engineer
The Techgalore
Delhi • 2 months ago
Experience: 1 to 6 Yrs
Job Description
As an experienced ETL Developer and Data Engineer, you will ingest data from multiple enterprise sources into Adobe Experience Platform and analyze it. Your key responsibilities are:
- Develop data ingestion pipelines using PySpark for both batch and streaming data processing.
- Utilize multiple data engineering services on AWS, such as Glue, Athena, DynamoDB, Kinesis, Kafka, Lambda, and Redshift.
- Load data from sources such as S3 buckets and on-premises systems into Redshift.
- Optimize data ingestion into Redshift and design Redshift tables with distribution keys, compression encodings, vacuuming, and related considerations in mind.
- Design, develop, and optimize queries on Redshift using SQL or PySpark SQL.
- Develop applications that consume services exposed as REST APIs, and write complex, performant SQL queries.
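The Redshift loading and table-design bullets above can be sketched in Python as follows. This is an illustrative sketch only: the table name, column layout, S3 path, and IAM role are hypothetical, and the distribution/sort key choices simply show the kind of trade-off the role describes.

```python
# Illustrative sketch of Redshift table design and an S3 bulk load.
# All names (web_events, bucket path, IAM role) are hypothetical examples.

def create_events_table_ddl() -> str:
    """DDL with a distribution key, sort key, and column compression."""
    return (
        "CREATE TABLE IF NOT EXISTS web_events (\n"
        "    event_id   BIGINT,\n"
        "    user_id    BIGINT,\n"
        "    event_ts   TIMESTAMP,\n"
        "    payload    VARCHAR(4096) ENCODE ZSTD\n"  # column compression
        ")\n"
        "DISTKEY (user_id)\n"   # co-locate each user's events on one slice
        "SORTKEY (event_ts);"   # speeds time-range scans
    )

def copy_from_s3_sql(table: str, s3_path: str, iam_role: str) -> str:
    """Build a COPY statement to bulk-load gzipped JSON from S3."""
    return (
        f"COPY {table}\n"
        f"FROM '{s3_path}'\n"
        f"IAM_ROLE '{iam_role}'\n"
        "FORMAT AS JSON 'auto' GZIP;"
    )

ddl = create_events_table_ddl()
copy_sql = copy_from_s3_sql(
    "web_events",
    "s3://example-bucket/events/2026/02/",            # hypothetical path
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",  # hypothetical role
)
print(ddl)
print(copy_sql)
```

In practice the generated SQL would be executed against the cluster (e.g. via a Redshift driver), followed by a periodic `VACUUM` to reclaim space and keep sort order, as the posting notes.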
Qualifications Required:
- 4-6 years of professional technology experience with a focus on data engineering.
- Strong experience in developing data ingestion pipelines using PySpark.
- Proficiency in AWS services such as Glue, Athena, DynamoDB, Kinesis, Kafka, Lambda, and Redshift.
- Experience working with Redshift, including loading data, optimizing ingestion, and designing tables.
- Ability to write and analyze complex SQL queries.
- Experience with enterprise-grade ETL tools like Pentaho, Informatica, or Talend is a plus.
- Good knowledge of Data Modeling design patterns and best practices.
- Familiarity with reporting technologies such as Tableau or Power BI.
In this role, you will analyze customers' use cases and data sources, then extract, transform, and load data into Adobe Experience Platform. You will design and build data ingestion pipelines using PySpark, ensure performance efficiency, develop and test complex SQL queries, and support Data Architects in implementing data models. Additionally, you will contribute to the organization's innovation charter, present on advanced features, and work independently with minimal supervision.
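The REST-API consumption responsibility above can be sketched as a small paginated client. The response shape (`items`, `next_token`) is an assumed convention, not a real API's contract, and the HTTP call is injected as a callable so the pagination logic stands on its own.

```python
# Minimal sketch of consuming a paginated REST service, as the role describes.
# The "items"/"next_token" fields are a hypothetical convention; `fetch_page`
# stands in for a real HTTP call (e.g. made with urllib or requests).

from typing import Callable, Iterator, Optional

def iter_records(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Yield every record across pages, following next_token until exhausted."""
    token: Optional[str] = None
    while True:
        page = fetch_page(token)
        yield from page.get("items", [])
        token = page.get("next_token")
        if not token:
            break

# Fake two-page API for demonstration.
_pages = {
    None: {"items": [{"id": 1}, {"id": 2}], "next_token": "t1"},
    "t1": {"items": [{"id": 3}], "next_token": None},
}
records = list(iter_records(lambda tok: _pages[tok]))
print(records)  # → [{'id': 1}, {'id': 2}, {'id': 3}]
```

Injecting the fetch function keeps the paging logic unit-testable without a live endpoint, which is a common pattern when such clients feed downstream ingestion pipelines.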
Please rate the candidate on the following areas:
- Big Data
- PySpark
- AWS
- Redshift
Posted on: February 25, 2026