AI Data Integrator
EcoRatings
All India, Noida • 1 month ago
Experience: 3 to 7 Yrs
Job Description
As a Data Engineer at EcoRatings, you will play a crucial role in building and maintaining the data pipelines that feed into our AI ecosystem. Your responsibilities will include:
- **Data Ingestion & ETL**: You will design, develop, and maintain scalable ETL/ELT pipelines to ingest data from various enterprise sources such as SQL databases, Oracle ERPs, and flat files (FTP/SFTP).
- **Pipeline Orchestration**: Your role will involve building and managing data workflows using tools like Apache Airflow or Prefect to ensure timely and reliable data delivery to the AI/ML team.
- **Schema Design**: Collaborate with the Lead AI Engineer to define and implement data schemas that optimize the performance of RAG (Retrieval-Augmented Generation) pipelines.
- **Database Management**: Optimize and manage both relational (PostgreSQL/MySQL) and non-relational storage solutions, specifically Vector databases like Pinecone or Weaviate.
- **Data Cleaning & Validation**: Implement automated data validation and cleaning scripts to ensure high-quality, audit-ready data for the "intelligence" layer.
- **API Development**: Build and maintain internal APIs and connectors to facilitate seamless communication with the data warehouse.
- **Security & Compliance**: Ensure adherence to strict enterprise security protocols, including encryption at rest and in transit, to protect sensitive client information.
- **Collaboration**: Work closely with the Full Stack Lead and AI Engineers to synchronize data ingestion logic with application requirements and AI processes.
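To illustrate the Data Cleaning & Validation responsibility above, here is a minimal rule-based record check in plain Python. The record shape and field names (`supplier_id`, `co2_kg`) are hypothetical stand-ins, not EcoRatings' actual schema:

```python
from dataclasses import dataclass

@dataclass
class EmissionRecord:
    supplier_id: str
    co2_kg: float  # hypothetical field: reported CO2 in kilograms


def validate(record: EmissionRecord) -> list[str]:
    """Return a list of validation errors; an empty list means the record is clean."""
    errors = []
    if not record.supplier_id.strip():
        errors.append("missing supplier_id")
    if record.co2_kg < 0:
        errors.append("co2_kg must be non-negative")
    return errors


# Flag records that fail validation before they reach the warehouse.
records = [EmissionRecord("SUP-001", 120.5), EmissionRecord("", -3.0)]
clean = [r for r in records if not validate(r)]
print(len(clean))  # → 1
```

In a production pipeline, checks like these would typically run as an automated task (e.g., an orchestrated step before loading), with failing records routed to a quarantine table for audit.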
To qualify for this role, you should have:
- **Education**: Bachelor's or Master's degree in Data Science, Computer Science, Information Technology, or a related quantitative field.
- **Technical Proficiency**: Advanced expertise in Python and SQL, along with a deep understanding of database internals and query optimization.
- **Enterprise Integration**: Proven experience in connecting to and extracting data from enterprise-grade systems like Oracle, SAP, or Microsoft Dynamics.
- **Data Engineering Tools**: Hands-on experience with modern data stack tools (e.g., dbt, Airflow, Snowflake, or Databricks).
- **Cloud Infrastructure**: Strong familiarity with AWS (S3, Redshift, Glue) or Azure data services.
- **Big Data Frameworks**: Knowledge of Spark or Flink for processing large-scale environmental datasets is preferred.
- **Version Control**: Proficiency in Git and experience working in an Agile development environment.
- **Problem Solving**: A systematic approach to debugging complex data flows and a commitment to data accuracy and reliability.
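As a small sketch of the Python + SQL proficiency listed above, the snippet below uses the standard-library `sqlite3` module (standing in for a production PostgreSQL/MySQL instance) to index a filter column and run an aggregate query. Table and column names are illustrative assumptions:

```python
import sqlite3

# In-memory database standing in for a production relational store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (supplier_id TEXT, score REAL)")
conn.executemany(
    "INSERT INTO ratings VALUES (?, ?)",
    [("SUP-001", 72.0), ("SUP-002", 88.5), ("SUP-001", 75.0)],
)

# An index on the filter column lets the planner avoid a full table scan.
conn.execute("CREATE INDEX idx_ratings_supplier ON ratings (supplier_id)")

rows = conn.execute(
    "SELECT supplier_id, AVG(score) FROM ratings "
    "WHERE supplier_id = ? GROUP BY supplier_id",
    ("SUP-001",),
).fetchall()
print(rows)  # → [('SUP-001', 73.5)]
```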
Join EcoRatings as a Data Engineer and contribute to the development of cutting-edge AI solutions by leveraging your expertise in data engineering and analytics.
Posted on: March 7, 2026