We are seeking a skilled Data Engineer with extensive experience in Informatica PowerCenter to join our data engineering team. The ideal candidate will design, develop, and maintain ETL workflows and data pipelines that enable enterprise-wide data integration, transformation, and loading. This role demands a detail-oriented professional who can work collaboratively with cross-functional teams to deliver high-quality, efficient data solutions that support the organization’s strategic goals.
Key Responsibilities
- Design, develop, and maintain scalable ETL processes using Informatica PowerCenter.
- Analyze source systems to define accurate data extraction, transformation, and loading rules.
- Build robust data pipelines to extract data from various structured and unstructured sources.
- Implement error handling, data quality checks, and logging mechanisms, and optimize ETL performance.
- Collaborate closely with business analysts and data architects to translate data requirements into technical solutions.
- Create and maintain comprehensive technical documentation for ETL jobs and data flows.
- Participate in code reviews to ensure adherence to best practices and maintain high-quality standards.
- Monitor and troubleshoot ETL workflows to resolve failures and improve overall system performance.
- Work in coordination with DevOps and infrastructure teams for job scheduling and deployment activities.
- Ensure compliance with data security policies and regulatory requirements throughout the ETL process.
Required Qualifications
- Bachelor’s degree in Computer Science, Information Systems, or a related field.
- Minimum of 4 years of hands-on experience in data engineering, with strong expertise in Informatica PowerCenter.
- Proficiency in SQL for data analysis, troubleshooting, and query optimization.
- Experience working with relational database systems such as Oracle, SQL Server, or Teradata.
- Solid understanding of data warehousing concepts, including dimensional modeling and star schemas.
- Familiarity with job scheduling tools like Control-M or Autosys.
- Demonstrated experience in performance tuning of ETL jobs and SQL queries.
- Knowledge of version control systems such as Git or SVN.
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
This position offers the opportunity to contribute significantly to the organization’s data strategy by building efficient and reliable data pipelines. The role is ideal for professionals passionate about data engineering and eager to work in a collaborative, dynamic environment. If you thrive on solving complex data challenges and delivering impactful solutions, we encourage you to apply.