Job Summary
We are seeking a skilled Data Engineer to design, develop, and maintain robust data pipelines that support our analytics and reporting needs. The ideal candidate will work closely with data analysts, engineers, and business stakeholders to deliver reliable, high-quality data solutions. This role requires strong SQL expertise, a solid understanding of ETL processes, and the ability to ensure data integrity across multiple sources. If you are detail-oriented, collaborative, and passionate about optimizing data workflows, this opportunity is for you.
Key Responsibilities
- Design, build, and maintain reliable data pipelines using SQL and ETL tools to support business intelligence and analytics initiatives.
- Collaborate with data analysts, engineers, and business stakeholders to gather requirements and translate them into scalable data solutions.
- Cleanse, transform, and validate data to ensure accuracy and usability for reporting and analysis.
- Maintain data quality and integrity by monitoring and reconciling data across multiple sources (see the sketch after this list).
- Monitor existing workflows and optimize SQL queries and processes to improve performance and efficiency.
- Participate in code reviews to uphold coding standards, and contribute to technical documentation to support knowledge sharing and project continuity.
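As an illustration of the reconciliation work above, the following is a minimal sketch, written for PostgreSQL, that flags load dates where staged rows did not all reach the warehouse. The staging_orders and warehouse_orders tables, and their columns, are hypothetical placeholders, not part of any existing system.

-- Minimal reconciliation sketch; all table and column names are
-- hypothetical and would be replaced by real source tables.
SELECT
    s.load_date,
    COUNT(s.record_id) AS staged_rows,
    COUNT(t.record_id) AS loaded_rows   -- non-null only where a match exists
FROM staging_orders AS s
LEFT JOIN warehouse_orders AS t
    ON t.record_id = s.record_id
GROUP BY s.load_date
HAVING COUNT(s.record_id) <> COUNT(t.record_id);  -- keep only dates with missing rows

In practice, a check like this would run on a schedule, with any returned row triggering an alert for investigation.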
Required Qualifications
- Proficiency in writing complex SQL queries, including joins, window functions, common table expressions (CTEs), and aggregations (a representative example follows this list).
- A foundational understanding of ETL processes and familiarity with data pipeline tools, to effectively manage data workflows.
- Experience working with relational databases such as PostgreSQL, MySQL, or SQL Server.
- Strong problem-solving skills and meticulous attention to detail, for maintaining data accuracy and resolving issues.
- Excellent communication and collaboration skills, for working seamlessly with cross-functional teams and stakeholders.
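To indicate the expected level of SQL, the query below is a representative sketch, again written for PostgreSQL against a hypothetical schema, combining a CTE, a join, an aggregation, and a window function to compute month-over-month revenue change by region.

-- Hypothetical schema: orders(order_id, customer_id, order_date, amount),
-- customers(customer_id, region).
WITH monthly_revenue AS (
    SELECT
        c.region,
        DATE_TRUNC('month', o.order_date) AS order_month,
        SUM(o.amount) AS revenue                  -- aggregation
    FROM orders AS o
    JOIN customers AS c ON c.customer_id = o.customer_id
    GROUP BY c.region, DATE_TRUNC('month', o.order_date)
)
SELECT
    region,
    order_month,
    revenue,
    revenue - LAG(revenue) OVER (                 -- window function
        PARTITION BY region ORDER BY order_month
    ) AS revenue_change
FROM monthly_revenue
ORDER BY region, order_month;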
Preferred Qualifications and Benefits
- Experience with Python or another scripting language, which supports automation and advanced data manipulation, is highly desirable.
- Familiarity with workflow orchestration tools such as Apache Airflow, transformation frameworks such as dbt, or cloud data warehouses such as Snowflake, BigQuery, or Redshift will be considered a strong advantage.
- Exposure to version control systems such as Git is preferred, to facilitate collaborative development and maintain code integrity.
This role offers the opportunity to contribute to a dynamic data environment, working with cutting-edge technologies and cross-functional teams to drive data-driven decision-making. If you are eager to grow your skills in a supportive and innovative setting, we encourage you to apply.