Data Engineer I – Job Description
About the Team
We are a technology-driven organization focused on building a modern, data-centric culture. Our mission is to use advanced analytics, automation, and scalable platforms to improve decision-making and deliver meaningful impact across the business. Our data team supports high-visibility initiatives spanning operations, product development, marketing, client experience, and performance optimization.
If you are excited about building reliable, scalable data infrastructure that powers analytics, machine learning, and automation, this is the place for you.
Who You Are
You are an experienced Data Engineer who enjoys designing and maintaining high-performance data systems. You excel in environments that demand strong data quality, reliable pipelines, and clear communication. You are comfortable partnering with cross-functional teams, including data scientists, analysts, and business leaders, to enable data-driven insights and smarter products.
Key Responsibilities
- Design, build, and maintain scalable ETL/ELT pipelines to support analytics and machine learning workloads.
- Develop and manage data models and warehouse structures for self-service analytics and reporting.
- Partner with stakeholders across the business to understand data needs and ensure data availability, accuracy, and usability.
- Implement and monitor data quality, validation, and governance checks to maintain trust in data assets.
- Collaborate with data science and analytics teams to ensure infrastructure supports model training, deployment, and monitoring.
- Optimize data storage and query performance across cloud-based and relational systems.
- Stay current with emerging tools, technologies, and architectural patterns in data engineering and promote best practices.
Required Qualifications
- 5+ years of experience in data engineering, data infrastructure, or a related technical field.
- Proficiency in SQL and at least one programming language (e.g., Python, Java, Scala).
- Hands-on experience with cloud data platforms (e.g., Snowflake, Redshift, BigQuery, or similar).
- Strong understanding of data modeling, warehousing concepts, and ETL/ELT pipeline development.
- Familiarity with modern data orchestration tools (e.g., Airflow, dbt).
- Excellent communication and collaboration skills.
Preferred Qualifications
- Experience with real-time data streaming technologies (e.g., Kafka, Kinesis).
- Familiarity with CI/CD workflows for data pipelines and infrastructure-as-code tools (e.g., Terraform).
- Experience supporting machine learning workflows and model deployment.
- Background in regulated industries such as insurance, financial services, or healthcare.
Security & Privacy Responsibilities
- Adhere to all organizational policies related to data security and privacy.
- Participate in ongoing training related to secure data handling and regulatory requirements.
- Maintain the highest standards of confidentiality when working with sensitive business or client data.
- Report any potential security or privacy incidents promptly.