Job Description
STRATEGIC STAFFING SOLUTIONS HAS AN OPENING!
This is a W2-only Contract Opportunity with our company. No C2C eligibility for this position. Visa Sponsorship is Available! The details are below.
“Beware of scams. S3 never asks for money during its onboarding process.”
Job Title: Senior Data Engineer
Contract Length: 12+ Month contract
On Site
Location: Phoenix, AZ 85027
Ref# 245130
Position Overview
We are seeking a highly skilled Senior Data Engineer to design, build, and support scalable data platforms in a cloud-first environment. This role focuses on developing modern streaming and batch data pipelines, supporting lakehouse architectures, and enabling advanced analytics across the organization. The ideal candidate thrives in a collaborative team setting and brings strong experience with distributed data processing, real-time streaming technologies, and public cloud data services.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Spark, Kafka, and cloud-native technologies.
- Build and support real-time data streaming solutions leveraging Kafka, Flink, and Spark Streaming.
- Implement and optimize data lakehouse architectures to support enterprise analytics and data science initiatives.
- Develop data workflows using Python, PySpark, SQL, and orchestration tools such as Airflow or Cloud Composer.
- Partner with cross-functional teams to deliver reliable, high-quality data solutions in a collaborative Agile environment.
- Manage large datasets across distributed systems including Hadoop and cloud storage platforms.
- Optimize data processing performance, reliability, and scalability.
- Work with NoSQL databases and diverse data formats to support evolving business requirements.
- Ensure data governance, security, and best practices are followed across the data ecosystem.
Required Qualifications
- 5+ years of data engineering experience, including hands-on work with Hadoop and Google Cloud data solutions.
- Proven experience building Spark-based processing frameworks and Kafka streaming pipelines.
- 2+ years of hands-on experience developing data flows using Kafka, Flink, and Spark Streaming.
- 3+ years of experience designing and implementing data lakehouse architectures.
- Strong programming experience with Python, PySpark, and SQL.
- Hands-on experience with Google Cloud Platform, including Cloud Storage, BigQuery, Dataproc, and Cloud Composer.
- 2+ years working with NoSQL databases such as columnar, graph, document, or key-value stores.
- Experience working within highly collaborative engineering teams.
Preferred Qualifications
- Public cloud certification such as GCP Professional Data Engineer, Azure Data Engineer Associate, or AWS Certified Data Analytics – Specialty.
- Experience supporting both batch and real-time analytics workloads.
- Familiarity with modern data architecture patterns and distributed systems design.