Job Description
STRATEGIC STAFFING SOLUTIONS HAS AN OPENING!
This is a Contract Opportunity with our company that MUST be worked on a W2 basis only. No C2C eligibility for this position. Visa sponsorship is available! The details are below.
“Beware of scams. S3 never asks for money during its onboarding process.”
Job Title: Software Engineer – SparkFlow Framework
Contract Length: 12+ months
Hybrid work (3 days on site / 2 days remote)
Location: CHARLOTTE, NC 28202
Ref# 245459
The Software Engineer will contribute to the SparkFlow enterprise data processing framework built on Apache Spark. The role focuses on implementing new framework features, strengthening existing components, improving developer experience, and delivering AI-enabled capabilities to simplify framework usage. This position will also support integration of SparkFlow into the Unity control plane by building and hardening required interfaces and workflows.
Key Responsibilities
- Build and enhance functional features in the SparkFlow framework, including sources, targets, transformations, governance and control capabilities, and reliability features.
- Implement and refine framework extension points such as APIs, configurations, and libraries to improve composability and reuse.
- Improve developer experience by simplifying configuration patterns (for example, pipeline JSON configs), reducing onboarding friction, and improving diagnostics and observability hooks.
- Develop AI-enabled solutions that assist developers, such as guided configuration generation, validation tools, troubleshooting accelerators, and other usability improvements.
- Contribute to Unity control plane integration by implementing adapters, operators, automation, and integration testing to support consistent orchestration.
- Participate in code reviews, design discussions, and operational support activities as needed.
Required Experience
- Strong hands-on engineering experience with Apache Spark using Scala and/or Java (Python is a plus), including Spark SQL-based processing.
- Experience building frameworks or libraries, including API and abstraction design.
- Working knowledge of CI/CD and engineering fundamentals, including Git, build tooling, and unit/integration testing.
- Experience with enterprise data ecosystem components such as Hadoop/Hive, Kafka, and cloud storage or warehouse patterns, including production hardening.
Nice to Have
- Experience improving pipeline onboarding and deployment patterns, such as config-driven artifacts, launcher scripts, and scheduler integration.
- Familiarity with governance capabilities, including audit trail capture, metadata or lineage integration, and data-in-motion controls.
- Cloud or hybrid experience, such as GCP Dataproc patterns supporting Spark workloads.