Contract
Posted on 13 March 2026 by Bob Cromer
Location: Charlotte, NC (Hybrid)
Duration: April 2026 – April 2028 (Long-term Contract)
Industry: Financial Services / Enterprise Risk Technology
We are seeking an experienced Big Data Developer to support enterprise data and risk technology initiatives within a large-scale financial services environment. This role focuses on designing, developing, and optimizing distributed data processing solutions using modern big data technologies.
The ideal candidate will have strong experience working with Apache Spark, Scala, Python, and large-scale data pipelines within the Hadoop ecosystem. You will collaborate with engineering teams to build high-performance data processing systems that support enterprise analytics, regulatory reporting, and risk management initiatives.
Responsibilities:
Design and develop scalable big data applications using Apache Spark, Scala, and Python.
Build and optimize large-scale distributed data processing pipelines.
Work with large datasets to support enterprise analytics, risk technology, and data platforms.
Develop and maintain ETL / ELT pipelines for data ingestion, transformation, and processing.
Integrate data from multiple sources within Hadoop-based data ecosystems.
Optimize data processing workflows for performance, reliability, and scalability.
Work within the Hadoop ecosystem including Hive, HDFS, Kafka, and HBase.
Support data platforms that process high-volume data workloads.
Implement batch and streaming data processing solutions.
Develop automated workflows and job scheduling using Autosys or similar scheduling tools.
Improve operational efficiency by automating data processing and pipeline management tasks.
Maintain source code using Git / GitHub version control.
Follow Agile development practices and collaborate with cross-functional engineering teams.
Participate in design discussions and contribute to data platform architecture decisions.
Support data governance, security, and regulatory compliance requirements in financial services environments.
Ensure data integrity, accuracy, and secure handling of sensitive enterprise data.
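To make the ETL responsibilities above concrete, here is a minimal batch-pipeline sketch in plain Python (a stand-in for Spark, which would not run self-contained here). The feed format, field names, and notional threshold are illustrative assumptions, not details from this role.

```python
import csv
import io

# Hypothetical raw feed: CSV trade records (fields are illustrative only).
RAW_FEED = """trade_id,desk,notional
T1,rates,1500000
T2,credit,250000
T3,rates,750000
bad-row-without-enough-fields
"""

def extract(raw):
    """Ingest: parse CSV rows, dropping malformed records instead of failing the batch."""
    reader = csv.DictReader(io.StringIO(raw))
    for row in reader:
        if row.get("notional") is None:  # short/malformed row
            continue
        yield row

def transform(rows):
    """Transform: cast types and keep only trades above a (made-up) threshold."""
    for row in rows:
        try:
            notional = float(row["notional"])
        except (TypeError, ValueError):
            continue
        if notional >= 500_000:
            yield {"trade_id": row["trade_id"], "desk": row["desk"], "notional": notional}

def load(rows):
    """Load: aggregate notional per desk (stand-in for a warehouse or Hive write)."""
    totals = {}
    for row in rows:
        totals[row["desk"]] = totals.get(row["desk"], 0.0) + row["notional"]
    return totals

totals = load(transform(extract(RAW_FEED)))
print(totals)  # {'rates': 2250000.0}
```

In Spark the same extract/transform/load stages would become DataFrame reads, filters, and aggregations distributed across the cluster; the generator-chain shape here mirrors that pipeline structure on a single machine.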
Qualifications:
5+ years of software engineering or big data development experience
Strong experience with:
Apache Spark
Scala
Python
Experience working with large-scale distributed data systems
Experience developing ETL pipelines and data processing frameworks
Experience with Git or other version control systems
Experience working within the Hadoop ecosystem (Hive, HDFS, Kafka, HBase)
Experience with Autosys or other enterprise job scheduling tools
Experience in financial services or highly regulated environments
Experience with cloud data platforms or modern data lake architectures
Key Skills:
Apache Spark
Scala
Python
Hadoop Ecosystem (Hive, HDFS, Kafka, HBase)
ETL / Data Pipelines
Autosys Job Scheduling
Git / Version Control
Distributed Data Processing
Data Governance & Compliance
Enterprise Data Platforms