Job Description
Job Title: Data Engineer
Work Allocation Responsibilities:
Design, set up, and/or build the integration jobs needed to store and move data from existing SoR applications into and through the company Microsoft Cloud environment (Data Lake, Azure SQL, Azure DWH) and into our Data Science Solutions, leveraging the company's common/standard integration patterns, tools, and languages (Azure Data Factory, Databricks, Spark SQL, Python, PySpark). Create schemas and build the OC for teams to manage SoRs once they have been created. Ensure alignment with vendors and third parties.

Required Technical Skills:
- Experience with data analysis/modeling, data acquisition/ingestion, data cleaning, and data engineering/pipelines
- Importing data via APIs, ODBC, Azure Data Factory, or Azure Databricks from systems of record such as Azure Blob Storage, SQL DBs, NoSQL DBs, Data Lake, Data Warehouse, etc. (a minimal illustrative sketch follows this list)
- Configuring data flows and building Ansible pipelines that feed analytics tools such as Power BI, Azure Analysis Services, or other data-science tools
- Building APIs in Azure Functions and creating OpenAPI 3.0 specifications
- Troubleshooting and supporting data issues across different solution products
- Building data models for structured and unstructured data
- Transforming data with various ETL tools and/or scripting
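
For illustration only, the sketch below shows the kind of ingestion-and-transformation job described above: pulling a table from a system of record over JDBC, applying light cleaning in PySpark, and landing the result in the Data Lake. The server, database, table, credentials, and storage paths are placeholders invented for this example, not details of the role.

    from pyspark.sql import SparkSession, functions as F

    # On Databricks a SparkSession is already provided as `spark`;
    # building one here keeps the sketch self-contained.
    spark = SparkSession.builder.appName("sor-ingestion-example").getOrCreate()

    # Placeholder connection details for a hypothetical Azure SQL system of record.
    jdbc_url = (
        "jdbc:sqlserver://example-sql.database.windows.net:1433;"
        "database=ExampleSoR;encrypt=true"
    )

    # Ingest one table over JDBC (API or ODBC ingestion follows the same pattern).
    orders = (
        spark.read.format("jdbc")
        .option("url", jdbc_url)
        .option("dbtable", "dbo.Orders")
        .option("user", "example_user")          # placeholder credentials
        .option("password", "example_password")
        .load()
    )

    # Light cleaning: drop duplicates, remove rows missing the key, normalise a type.
    cleaned = (
        orders.dropDuplicates(["order_id"])
        .filter(F.col("order_id").isNotNull())
        .withColumn("order_date", F.to_date("order_date"))
    )

    # Land the result in a hypothetical ADLS Gen2 container as Parquet.
    cleaned.write.mode("overwrite").parquet(
        "abfss://curated@examplelake.dfs.core.windows.net/orders/"
    )

In Azure Data Factory the same movement would typically be expressed as a copy activity plus a Databricks or mapping data flow activity rather than hand-written code.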
Required Functional Knowledge (Domain):
- Azure Functions
- Azure Storage
- Azure Data Factory
- Azure Synapse
- Azure Data Lake
- Databricks
- Databases (SQL, NoSQL, CosmosDB)
- Data structures – XML, JSON, RDBMS
- Data-flow transformations via SQL and Python (see the sketch after this list)
- Programming languages and automation tools: C#, Ansible
- Business Intelligence: Power BI, Spotfire, Power Apps
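
As a minimal illustration of the "Data-flow transformations via SQL and Python" item above, the sketch below registers semi-structured JSON as a temporary view, aggregates it with Spark SQL, and finishes the transformation in Python. The storage paths, view name, and column names (event_time, event_type) are assumptions made for the example, not specifics of this role.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dataflow-transform-example").getOrCreate()

    # Read semi-structured JSON events; the path is a placeholder.
    events = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

    # Expose the DataFrame to Spark SQL and aggregate with plain SQL.
    events.createOrReplaceTempView("events")
    daily = spark.sql("""
        SELECT CAST(event_time AS DATE) AS event_date,
               event_type,
               COUNT(*)                 AS event_count
        FROM events
        GROUP BY CAST(event_time AS DATE), event_type
    """)

    # Continue the same data flow in Python, then write to a curated zone.
    daily = daily.withColumn("is_high_volume", F.col("event_count") > 1000)
    daily.write.mode("overwrite").parquet(
        "abfss://curated@examplelake.dfs.core.windows.net/daily_event_counts/"
    )

The Parquet output written here is the sort of curated dataset that Power BI or other BI tools listed above would then consume.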