Job Title: Sr Specialist, Data Engineer 

Job Code: 29872

Job Location: Melbourne, FL (On-Site)

Job Schedule: 9/80: Employees work 9 out of every 14 days – totaling 80 hours worked – and have every other Friday off

 

Job Description:

The L3Harris Enterprise Data and AI team is seeking a Data Engineer with experience managing enterprise-level data life cycle processes. This role includes overseeing ETL/ELT data pipelines, ensuring adherence to data standards, maintaining data frameworks, conducting data cleansing, orchestrating data pipelines, and ensuring data consolidation. The selected individual will play a pivotal role in maintaining ontologies, building scalable data solutions, and developing dashboards within Palantir Foundry to provide actionable insights for the enterprise. This position will support the company’s modern data platform, the Unified Data Layer, focusing on data pipeline development and maintenance, data platform design, documentation, and user training. The goal is to ensure seamless access to data at all levels of the organization, empowering decision-makers with clean, reliable data.

 

Essential Functions:

  • Design, build, and maintain robust data pipelines to ensure reliable data flow across the enterprise.
  • Maintain data pipeline schedules, orchestrate workflows, and monitor the overall health of data pipelines to ensure continuous data availability.
  • Create, update, and optimize data connections, datasets, and transformations to align with business needs.
  • Troubleshoot and resolve data sync issues, ensuring consistent and correct data flow from source systems.
  • Collaborate with cross-functional teams to uphold data quality standards and ensure accurate data is available for use.
  • Utilize Palantir Foundry to establish data connections to source applications, extract and load data, and design complex logical data models that meet functional and technical specifications.
  • Develop and manage data cleansing, consolidation, and integration mechanisms to support big data analytics at scale.
  • Build visualizations using Palantir Foundry tools and assist business users with testing, troubleshooting, and documentation creation, including data maintenance guides.

 

Qualifications:

  • Bachelor’s Degree and a minimum of 6 years of prior Palantir experience, or a Graduate Degree and a minimum of 4 years of prior Palantir experience. In lieu of a degree, a minimum of 10 years of prior Palantir experience.
  • 4+ years of experience with data pipeline development or ETL tools such as Palantir Foundry, Azure Data Factory, SSIS, or Python.
  • 4+ years of experience in Data Integration.
  • 4+ years of experience with the design and development of data pipelines in Palantir Foundry Pipeline Builder or Code Repositories, PySpark and Spark SQL, and data build/sync schedule deployment in Palantir.

 

Preferred Additional Skills:

  • Understanding of BI (Business Intelligence) & DW (Data Warehouse) development methodologies.
  • Hands-on experience with the Snowflake cloud data platform.
  • Experience with Python, Pandas, Databricks, JavaScript, TypeScript, or other scripting languages and tools.
  • Experience with ETL tools such as Palantir Foundry, ADF (Azure Data Factory), SSIS, Informatica, or Talend.
  • Working knowledge of connecting to and extracting data from various ERP applications such as Oracle EBS, SAP ECC/S4, and Deltek Costpoint, as well as via REST APIs.
  • Experience with AI tools such as OpenAI, Palantir AIP, Snowflake Cortex or similar.


