Overview
We are seeking a Senior Data Engineer to join a remote project team building on Palantir Foundry for a major insurance provider. The contractor will design and implement data pipelines and analytics products, collaborating closely with analysts and engineers to ensure data quality and reliable functionality. The role suits experienced data engineers who want to shape high-volume data processing at global scale.
Responsibilities
- Design, build, and own Palantir Foundry data pipelines end-to-end (see the sketch after this list).
- Write production-grade Python, PySpark, and SQL for ELT on large distributed datasets.
- Implement warehousing patterns with strong data quality checks and lineage.
- Translate insurance requirements into robust data products via Foundry Ontology.
- Diagnose pipeline issues and document workflows for repeatability.
- Collaborate in Scrum with analysts, SMEs, and engineers to deliver project objectives.
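For a flavor of the day-to-day work, here is a minimal sketch of the kind of Foundry pipeline step this role would own, assuming Foundry's standard transforms.api Python decorators; the dataset paths, column names, and quality rules are hypothetical placeholders, not project specifics.

```python
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output  # Foundry Python transforms API


@transform_df(
    Output("/Insurance/clean/claims_curated"),   # hypothetical output dataset path
    raw_claims=Input("/Insurance/raw/claims"),   # hypothetical input dataset path
)
def curate_claims(raw_claims):
    """Deduplicate raw claims and enforce basic data-quality rules."""
    return (
        raw_claims
        .dropDuplicates(["claim_id"])                      # keep one row per claim
        .filter(F.col("claim_amount") >= 0)                # reject negative amounts
        .withColumn("ingested_at", F.current_timestamp())  # record processing time
    )
```

In practice, checks like these would be extended with the team's agreed data-quality rules and lineage conventions so downstream data products stay trustworthy.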
Requirements
- Strong experience in data engineering for large-scale distributed systems.
- Hands-on development experience with Palantir Foundry is a must.
- Deep PySpark and Spark SQL skills.
- Strong ELT/data-warehousing fundamentals and a CI/CD mindset.
- Clear communicator, comfortable working in Agile teams.
- Experience in the insurance or financial services domain is a bonus.