Data Streaming Engineer

Overview

6-Month Contract | E-commerce | Data Streaming Engineer

Duration: 6 months
Location: Remote (UK-based preferred)
IR35 Status: Outside IR35
Start Date: ASAP

I'm currently hiring for a cutting-edge e-commerce company based in London that's making the shift to a streaming-first data architecture. As part of this transformation, they're looking for an experienced Senior Data Streaming Engineer to join on a 6-month contract (outside IR35). The role is fully remote; UK-based candidates with full right to work are preferred, but EU candidates will be considered.

The Opportunity

You'll play a key role in shaping and delivering the technical strategy behind a modern, cloud-native, real-time data platform. From Kafka pipelines to Databricks orchestration, this is a chance to work on high-scale systems powering real-time insights across the business.

What We're Looking For

The ideal contractor will bring a strong blend of strategic thinking and hands-on engineering. Specifically, you'll have:

  • Proven experience as a Senior Data Engineer, focused on real-time, distributed systems
  • Deep expertise with modern streaming tools: Kafka, Flink, Spark, Databricks
  • Strong cloud-native data architecture experience (AWS, GCP, or Azure)
  • Solid understanding of streaming vs. batch processing paradigms
  • Experience defining technical strategy and creating actionable roadmaps
  • Familiarity with broader data ecosystem tools such as Airflow, Dagster, and dbt
  • Strong skills in Python, Scala, or Java
  • A background in ETL/ELT pipelines, data warehousing, and analytics engineering best practices
  • Experience working with privacy-sensitive data and implementing data governance frameworks
  • Clear, confident communication and a collaborative mindset to align diverse stakeholders

Responsibilities

  • Design and implement real-time data streaming solutions using modern tools like Kafka, Flink, and Spark.
  • Collaborate with cross-functional teams to define technical strategy and create actionable roadmaps.
  • Orchestrate data workflows using cloud-native platforms like Databricks.
  • Optimize ETL/ELT pipelines and improve data processing efficiency.
  • Implement best practices for data governance and handle privacy-sensitive data.
  • Communicate effectively with diverse stakeholders to ensure alignment and project success.

Requirements

  • Proven experience as a Senior Data Engineer focused on real-time distributed systems.
  • Deep expertise with streaming tools such as Kafka and Databricks.
  • Strong knowledge of cloud-native data architecture, particularly with AWS, GCP, or Azure.
  • Familiarity with data ecosystem tools like Airflow, Dagster, or dbt.
  • Strong programming skills in Python, Scala, or Java.
  • Experience with ETL/ELT processes and data warehousing best practices.
  • Demonstrated ability to define technical strategy and deliver actionable roadmaps.
Skills: Java, Python, Scala, AWS, GCP, Azure
Location: United Kingdom
Type: On-site
Source: LinkedIn
Posted: 06/11/25