
Senior Algorithm Engineer (Python/Spark–Distributed Processing)

Location:
Remote work, Germany
Salary:
Remote – UK (Outside IR35) & NL, BE, GER (B2B)
Job Type:
Remote, Contract
Expiry Date:
07/05/2026
Job Ref:
BH-124039-4
Start Date:
23/03/2026
Contact:
Sergio Osman
Contact Email:
sergio.osman@xcede.com
Specialism:
Machine Learning, Data Engineering, UK Remote, Germany, EU Remote, England
Senior Algorithm Engineer (Python / Spark – Distributed Data Processing)
Location: UK (Outside IR35) / Belgium / Netherlands / Germany (B2B)
Working model: Remote
Start: ASAP


We’re hiring a Senior Algorithm Engineer to join a data-intensive SaaS platform operating in a complex, regulated industry. This is a hands-on senior IC role focused on building and optimising distributed data pipelines that power pricing, forecasting and billing calculations at scale. Please note: this is not an ML / Data Science / GenAI role.

What you’ll be doing
  • Design, build and deploy algorithms/data models supporting pricing, forecasting and optimisation use cases in production
  • Develop and optimise distributed Spark / PySpark batch pipelines for large-scale data processing
  • Write production-grade Python workflows implementing complex, explainable business logic
  • Work with Databricks for job execution, orchestration and optimisation 
  • Improve pipeline performance, reliability and cost efficiency across high-volume workloads
  • Collaborate with engineers and domain specialists to translate requirements into scalable solutions
  • Provide senior-level ownership through technical leadership, mentoring and best-practice guidance
Key experience required
  • Proven experience delivering production algorithms/data models (forecasting, pricing, optimisation or similar)
  • Strong Python proficiency and modern data stack exposure (SQL, Pandas/NumPy, PySpark; Dask/Polars/DuckDB a bonus)
  • Ability to build, schedule and optimise Spark/PySpark pipelines in Databricks (Jobs/Workflows, performance tuning, production delivery)
  • Hands-on experience with distributed systems and scalable data processing (Spark essential)
  • Experience working with large-scale/high-frequency datasets (IoT/telemetry, smart meter, weather, time-series)
  • Clear communicator able to influence design decisions, align stakeholders and operate autonomously
Nice to have
  • Energy/utilities domain exposure
  • Cloud ownership experience (AWS preferred, Azure also relevant)
  • Experience defining microservices / modular components supporting data products

Sergio Osman

Specialisms: Data, Data Science, Digital & Product Analytics, Marketing & Insight Analytics, Data Engineering, Business Intelligence, Credit Risk & Analytics