Ophelos · We're hiring

Senior Data Engineer

Generalist data engineer to own ingestion and analytics — independently, AI-first, in a fast-moving startup.

We're a small data function inside a tech company at a real inflection point. This role takes ownership of two foundational projects: robust client-data ingestion, and giving the business a live view of how the Forest platform is performing. You'll work independently across teams, with AI as a core part of how you build.

The basics
Where you'll be and how we work hybrid.
Location
1 Finsbury Ave
London EC2M 2PF
In the office
3 days a week
Team
Data
What we're looking for

Key skills

We're hiring for mindset as much as for skills. The right person can ramp fast in a small team and own outcomes without needing the work pre-scoped for them.

01

Independent in ambiguity

You can take a vague outcome and turn it into shipped work — scoping, deciding architecture, and building without waiting for direction. Most of what you'll work on starts as a problem, not a spec.

02

Generalist data engineering

Strong Python and SQL, comfortable with Databricks and Postgres. You've built ingestion pipelines end-to-end and have opinions on how to do it well: deduplication, merging multiple sources, cutting time-to-onboard.

03

AI-native development

You build with AI tools as part of your default workflow: most coding happens through prompt-driven flows like Claude Code rather than direct edits. You know what these tools are good at and where they fall short.

The team

Who you'll work with

A small data and engineering crew. You'd embed in cross-functional teams alongside data and software engineers — supported by, but not dependent on, the people below.

Matt Jackson · Engineering Director
Jacob Goss · Head of Data
Jerry · Data Engineer
Lily · Data Engineer
The role

What you'll be doing

In your first six months you'll work across two priority projects: client data ingestion, and building out Forest analytics for the business. You'll be the dedicated data engineer for these — making your own architecture calls and shipping with AI tooling as a default.

Example projects
Client data ingestion
Build and improve the pipelines that bring client data into Forest. Files (not APIs) are the norm — every client gives data differently. The goal: reduce time-to-onboard and handle the classic problems of duplicates and clashes across sources.
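A flavour of the multi-source deduplication problem this project involves. This is a hypothetical sketch only: the field names (`account_ref`, `updated_at`, `balance`) and the last-write-wins rule are illustrative assumptions, not Ophelos's actual schema or logic.

```python
from typing import Iterable

def merge_client_records(sources: Iterable[list[dict]]) -> dict[str, dict]:
    """Merge client records arriving from multiple files.

    Deduplicates on a hypothetical stable key ('account_ref'); when two
    sources clash, the record with the latest 'updated_at' (an ISO date
    string, so lexicographic comparison works) wins.
    """
    merged: dict[str, dict] = {}
    for records in sources:
        for rec in records:
            key = rec["account_ref"]
            if key not in merged or rec["updated_at"] > merged[key]["updated_at"]:
                merged[key] = rec
    return merged

# Two files describing overlapping accounts, as different clients might send.
file_a = [
    {"account_ref": "A1", "balance": 100, "updated_at": "2024-01-01"},
    {"account_ref": "A2", "balance": 250, "updated_at": "2024-01-01"},
]
file_b = [
    {"account_ref": "A1", "balance": 90, "updated_at": "2024-02-01"},
]

merged = merge_client_records([file_a, file_b])
# A1 resolves to the later record; A2 passes through untouched.
```

In practice this kind of logic would live in Databricks pipelines rather than plain Python, but the core decisions are the same: pick a stable key per client, and pick a rule for resolving clashes.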
Forest analytics
Set up analytics so business users — including the Intrum markets — can see how Forest is performing in depth. Engagement, performance, client-facing metrics. Today only basic reporting exists; this is core to letting the business act on what's working.
Internal BI migration
Later in the year, lead the move off Thoughtspot to a new BI tool that works for technical and non-technical users at Ophelos. Choose the tool, set it up, get the data in. May involve interfacing with Intrum's BI systems.
A day in the life

Most days you're working independently on whichever of the priority projects is most urgent. You'll be embedded in a small cross-functional team alongside other data engineers (Jerry, Lily) and software engineers from the Forest and Engage sides — you can ask anyone, but no one is going to spoon-feed you the work. You'll spend most of your coding time in Claude Code, prompt-driven rather than typing edits, and regularly be making your own calls on architecture and scope. Quick syncs with Matt or Jacob keep things moving; the rest of the day is you, the problem, and the build.

How we work

Ways of working and the stack

How we work
  • Cross-functional small teams — embedded with data engineers, data scientists, and software engineers.
  • Independent by default — you're given outcomes, not tickets. Scope and architecture calls are yours.
  • AI-first — almost all coding done through prompt-driven workflows, not direct edits.
  • Fast-moving and ambiguous — the org changes, the priorities change, and you stay productive through it.
Tech stack
  • Python
  • SQL
  • Databricks — the data platform
  • Postgres — relevant if you've worked close to the database
Tools
  • Claude Code — primary AI coding tool
  • Datadog — observability and monitoring
  • GitHub — code hosting (assumed)
  • Thoughtspot — current BI, migrating away
What to expect

Our hiring process

Designed to be honest about how you'll actually work — not a series of hoops. Four stages, no take-homes.

Stage 1
Screening call
25 min · with Will from People & Talent
A quick intro — your background, why Ophelos, and the practicals.
  • Walk through your experience
  • Why Ophelos, why now
  • Practicalities — comp, notice, location
Stage 2
Live technical
60 min · with two of our engineers
Live coding with our data engineers. We care more about how you use your tools than whether you can solve the problem.
  • Pair on a data engineering task
  • Use your real day-to-day toolkit, including Claude Code
  • Discuss tradeoffs and architecture as you go
Stage 3
Values & culture
45 min · with two members of the wider team
A real values interview — focused on how you operate independently in ambiguity. Not a rubber stamp.
  • How you handle ambiguous, fast-changing work
  • What you need from a team to do your best work
  • Meet some of the wider team
Stage 4
Founder final
30 min · with one of our founders
A conversation with one of our founders to close out the decision.
  • The bigger picture for Ophelos and Intrum
  • Mutual fit check
  • Any open questions