Job Description
This is for researchers who have already built frontier systems - and want to do it again.
We’re assembling a small, high-conviction research group in San Francisco.
The founding team includes:
- Ex-DeepMind research leads
- Ex-Meta FAIR core contributors
- Researchers who have trained and shipped frontier-scale models
- Engineers who have built distributed training systems from scratch
They’ve done it inside big labs.
Now they’re building without the constraints.
The mandate
Build core model capability - not products around models.
That means:
- Novel architecture work
- Training runs at real scale
- Mechanistic understanding, not just loss curves
- Post-training that materially shifts reasoning behaviour
- Systems that generalise beyond curated eval sets
What makes this different
- Backed by serious capital with long runway
- Compute is a strategic priority, not a bottleneck
- Single-digit research team
- No quarterly product theatre
- No bloated org chart
This is closer to an early DeepMind research cell than a typical startup. High agency. High expectations. High upside.
Who this resonates with
You probably:
- Have publications at NeurIPS / ICML / ICLR that people actually cite
- Have worked at DeepMind / OpenAI / Anthropic / FAIR or similar frontier labs
- Have trained large models or owned core subsystems
- Care about capability jumps, not feature velocity
- Want direct influence over research direction again
If you’re comfortable being one of 200 researchers, this isn’t for you.
If you’d rather be one of eight shaping direction - we should talk.
Compensation
Structured to compete directly with frontier labs.
- Base in the high six figures
- Meaningful equity
- Total packages exceeding $1.5M for true top-tier profiles
Discretion guaranteed.