Job Description
About Opticore
Opticore is a photonic computing company building the next generation of energy-efficient AI hardware. Spun out of MIT and based in Berkeley, CA, we are a Series A startup on a mission to solve the energy and scaling challenges of modern AI computing. Our team is tight-knit, technically deep, and moving fast. All roles are on-site in Berkeley, CA.
About the Role
You understand how large language models and AI workloads map onto hardware, and you know how to build the software infrastructure to make that mapping efficient. At Opticore, you will co-design the programming model and compiler stack for our photonic computing platform — working directly with the chip team to shape hardware architecture decisions from day one. This is a rare greenfield opportunity: you will build our compiler toolchain from the ground up and define how software runs on a fundamentally new type of processor.
Responsibilities
- Develop a deep understanding of how LLM inference workloads map to Opticore's photonic hardware
- Design and implement the early-stage compiler infrastructure for our platform (stack TBD — you will help choose it)
- Model and optimize memory pipelines, data flow, and compute scheduling for photonic execution
- Actively co-design hardware architecture with the chip team — this is not a spec-driven role
- Identify software-hardware co-optimization opportunities across the stack
- Lay the foundation for a full compiler and runtime software organization
Requirements
- Strong background in computer architecture, with hands-on experience implementing memory controllers, pipelines, and hardware execution models
- Experience mapping ML/LLM workloads to custom or novel hardware
- Compiler engineering experience (MLIR, LLVM, TVM, XLA, or equivalent)
- Ability to work at the intersection of hardware and software, and communicate fluently with chip designers
- Industry or research experience with ML accelerators, custom ASICs, or novel compute architectures
Nice to Have
- Experience building compiler infrastructure from scratch
- Background in quantization, sparsity, or other model optimization techniques for hardware efficiency
- Prior work on non-von-Neumann or emerging compute architectures