Research & Systems Programmer -- AI Code Evaluation & Training (Remote)
Job Description
Now prioritizing candidates with a research, simulation, or systems focus!
If you’ve worked in academic labs, simulation environments, or low-level performance engineering, this role is built for you.
Help train large language models (LLMs) to write clean, high-performance scientific code.
Your expertise will support this feedback loop:
- Compare & rank AI-generated code used in scientific or data-heavy environments
- Repair & refactor code in MATLAB, Zig, or related tools
- Inject feedback to help the model learn how to reason through complex logic
End result: The model gets better at working in research, simulation, or performance-critical domains.
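To give a concrete, purely hypothetical flavor of the work, the sketch below shows the kind of comparison a reviewer might make: an AI-generated numerical routine with a subtle precision flaw, a repaired version, and the written feedback that would accompany the ranking. The function names, data, and critique are illustrative assumptions only and are not drawn from any actual review tooling.

```python
# Hypothetical review task: an AI-generated routine with a numerical flaw,
# a repaired version, and the feedback a reviewer might attach to the ranking.

def variance_generated(xs):
    # AI-generated: naive E[x^2] - E[x]^2 formula, which suffers catastrophic
    # cancellation when values are large and tightly clustered.
    n = len(xs)
    mean_sq = sum(x * x for x in xs) / n
    mean = sum(xs) / n
    return mean_sq - mean * mean

def variance_repaired(xs):
    # Repaired: Welford's one-pass algorithm, numerically stable.
    mean, m2 = 0.0, 0.0
    for i, x in enumerate(xs, start=1):
        delta = x - mean
        mean += delta / i
        m2 += delta * (x - mean)
    return m2 / len(xs)

feedback = (
    "Ranked the repaired version higher: the generated code uses the "
    "cancellation-prone E[x^2] - E[x]^2 formula, while Welford's method "
    "stays accurate for large, tightly clustered inputs."
)

if __name__ == "__main__":
    data = [1e9 + 0.1, 1e9 + 0.2, 1e9 + 0.3]
    print(variance_generated(data))  # precision loss visible here
    print(variance_repaired(data))   # ~0.00667, the correct population variance
    print(feedback)
```

The same judgment applies when the code under review is MATLAB or Zig; Python is used above only because it is the role's baseline language.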
What You’ll Need
- 3+ years of software engineering experience in Python
- Familiarity with MATLAB (academic/research) or Zig (low-level performance)
- Ability to assess and explain code quality with precision
- Excellent written communication and attention to detail
- Comfortable in async, remote workflows
What You Don’t Need
- No RLHF or machine learning experience required
Tech Stack
We're looking for strength in MATLAB, Zig, or scientific computing tools.
Logistics
- Location: Fully remote — work from anywhere
- Compensation: $30–$70/hr depending on location and seniority
- Hours: Minimum 15 hrs/week, up to 40 hrs/week available
- Engagement: 1099 contract
Straightforward impact. Zero fluff.
If this sounds like a fit, apply here!