Mindrift
We're building a dataset to evaluate AI coding agents by creating challenging tasks and evaluation criteria within realistic simulated environments. The work involves building virtual companies, assembling and calibrating tasks set in isolated environments, writing tests, and iterating with AI agents. Contributors will work part-time on non-permanent projects, with a focus on testing and evaluating AI systems.
Originally posted on Himalayas