
Rust Developer - AI Training

Job Description

Work Mode: Remote
Engagement Type: Independent Contractor
Schedule: Full-Time or Part-Time Contract
Language Requirement: Fluent English

Role Overview

We partner with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. This project focuses specifically on evaluating and improving how AI systems reason about code, generate programming solutions, and explain technical concepts across various complexity levels. The role involves rigorous technical evaluation of AI-generated responses in coding and software engineering contexts.

What You'll Do

- Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness
- Conduct fact-checking using trusted public sources and authoritative references
- Conduct accuracy testing by executing code and validating outputs using appropriate tools
- Annotate model responses by identifying strengths, areas for improvement, and factual or conceptual inaccuracies
- Assess code quality, readability, algorithmic soundness, and explanation quality
- Ensure model responses align with expected conversational behavior and system guidelines
- Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines

Who You Are

- You hold a BS, MS, or PhD in Computer Science or a closely related field
- You have significant real-world experience in software engineering or related technical roles
- You are an expert in at least one relevant programming language (e.g., Python, Java, C++, JavaScript, Go, Rust)
- You are able to solve HackerRank or LeetCode Medium- and Hard-level problems independently
- You have experience contributing to well-known open-source projects, including merged pull requests
- You have significant experience using LLMs while coding and understand their strengths and failure modes
- You have strong attention to detail and are comfortable evaluating complex technical reasoning and identifying subtle bugs or logical flaws

Nice-to-Have Specialties

- Prior experience with RLHF, model evaluation, or data annotation work
- Track record in competitive programming
- Experience reviewing code in production environments
- Familiarity with multiple programming paradigms or ecosystems
- Experience explaining complex technical concepts to non-expert audiences

What Success Looks Like

- You identify incorrect logic, inefficiencies, edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions
- Your feedback improves the correctness, robustness, and clarity of AI coding outputs
- You deliver reproducible evaluation artifacts that strengthen model performance
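To make the evaluation work concrete, here is a small, hypothetical Rust sketch of the kind of subtle bug this role is expected to catch in AI-generated code: a midpoint helper that looks correct and passes casual testing, but overflows near `i32::MAX`. The function names (`midpoint_buggy`, `midpoint_fixed`) are illustrative, not part of any real project.

```rust
// Hypothetical example of a subtle bug an evaluator might flag in
// AI-generated code: computing a midpoint with (lo + hi) / 2.

// AI-generated version: lo + hi can exceed i32::MAX, which panics in
// debug builds and wraps around in release builds.
fn midpoint_buggy(lo: i32, hi: i32) -> i32 {
    (lo + hi) / 2
}

// Corrected version: compute the offset first; (hi - lo) / 2 cannot
// overflow when lo <= hi.
fn midpoint_fixed(lo: i32, hi: i32) -> i32 {
    lo + (hi - lo) / 2
}

fn main() {
    // Both versions agree on small inputs, so casual testing misses the bug.
    assert_eq!(midpoint_buggy(2, 6), 4);
    assert_eq!(midpoint_fixed(2, 6), 4);

    // Near i32::MAX the naive sum overflows: checked_add makes the
    // failure explicit without triggering a panic here.
    let (lo, hi) = (i32::MAX - 2, i32::MAX);
    assert!(lo.checked_add(hi).is_none());

    // The fixed version stays correct on the same inputs.
    assert_eq!(midpoint_fixed(lo, hi), i32::MAX - 1);
}
```

Executing the code (as the accuracy-testing duty above describes) is what exposes the difference: both versions return 4 for `(2, 6)`, but only the fixed one survives inputs near the type's limits.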
