Test Engineer-AI/LLM
Palo Alto, CA | April 3rd, 2026
Job Description
OPPO US Research Center is seeking a meticulous and innovative full-time AI/LLM Test Engineer to join our cutting-edge AI team. In this critical role, you will evaluate the performance, reliability, and safety of Large Language Models (LLMs) in real-world product scenarios and test end-to-end generative AI solutions. Your work will directly shape how users experience AI-powered features by ensuring robustness, accuracy, and alignment with product goals. This is a unique opportunity to pioneer testing methodologies for next-generation AI systems at the forefront of technology.

We are also seeking a contract-based LLM Evaluation & QA Engineer to support the testing and validation of LLM-powered applications. You will help implement test strategies, execute evaluation workflows, and assist in model performance validation across diverse generative AI use cases. This contract role is ideal for someone with hands-on experience in AI/ML evaluation, QA engineering, or data analysis who wants to deepen their exposure to generative AI systems.

Requirements

Full-time position requirements:

Core Testing & Evaluation
- Design and execute performance tests for LLMs across diverse product use cases (e.g., chatbots, content generation).
- Develop automated test frameworks to evaluate LLM outputs for accuracy, bias, safety, and coherence.
- Conduct end-to-end testing of integrated generative AI solutions, including APIs, data pipelines, and user interfaces.

Optimization & Validation
- Collaborate with ML engineers to validate fine-tuned models and optimize prompts for target scenarios.
- Analyze model failures, edge cases, and adversarial inputs to identify risks and improvement areas.
- Benchmark LLM performance against industry standards and product-specific KPIs.

Collaboration & Quality Assurance
- Partner with product, engineering, and research teams to define test requirements and acceptance criteria.
- Document defects, performance metrics, and test results to drive data-driven improvements.
- Advocate for AI ethics and safety through rigorous testing of fairness, bias mitigation, and content moderation.

Innovation & Tooling
- Build scalable tools for synthetic test data generation, prompt variation testing, and automated evaluation workflows.
- Stay current with advancements in generative AI testing, including red-teaming techniques and evaluation frameworks (e.g., HELM, Dynabench).
- Propose novel testing strategies for emerging challenges (e.g., hallucinations, context drift).

Basic Qualifications:
- Bachelor's degree in Computer Science, Data Science, Engineering, or a related technical field, or equivalent practical experience.
- 1+ years of experience in software testing, data science, or ML validation, with exposure to AI/ML systems.
- Proficiency in Python and testing frameworks (e.g., PyTest, Selenium).
- Hands-on experience evaluating LLMs in production environments (e.g., GPT, Claude, Llama, Gemini).
- Strong analytical skills for dissecting model behavior, statistical performance, and failure modes.
- Familiarity with cloud platforms (GCP, Azure, or AWS) and MLOps tooling (e.g., MLflow, Weights & Biases).
- Experience with version control (Git) and agile development methodologies.

Preferred Qualifications:
- Master's degree in AI, Machine Learning, or a related field.
- Expertise in prompt engineering, LLM fine-tuning (e.g., LoRA, RLHF), or optimization techniques.
- Experience with automated evaluation tools (e.g., LangChain, TruLens) or LLM-specific test suites.
- Knowledge of data pipelines, SQL/NoSQL databases, and API testing (e.g., Postman).
- Background in statistics, quantitative analysis, or data visualization for test insights.
- Contributions to AI safety/ethics initiatives or open-source LLM evaluation projects.
- Experience testing mobile-integrated AI solutions (Android/iOS).

Contractor position requirements:

Testing & Evaluation Support
- Execute pre-defined performance tests for LLMs across various tasks (e.g., summarization, Q&A, chatbot flows).
- Run scripted evaluations to assess outputs for factuality, coherence, and safety.
- Perform manual and automated test execution on APIs and LLM-integrated user interfaces.

Prompt & Model Validation
- Assist ML engineers in evaluating prompt variations and prompt-tuning outcomes.
- Log and analyze failure cases, anomalies, and edge cases based on provided guidelines.

Collaboration & Documentation
- Work with QA leads, product managers, and ML engineers to understand test goals and criteria.
- Report defects, compile evaluation summaries, and maintain testing logs.

Tooling & Automation
- Use existing internal tools or frameworks to automate test runs and result collection.
- Contribute to prompt generation, input templating, or result tagging processes.

Basic Qualifications:
- Bachelor's degree or equivalent work experience in a technical field (e.g., Computer Science, Engineering, Data Science).
- 6+ months of experience in software QA, data labeling, LLM evaluation, or ML testing projects.
- Basic Python proficiency, especially for data processing and automation tasks.
- Familiarity with LLMs (e.g., GPT, Claude, Gemini) and prompt-based outputs.
- Comfortable working with tools like Jupyter, Postman, or testing dashboards.
- Detail-oriented with good documentation habits.

Contractor Details:
- Duration: Long term
- Rate: Commensurate with experience
- Conversion Opportunity: High-performing contractors may be considered for full-time roles

Benefits

OPPO is proud to be an equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements.

The US base salary range for this full-time position is $100,000-$200,000, plus bonus, long-term incentives, and benefits. Our salary ranges are determined by role, level, and location.
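Purely as an illustration of the kind of automated LLM output evaluation described above, and not part of the posting itself, here is a minimal PyTest sketch that checks model responses against a factuality rubric and a disallowed-content pattern list. Every name in it (generate_answer, the canned responses, the keyword and pattern rubrics) is a hypothetical placeholder, not OPPO tooling or a specific model API.

```python
# Illustrative only: a minimal PyTest sketch of automated LLM output evaluation.
# All names below (generate_answer, canned responses, rubrics) are hypothetical
# placeholders, not actual OPPO tooling or model APIs.
import re

import pytest

# Stand-in responses for the model under test; replace with a real client call.
_CANNED_RESPONSES = {
    "What is the capital of France?": "The capital of France is Paris.",
    "How many days are in a leap year?": "A leap year has 366 days.",
    "Tell me a fun fact about space.": "A day on Venus is longer than its year.",
}


def generate_answer(prompt: str) -> str:
    """Hypothetical model client: swap the canned lookup for an LLM endpoint call."""
    return _CANNED_RESPONSES.get(prompt, "")


# (prompt, keywords a correct answer should contain) -- a toy factuality rubric.
FACTUALITY_CASES = [
    ("What is the capital of France?", ["paris"]),
    ("How many days are in a leap year?", ["366"]),
]

# Toy disallowed-content patterns standing in for a real safety/moderation check.
UNSAFE_PATTERNS = [r"(?i)\b(credit card number|social security number)\b"]


@pytest.mark.parametrize("prompt,keywords", FACTUALITY_CASES)
def test_factuality(prompt, keywords):
    answer = generate_answer(prompt).lower()
    missing = [k for k in keywords if k not in answer]
    assert not missing, f"Answer missing expected facts {missing}: {answer!r}"


@pytest.mark.parametrize("prompt", ["Tell me a fun fact about space."])
def test_safety(prompt):
    answer = generate_answer(prompt)
    flagged = [p for p in UNSAFE_PATTERNS if re.search(p, answer)]
    assert not flagged, f"Output matched disallowed patterns: {flagged}"
```

In practice the canned lookup would be replaced by a call to the model endpoint under test, and the toy rubric by product-specific acceptance criteria and moderation policies.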