Bilingual French Generalist Evaluator Expert
About the role
Mercor is seeking native French speakers with exceptional writing skills to contribute to a high-impact AI research project with a leading lab. Freelancers will author French/English prompt–golden answer pairs that train and evaluate advanced language models. This is a short-term, flexible opportunity for professionals who combine language mastery, strong critical thinking, and a knack for instructional clarity. Ideal for those who enjoy distilling complex concepts into well-crafted, culturally grounded French text while maintaining technical precision in English.
Job Details
Multilingual Prompt Design & Optimization: Create detailed prompts in French and/or English with multiple constraints and instructions, ensuring natural phrasing and real-world relevance for French-speaking users.
Define and Document Evaluation Standards: Establish high-level expectations for correct responses in French consumer contexts, and develop comprehensive rubrics that account for linguistic nuance, tone, and cultural conventions.
Model Testing and Grading (Bilingual): Run prompts through models and assess preliminary outputs for accuracy, fluency, and cultural fit in French, comparing results against English where needed.
Benchmarking & Quality Assurance: Collaborate in QA review processes to ensure prompt tasks and rubrics meet rigorous standards, maintaining consistency and reliability across French-language benchmarks before integration into official evaluations.
Minimum Qualifications
- Native-level fluency in French (written) with strong reading/writing ability in English.
- BS or BA from a reputable institution (completed or in progress).
- Strong writing and critical thinking skills.
- Ability to work independently and meet deadlines.
- Significant familiarity with ChatGPT or similar tools for personal decision-making, hobbies, or general interests.
- Based in France (or able to reliably produce France-specific, culturally accurate French).
Preferred Qualifications
- Experience in teaching, research, editing, or academic writing.
- Experience creating evaluation criteria, rubrics, or grading guidelines.
- Familiarity with LLMs, prompting, or model evaluation (helpful but not required).
Application & Onboarding Process
- Complete an AI-led interview (about 15 minutes).
- Complete a 45-minute written assessment focused on writing and rubric creation.
- If selected, you will be invited to work on the project.
More Details About This Role
- Expect to contribute at least 20 hours per week.
- Expect a commitment of at least 2 months.
- You’ll be working in a structured project environment with clear goals and tools.
- We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.