
Research Engineer, Multimodal Safety

OpenAI
$310K - $460K
medical insurance, dental insurance, vision insurance, parental leave, paid holidays
United States, California, San Francisco
Apr 22, 2025

About the Team

Our team is dedicated to shaping the future of artificial intelligence by equipping ChatGPT with the ability to hear, see, speak, and create visually compelling images, transforming how people interact with AI in everyday life. We prioritize safety throughout the development process to ensure that our most advanced models can be safely deployed in real-world applications, ultimately benefiting society. This focus is central to OpenAI's mission of building and deploying safe AGI, and it reinforces a culture of trust and transparency.

About the Role

We are seeking a research engineer to pioneer innovative techniques that redefine safety, enhancing the comprehension and capabilities of our state-of-the-art multimodal foundation models. In this role, you will conduct rigorous safety assessments and develop methods, such as safety reward models and multimodal classifiers, to ensure our models are intrinsically compliant with safety protocols. You will also help with red teaming efforts to test the robustness of our models, collaborating closely with cross-functional teams, including safety and legal, to ensure our systems meet all safety standards and legal requirements.

The ideal candidate has a solid foundation in multimodal research and post-training techniques, with a passion for pushing boundaries and achieving tangible impact. Familiarity with large suites of metrics or human data pipelines is a plus. You should be adept at writing high-quality code, developing tools for model evaluation, and iteratively improving our metrics based on real-world feedback. Strong communication skills are essential to work effectively with both technical and non-technical stakeholders.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Build evaluation pipelines to assess risk along various axes, especially with multimodal inputs and outputs.

  • Implement risk mitigation techniques, such as safety reward models and reinforcement learning (RL)-based approaches.

  • Develop and refine multimodal moderation models to detect and mitigate known and emerging patterns of AI misuse and abuse.

  • Work with other safety teams within the company to iterate on our content policies to ensure effective prevention of harmful behavior.

  • Work with our human data team to conduct internal and external red teaming to examine the robustness of our harm prevention systems and identify areas for future improvement.

  • Write maintainable, efficient, and well-tested code as part of our evaluation libraries.

You might thrive in this role if you:

  • Are a collaborative team player - willing to do whatever it takes in a start-up environment.

  • Have experience working in complex technical environments.

  • Are passionate about bringing magical AI experiences to millions of users.

  • Enjoy diving into the subtle details of datasets and evaluations.

  • Have experience with multimodal research and post-training techniques.

  • Are very proficient in Python.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
