
Strategic Risk Analyst

OpenAI
$288K - $320K
medical insurance, dental insurance, vision insurance, parental leave, paid time off, paid holidays, 401(k), retirement plan
United States, California, San Francisco
Feb 28, 2026
About the team

The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analyzing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.

We are building a horizontal "radar" for AI abuse and strategic risk, correlating internal signals, external intelligence, and real-world events into clear, actionable priorities for OpenAI's safety and product decision-makers.

About the role

As a Strategic Risk Analyst, you will help develop and maintain our central view of strategic risk across OpenAI's products and platforms. You will synthesize internal abuse patterns, upstream and external intelligence, and product and conversational signals into decision-ready risk insights, recurring briefs, and practical prioritization inputs.

You will partner closely with investigators, engineers, and policy and trust and safety counterparts, as well as measurement and forecasting teammates, to translate messy signals into structured judgments (including assumptions and confidence), ranked priorities, and actionable recommendations. This is an opportunity to do high-leverage analysis in a fast-moving environment, where crisp thinking and communication directly shape safety decisions, mitigations, and product readiness.

In this role, you will
  • Monitor and analyze internal risk signals (abuse telemetry, investigations outputs, model and product signals) to identify trends, shifts in tactics, and new abuse patterns.

  • Conduct upstream and external scanning (OSINT, ecosystem developments, real-world events) and distill implications for OpenAI's products and threat landscape.

  • Identify and investigate harms and misuse across products and channels, turning messy signals into clear analytic findings.

  • Connect individual incidents into system-level narratives about actors, incentives, product design weaknesses, and cross-product spillover, pressure-testing hypotheses early.

  • Produce concise, decision-ready risk briefs and intelligence estimates with explicit assumptions, confidence levels, and what would change the assessment.

  • Convert analysis into clear, ranked priorities and actionable recommendations that product, safety, and policy teams can execute on.

  • Define and track key risk indicators and outcome metrics to evaluate whether mitigations are working and drive course corrections when needed.

  • Build early-warning and monitoring capabilities with data, engineering, and visualization partners, including dashboards that highlight leading indicators and unusual changes.

  • Contribute to product readiness and launch reviews; develop reusable playbooks, FAQs, and briefing materials that help teams respond consistently.

  • Drive cross-functional alignment by tailoring readouts to investigations, engineering, policy, trust and safety, and product stakeholders, and ensuring decisions and follow-ups are crisp.

You might thrive in this role if you have
  • Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence work.

  • Demonstrated ability to analyze complex online harms and AI-enabled misuse (e.g., harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues) and convert analysis into concrete, prioritized recommendations.

  • Strong analytical craft: you can identify weak signals, form hypotheses, test them quickly, state assumptions explicitly, and communicate confidence and uncertainty clearly.

  • Comfort working across qualitative and quantitative inputs, including (1) casework, incident reports, OSINT, product context, and policy frameworks, and (2) basic metrics and trends in partnership with data science (e.g., harm prevalence, severity profiles, exposure, escalation rates).

  • Strong adversarial and product intuition: you can anticipate how actors may adapt AI and creative tools for misuse, and evaluate how product mechanics, incentives, and UX decisions shape risk.

  • Experience designing and using risk frameworks and taxonomies (e.g., harm classification schemes, severity/likelihood matrices, prioritization models) to structure ambiguity and support decision-making.

  • Proven ability to work cross-functionally with product, engineering, data science, operations, legal, and policy teams, pushing for clarity on tradeoffs and driving follow-through on mitigation work.

  • Excellent written and verbal communication skills, including producing concise, executive-ready briefs and explaining sensitive, complex issues in grounded, concrete terms.

  • Comfort operating in fast-changing, ambiguous environments: you can prioritize under uncertainty, iterate quickly, and adjust as the product and threat landscape evolves.

  • A builder mindset: you like creating reusable workflows and artifacts (dashboards, playbooks, FAQs, briefing materials) and using modern tools, including OpenAI's, to scale rigorous analysis.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Compensation Range: $288K - $320K
