
AI Compute Architect, AI Hardware

Tesla Motors, Inc.
124,000 - 282,000 USD
paid holidays, flex time, 401(k)
United States, California, Palo Alto
Jan 22, 2026
What to Expect

The Tesla AI Hardware team is at the forefront of revolutionizing artificial intelligence through cutting-edge hardware innovation. Comprising brilliant engineers and visionaries, the team designs and develops advanced AI chips tailored to accelerate Tesla's machine learning capabilities. Their work powers the neural networks behind Full Self-Driving (FSD) and Tesla's humanoid robot, Optimus, pushing the boundaries of computational efficiency and performance. By creating custom silicon and optimized architectures, the team ensures Tesla remains a leader in AI-driven automotive and energy solutions, shaping a future where intelligent machines enhance human life.

As a key member of the Tesla AI hardware team, the AI/ML Compute Architect will drive the innovation and optimization of computing architectures tailored for artificial intelligence and machine learning applications. This role involves visionary hardware-software co-design, accelerator development, and distributed systems engineering to create efficient, scalable solutions that power cutting-edge AI workloads. The ideal candidate is a strategic thinker with proven expertise in balancing performance, power, and scalability, delivering simple yet high-impact implementations that accelerate Tesla's AI initiatives.


What You'll Do
  • Design and architect comprehensive end-to-end compute infrastructure for AI/ML pipelines, encompassing hardware specifications, system topologies, and custom accelerators
  • Collaborate closely with system architects, micro architects, IP vendors, and program management to guide SoC (System-on-Chip) development from initial concept through to production-ready implementation
  • Optimize the interplay between compute, storage, and interconnect to maximize throughput, minimize latency, and reduce energy consumption across training and inference scenarios
  • Partner with ML model designers, compiler engineers, and software developers to build intuitive tools, frameworks, and abstractions that streamline the deployment and scaling of AI/ML workloads
  • Lead performance modeling initiatives, simulating architecture and microarchitecture tradeoffs to inform design decisions and predict system behavior under real-world conditions
  • Evaluate emerging technologies, such as novel accelerators or interconnect fabrics, and prototype innovative architectures to anticipate and address evolving AI compute demands
  • Maintain expertise in the latest advancements in AI workloads, domain-specific languages (e.g., for ML optimization), computer architecture principles, and advanced simulation methodologies

What You'll Bring
  • Degree in Electrical Engineering, Computer Science, Computer Engineering, or a related field; or equivalent practical experience demonstrating exceptional ability
  • Deep knowledge of CPU, GPU, and ML accelerator microarchitectures, including their design principles and performance characteristics
  • Strong understanding of Large Language Models (LLMs), transformer-based architectures, and techniques for their training, inference, quantization, and optimization
  • Proficiency in analyzing physical design constraints, including power, performance, and area (PPA) tradeoffs in hardware systems
  • Exceptional problem-solving abilities, with a track record of dissecting complex technical challenges and devising innovative, practical solutions
  • Hands-on experience with deep learning frameworks like PyTorch, JAX, Pallas, or similar tools for model development and optimization
  • Excellent interpersonal and communication skills, enabling effective collaboration across diverse teams, from leadership to individual contributors
  • Prior experience in performance analysis, including the use of simulation frameworks (e.g., gem5, SST) to model and benchmark systems
  • Ability to work on-site at Tesla's Palo Alto office, contributing to a fast-paced, collaborative environment
  • Proficiency in programming languages such as C/C++ and Python, with applications in hardware simulation, modeling, or system-level software

Compensation and Benefits
Benefits

Along with competitive pay, as a full-time Tesla employee, you are eligible for the following benefits from day 1 of hire:

  • Medical plans, including options with a $0 payroll deduction
  • Family-building, fertility, adoption and surrogacy benefits
  • Dental (including orthodontic coverage) and vision plans, both have options with a $0 paycheck contribution
  • Company-paid Health Savings Account (HSA) contribution when enrolled in the High-Deductible medical plan with HSA
  • Healthcare and Dependent Care Flexible Spending Accounts (FSA)
  • 401(k) with employer match, Employee Stock Purchase Plans, and other financial benefits
  • Company-paid Basic Life and AD&D insurance
  • Short-term and long-term disability insurance (90 day waiting period)
  • Employee Assistance Program
  • Sick and Vacation time (Flex time for salary positions, Accrued hours for Hourly positions), and Paid Holidays
  • Back-up childcare and parenting support resources
  • Voluntary benefits, including critical illness, hospital indemnity, accident insurance, theft & legal services, and pet insurance
  • Weight Loss and Tobacco Cessation Programs
  • Tesla Babies program
  • Commuter benefits
  • Employee discounts and perks program
Expected Compensation

$124,000 - $282,000/annual salary + cash and stock awards + benefits

Pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. The total compensation package for this position may also include other elements dependent on the position offered. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.
