
Research Scientist, Trustworthy Learning under Uncertainty (TLU) - Large Behavior Models

Toyota Research Institute

176k - 264k USD/year

Office

Los Altos, CA

Full Time

At Toyota Research Institute (TRI), we’re on a mission to improve the quality of human life. We’re developing new tools and capabilities to amplify the human experience. To lead this transformative shift in mobility, we’ve built a world-class team in Automated Driving, Energy & Materials, Human-Centered AI, Human Interactive Driving, Large Behavior Models, and Robotics.
The Mission

To conduct cutting-edge research that will enable general-purpose robots to be reliably deployed at scale in human environments.
The Challenge

We envision a future where robots assist with household chores and cooking, aid the elderly in maintaining their independence, and enable people to spend more time on the activities they enjoy most. To achieve this, robots need to operate reliably in messy, unstructured environments. Recent years have witnessed a surge in the use of foundation models across application domains, particularly in robotics. These “large behavior models” (LBMs) are enhancing the ability of autonomous robots to perform complex tasks in open and interactive environments. TRI Robotics is at the forefront of this emerging field, applying insights from foundation models such as large-scale pre-training and generative deep learning. However, ensuring the reliability of LBMs for large-scale deployment in diverse operating conditions remains a challenge.
The Team

We aim to make progress on some of the hardest scientific challenges around the safe and effective use and development of machine learning algorithms within robotics. To this end, the research mission of the Trustworthy Learning under Uncertainty (TLU) team within the Robotics division is to enable the robust, reliable, and adaptive deployment of LBMs at scale in human environments.

To guarantee dependable deployment at scale in the years to come, we are dedicated to enhancing the trustworthiness of LBMs through three key principles: (i) ensuring objective assessment of policy performance (Rigorous Evaluation), (ii) improving the ability to detect and handle unknown situations and return to nominal performance (Failure Detection and Mitigation), and (iii) developing the capability to identify and adapt to new information (Active / Continual Learning).
Our team has deep cross-functional expertise across controls, uncertainty-aware ML, statistics, and robotics. We measure our success by algorithmic advancements to the state of the art and publication of these results in high-impact journals and conferences. We value contributions of reproducible and usable open-source software.
The Opportunity

We’re looking for a driven research scientist or research engineer with a strong background in embodied machine learning and a “make it happen” mentality. Specifically, we are looking for expertise in areas such as Policy Evaluation, Failure Detection and Mitigation, and Active Learning in the context of Large Behavior Models (LBMs) for robotic manipulation. Our topics of interest include, but are not limited to: Multi-Modal Foundation Models, Generative Modeling, Imitation Learning, Reinforcement Learning, Planning & Control, Statistics, Uncertainty Estimation, Out-of-Distribution Detection, Safety-Aware & Robust ML, (Inter)Active Learning, and Online / Continual Learning.
The ideal candidate is able to conduct research independently but also works well as part of a larger research team working at the cutting edge of robotics and machine learning. Experience with robots is preferred, particularly in the manipulation domain.
If our mission of robust, reliable, and adaptive deployment of LBMs at scale in human environments resonates with you, reach out by submitting an application!

Responsibilities

  • Work as part of a dynamic, closely-knit team conducting research on reliable, robust, and adaptive deployment of machine learning models in robot manipulation. 
  • Push the boundaries of knowledge and the state-of-the-art in Robotics and LBMs.
  • Contribute to cutting-edge development in the areas of: Rigorous Policy Evaluation, Failure Detection and Mitigation, and Active / Continual Learning.
  • Be a key member of the team and play a critical role in rapid progress measured by both the development of internal capabilities and high-impact external publication.
  • Collaborate with internal research scientists and engineers across the TLU team, Robotics division, TRI, and Toyota, as well as our partners at top academic research universities, such as MIT, Stanford, CMU, Columbia, USC, and Princeton.
  • Present results in verbal and written communications at international conferences, internally, and via open-source contributions to the community.

Qualifications

  • 4+ years of relevant industry experience or a Ph.D. in Machine Learning, Robotics, or related fields.
  • Passionate about large-scale challenges in ML grounded in physical systems, especially in the space of robotic manipulation.
  • Expertise in Multi-Modal Foundation Models, Generative Modeling, Imitation Learning, Reinforcement Learning, Planning & Control, Statistics, Uncertainty Estimation, Out-of-Distribution Detection, Safety-Aware & Robust ML, (Inter)Active Learning, and/or Online / Continual Learning.
  • A strong track record of publication at high-impact conferences/journals (e.g., CoRL, ICLR, NeurIPS, ICML, UAI, AISTATS, AAAI, TMLR, RSS, ICRA, IROS, RA-L, T-RO, CDC, L4DC, etc.) on some of the aforementioned topics.
  • Proficiency with one or more coding languages and systems, preferably Python, Unix, and a Deep Learning framework (e.g., PyTorch). 
  • Ability to collaborate with other researchers and engineers of the TLU team, and, more broadly, the Robotics division to invent and develop interesting research ideas.
  • A reliable teammate who loves to think big, go deeper, and deliver with integrity.

Bonus Qualifications

  • Some familiarity with robots and the challenges inherent in conducting research on physical hardware platforms.
  • Familiarity with data pipelines, model serving and optimization, cloud training, and dataset management.
The pay range for this position at commencement of employment is expected to be between $176,000 and $264,000/year for California-based roles; however, base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. Note that TRI offers a generous benefits package (including 401(k) eligibility and various paid time off benefits, such as vacation, sick time, and parental leave) and an annual cash bonus structure. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.
Please reference this Candidate Privacy Notice, which describes the categories of personal information that we collect from individuals who inquire about and/or apply to work for Toyota Research Institute, Inc. or its subsidiaries, including Toyota A.I. Ventures GP, L.P., and the purposes for which we use such personal information.
TRI is fueled by a diverse and inclusive community of people with unique backgrounds, education and life experiences. We are dedicated to fostering an innovative and collaborative environment by living the values that are an essential part of our culture. We believe diversity makes us stronger and are proud to provide Equal Employment Opportunity for all, without regard to an applicant’s race, color, creed, gender, gender identity or expression, sexual orientation, national origin, age, physical or mental disability, medical condition, religion, marital status, genetic information, veteran status, or any other status protected under federal, state or local laws.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.
