Kate Rakelly

I am currently self-employed, pursuing personal projects. Previously, I was a Research Scientist on the AI Research Team at Cruise, where I worked on prediction and planning for simulation testing of autonomous vehicles. In 2021, I spent time as a Research Scientist Intern at DeepMind.

I received my PhD in AI from UC Berkeley in 2021, where I was advised by Sergey Levine as part of BAIR. During my PhD I also worked with Alyosha Efros and Trevor Darrell.

I completed my Bachelor's in EECS at UC Berkeley, where I did undergraduate research with Shiry Ginosar and Alyosha Efros in computer vision as well as Insoon Yang and Claire Tomlin in control.

Email  /  CV  /  Google Scholar  /  LinkedIn

Research

In recent years, AI has excelled at training highly specialized agents to solve specific problems, from mastering games to achieving impressive classification accuracy on benchmark datasets. I view broadening the skills of these specialist agents as an exciting frontier. To achieve this, agents must be able to re-use knowledge gained in one setting and apply it in another. To this end, my PhD thesis research focused on representation learning and meta-learning for reinforcement learning problems. In the first few years of my PhD, I worked in computer vision on image segmentation. I got my start in machine learning as an undergraduate, contributing to a project that applied semi-supervised learning techniques to historical photographs to discover trends in fashion and hairstyle over the past century.

Which Mutual-Information Representation Learning Objectives are Sufficient for Control?
Kate Rakelly, Abhishek Gupta, Carlos Florensa, Sergey Levine
NeurIPS, 2021

Unsupervised representation learning techniques can be used to extract compact state representations from observations, making the RL problem easier and more tractable. Recently, contrastive learning methods that learn lossy representations and can be interpreted as maximizing mutual information objectives have proven effective and popular. However, are the learned representations guaranteed to be capable of learning and representing the optimal policy? In other words, might they inadvertently discard information needed for control? We present a framework for analysis and seek to answer this question for several popular and representative objectives.

MELD: Meta-Reinforcement Learning from Images via Latent State Models
Tony Z. Zhao*, Anusha Nagabandi*, Kate Rakelly*, Chelsea Finn, Sergey Levine
CoRL, 2020
Code, Website, Video

We leverage the perspective of meta-learning as task inference to show that latent state models can also perform meta-learning given an appropriately defined observation space. Building on this insight, we develop meta-RL with latent dynamics (MELD), an algorithm for meta-RL from images that performs inference in a latent state model to quickly acquire new skills given observations and rewards. We demonstrate that MELD enables the WidowX robotic arm to quickly insert an Ethernet cable into the correct port at a novel location and orientation given only a task completion reward.

Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables
Kate Rakelly*, Aurick Zhou*, Deirdre Quillen, Chelsea Finn, Sergey Levine
ICML, 2019
Code, Blog, Slides

We leverage off-policy learning and a probabilistic belief over the task to make meta-RL 20-100x more sample-efficient. PEARL performs online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration during adaptation. Unlike prior approaches, our method integrates easily with existing off-policy RL algorithms, greatly improving meta-training sample efficiency.
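The filter-then-act idea behind posterior sampling can be sketched in a few lines. This is a toy stand-in, not PEARL itself: the latent task variable is a single scalar with a conjugate Gaussian belief, and all names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

mu_true, obs_var = 1.5, 0.09  # hidden task parameter and observation noise (toy values)
mean, var = 0.0, 4.0          # Gaussian prior belief over the latent task variable z

for _ in range(50):
    z = rng.normal(mean, np.sqrt(var))  # posterior sampling: commit to one task hypothesis
    action = z                          # act as if z were the true task
    obs = mu_true + rng.normal(0.0, np.sqrt(obs_var))  # noisy evidence about the task
    # Online conjugate-Gaussian filtering: fold the new evidence into the belief.
    precision = 1.0 / var + 1.0 / obs_var
    mean = (mean / var + obs / obs_var) / precision
    var = 1.0 / precision
```

Because actions are chosen for a sampled hypothesis rather than the posterior mean, early behavior is diverse (exploration) and collapses to near-optimal behavior as the belief sharpens.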

Few-Shot Segmentation Propagation with Guided Networks
Kate Rakelly*, Evan Shelhamer*, Trevor Darrell, Alyosha Efros, Sergey Levine
Preprint, 2018
Code

Few-shot learning meets segmentation: given a few labeled pixels from few images, segment new images accordingly. Our guided network extracts a latent task representation from any amount of supervision and is optimized end-to-end for fast, accurate segmentation of new inputs. We show state-of-the-art results for speed and amount of supervision on three segmentation problems that are usually treated separately: interactive, semantic, and video object segmentation. Our method is fast enough to perform real-time interactive video object segmentation.

Clockwork Convnets for Video Semantic Segmentation
Evan Shelhamer*, Kate Rakelly*, Judy Hoffman*, Trevor Darrell
Video Semantic Segmentation Workshop at the European Conference on Computer Vision (ECCV), 2016
Code

A fast video recognition framework that relies on two key observations: 1) while pixels may change rapidly from frame to frame, the semantic content of a scene evolves more slowly, and 2) execution can be viewed as an aspect of architecture, yielding purpose-fit computation schedules for networks. We define a novel family of "clockwork" convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability.
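The scheduling idea can be sketched independently of any particular network. Below is a toy fixed-clock schedule for a three-stage pipeline; the update periods are illustrative, not the paper's.

```python
rates = [1, 2, 4]           # stage i is recomputed every rates[i] frames (assumed periods)
cache = [None, None, None]  # cached output per stage
executed = []               # which stages actually ran on each frame

for frame in range(8):
    ran = []
    for stage, period in enumerate(rates):
        if frame % period == 0:
            cache[stage] = (frame, stage)  # stand-in for recomputing stage features
            ran.append(stage)
        # otherwise: reuse cache[stage] from an earlier frame
    executed.append(ran)

total_stage_runs = sum(len(r) for r in executed)  # 14 here, vs. 24 for running every stage every frame
```

Deeper, semantically stable stages get the slowest clocks, which is where most of the computational savings come from; an adaptive variant would fire a stage's clock only when frame-to-frame change exceeds a threshold.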

A Century of Portraits: A Visual Historical Record of American High School Yearbooks
Shiry Ginosar, Kate Rakelly, Sarah Sachs, Brian Yin, Alyosha Efros
IEEE Transactions on Computational Imaging, 2017
Extreme Imaging Workshop, International Conference on Computer Vision (ICCV), 2015
Website
Portrait Dating Code

What were the tell-tale fashions of the 1950s? Did everyone in the 70s really have long hair? In this age of selfies, is it true that we smile in photos more than we used to? And can a CNN pick up on all these trends to accurately date an old portrait? We address these questions and many others using data-driven semi-supervised learning techniques on a novel dataset of American high school yearbook photos from the past 100 years.

Talks

An Inference Perspective on Meta-Reinforcement Learning - An invited talk at the NeurIPS 2020 Workshop on Meta-Learning. I make a case for why viewing meta-RL as task inference is a fruitful direction for future research in meta-RL.

An Inference Perspective on Meta-Learning - An invited talk at the Sheffield Seminar, a weekly seminar of the Machine Learning group at the University of Sheffield. I talk about how meta-learning as inference leads to effective algorithms for few-shot learning not just in RL, but also in image segmentation.

An Overview of Meta-Reinforcement Learning - A guest lecture presenting an overview of meta-RL in the Fall 2019 offering of CS294 at UC Berkeley.

Exploration in Meta-RL - A guest lecture looking at the problem of exploration in meta-RL in the Fall 2019 offering of CS330 at Stanford University.

Efficient Meta-RL with Probabilistic Context Embeddings - A contributed talk at the Workshop on Structure and Priors in RL at ICLR 2019.

Writing

Which Mutual Information Representation Learning Objectives are Sufficient for Control? - A post on the BAIR blog about our work analyzing representation learning objectives for RL.

Learning to Learn with Probabilistic Task Embeddings - A post on the BAIR blog about our work on off-policy meta-RL.

Code

A collection of collateral damage from doing research that might be useful to others.

pytorch-maml - a PyTorch implementation of Model-Agnostic Meta-Learning (MAML) for supervised learning.
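The inner/outer loop structure of MAML is easy to sketch. Below is a minimal first-order variant on toy 1-D regression tasks in plain NumPy (not the repo's PyTorch code); the task distribution and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each toy task is y = a * x with a random slope a; the model is y_hat = w * x.
    a = rng.uniform(0.5, 2.0)
    x_support, x_query = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
    return (x_support, a * x_support), (x_query, a * x_query)

def grad(w, x, y):
    # Gradient of the mean squared error: d/dw mean((w*x - y)^2).
    return 2.0 * np.mean((w * x - y) * x)

def loss(w, x, y):
    return np.mean((w * x - y) ** 2)

w, alpha, beta = 0.0, 0.5, 0.1  # meta-parameter, inner and outer learning rates
for _ in range(2000):
    (xs, ys), (xq, yq) = sample_task()
    w_adapted = w - alpha * grad(w, xs, ys)  # inner loop: one adaptation step
    w -= beta * grad(w_adapted, xq, yq)      # outer loop: first-order meta-update

# Adapt to a held-out task: one gradient step should reduce the query-set loss.
(xs, ys), (xq, yq) = sample_task()
w_new = w - alpha * grad(w, xs, ys)
pre_loss, post_loss = loss(w, xq, yq), loss(w_new, xq, yq)
```

Full MAML differentiates through the inner gradient step (second-order); the first-order approximation above simply applies the adapted gradient to the meta-parameters, which is cheaper and often works comparably.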

Teaching

CS294-112 - Fall 2018 (Head Teaching Assistant)

Deep Reinforcement Learning is a special topics course covering modern deep reinforcement learning techniques.

CS70 - Summer 2014 (Teaching Assistant)

Discrete Mathematics for Computer Science covers proof techniques, modular arithmetic, polynomials, and probability.

EE40 - Summer 2013 (Teaching Assistant)

Introduction to Circuits covers analyzing, designing, and building electronic circuits using op amps and passive components. (Note this class along with EE20 have been replaced by the EE16A/B series as of Fall 2015.)

Languages

English (fluent), Spanish (proficient)

Other skills and interests

I am an amateur vocalist and guitar player, and have recently dabbled in writing songs! I'm particularly interested in folk, soft rock, and Latin pop. I love listening to all kinds of live music, from symphony orchestra to singer-songwriters to dance bands.

I love to be outside, near the ocean or in the mountains. Rock climbing and hiking are some of my favorite ways to enjoy the outdoors.

I love good food, wine, coffee, and tea!


(this guy makes a nice website)