Our team at Petuum is made up of incredibly talented people. This series will feature the engineers, managers, and creators that keep our company moving forward and make us proud of the work we do together.
Xiaodan Liang works as a Project Scientist in the Machine Learning Department at Carnegie Mellon University, and as a research scientist at Petuum. She researches artificial neural networks, data mining, and artificial intelligence, focusing on developing structured machine learning techniques for computer vision tasks. Specifically, she investigates how to exploit and incorporate human common sense in building advanced artificial intelligence systems.
Xiaodan was recently awarded the 2017 CCF Best Doctoral Dissertation (one of ten given to Computer Science Ph.D.s) and the 2017 ACM China Best Doctoral Dissertation (one of two given to Computer Science Ph.D.s) for her work on deep structural models for fine-grained visual parsing. She has also worked at Adobe Research and Snapchat Research and was a visiting scholar in the Department of EECS at the National University of Singapore. She obtained her Ph.D. from the School of Data and Computer Science at Sun Yat-sen University.
Why did you join Petuum?
For my entire research career, I’ve been working on computer vision problems adjacent to those Petuum is tackling. Before this, I worked on several object detection and segmentation projects, such as human parsing and semantic object parsing.
For the past three months, I’ve been working with Petuum’s CEO and founder, Dr. Eric Xing, on a research project. Our work has been exciting and successful, so I joined Petuum this year.
What do you do at Petuum?
Right now, my team is focused on incorporating human knowledge into deep learning networks. Because deep learning is a black box, we’re trying to expose human-friendly structures for concepts, like common sense, and incorporate them into deep learning architectures. On the application side, we’re working to deploy these structured deep learning algorithms in a project that addresses long-standing research problems in a novel way.
What’s most exciting about your work?
I’m most excited about getting to work with Petuum’s team of amazing scientists! There is a lot of research focused on the domain shift problem (the discrepancy between real and virtual domains), and we’re looking into new training models to address it. We want our model to automatically learn critical cues for specific controls, which is a very different approach from more traditional rule-based learning algorithms.
Our team has so much experience and expertise — I’m sure we’ll be able to develop more complex systems to tackle this problem.
What’s been most challenging in your work so far?
Unfortunately, there is little prior research to guide the work we’re doing right now. We’re exploring a very new direction, and the stakes are high: if we’re going to trust deep learning with human lives, we need to know what’s happening inside these black-box systems. Making deep learning systems interpretable is very difficult, and the idea of incorporating human common sense into network architectures remains under-explored in both academia and industry.
Do you have any advice for people interested in pursuing similar work?
It’s key to choose research directions that will stay promising in the future. I think this field is moving toward drawing on mammalian cognition to figure out how to get computers to learn from data automatically, without annotations, especially in computer vision.
If you are a junior Ph.D. in this field or are researching similar problems, I highly recommend looking into human cognition and learning as much as you can about how human knowledge is formulated.
What do you love to do outside of work?
I love to travel and experience different cultures and countries — they inspire me to do better both in life and in work. Most recently, I visited South Korea and Chile. In South Korea, I got to see Seongsan Ilchulbong and hike to the hilltop of Hallasan National Park!
We’re hiring! We’re growing quickly and looking for highly trained and talented technical staff, including engineers, architects, and more. Contact us at firstname.lastname@example.org.