Source: Microsoft Research
Episode 52, November 28, 2018
Dr. Christopher Bishop is quite a fellow. Literally. Fellow of the Royal Academy of Engineering. Fellow of Darwin College in Cambridge, England. Fellow of the Royal Society of Edinburgh. Fellow of The Royal Society. Microsoft Technical Fellow. And one of the nicest fellows you’re likely to meet! He’s also Director of the Microsoft Research lab in Cambridge, where he oversees a world-class portfolio of research and development endeavors in machine learning and AI.
Today, Dr. Bishop talks about the past, present and future of AI research, explains the No Free Lunch Theorem, talks about the modern view of machine learning (or how he learned to stop worrying and love uncertainty), and tells how the real excitement in the next few years will be the growth in our ability to create new technologies not by programming machines but by teaching them to learn.
Chris Bishop: The amount of data in the world is – guess what – it’s growing exponentially! In fact, it’s doubling about every couple of years or so. And that’s set to continue for a long, long time to come as we instrument our cities, as we have the Internet of Things, as we instrument our bodies, as we gather more data. And that will fuel this revolution in machine learning.
Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.
Host: Dr. Christopher Bishop is quite a fellow. Literally. Fellow of the Royal Academy of Engineering. Fellow of Darwin College in Cambridge, England. Fellow of the Royal Society of Edinburgh. Fellow of The Royal Society. Microsoft Technical Fellow. And one of the nicest fellows you’re likely to meet! He’s also Director of the Microsoft Research lab in Cambridge, where he oversees a world-class portfolio of research and development endeavors in machine learning and AI.
Today, Dr. Bishop talks about the past, present and future of AI research, explains the No Free Lunch Theorem, talks about the modern view of machine learning (or how he learned to stop worrying and love uncertainty), and tells how the real excitement in the next few years will be the growth in our ability to create new technologies, not by programming machines, but by teaching them to learn. That and much more on this episode of the Microsoft Research Podcast.
Host: Chris Bishop, welcome to the podcast.
Chris Bishop: It’s great to be here.
Host: You are quite a fellow, literally. In broad strokes, what gets a fellow like you up in the morning? What does a day in the life of Chris Bishop look like?
Chris Bishop: Great question. If I had to summarize my work life in one word, I’d say it’s varied. I have to do many, many different things. Of course, think about the strategy for the research lab, think about research directions. Recruitment’s a very big part of what I do, really finding great talent. And then looking after the career development of people that we’ve hired, nurturing that great talent. I think a lot about inclusion and diversity. But also thinking about our external visibility, giving presentations, engaging with universities, engaging with customers. But also scanning the horizon thinking about new opportunities for us. So, no two days are ever the same.
Host: Well, as the lab director of MSR Cambridge in Cambridge, England, not to be confused with Cambridge, Massachusetts over here…
Chris Bishop: Correct!
Host: …give our listeners a sense of the vision for the work in your lab and what constitutes what you call thought-leadership in AI today?
Chris Bishop: Yeah, that’s a great question. The field of AI is really evolving very rapidly, and we have to think about what the implications are, not just a few years ahead, but even further beyond. I think one thing that really characterizes the MSR Cambridge Research lab is that we have a very broad and multi-disciplinary approach. So, we have people who are real-world experts in the algorithms of machine learning and engineers who can turn those algorithms into scalable technology. But we also have to think about what I call the sort of penumbra of research challenges that sit around the algorithms. Issues to do with fairness and transparency, issues to do with adversaries, because if something only exists as a publication, nobody is going to attack it. But if you put out a service to millions of people, then there will be bad actors in the world who will attack it in various ways. And so, we now have to think about AI and machine learning in this much broader context of large scale, real-world applications and that requires people from a whole range of disciplines. We need designers, we need social scientists, a whole spectrum of different talent. And then those people have to come together and collaborate. I think that’s quite a special feature of the MSR Cambridge lab.
Host: So, on the work that’s happening in machine learning, how are you pushing the boundaries when it comes to developing and furthering the science of machine learning and artificial intelligence?
Chris Bishop: So, really, we take a very bottom-up view in that we hire very smart, creative people and give them a lot of flexibility to go and explore the many different frontiers of machine learning. But part of it, too, comes back to this multi-disciplinary approach. So, one of the areas, for example, that we’re very interested in is confidential machine learning. Machine learning, of course, is fueled by data, and we know that data is precious, we need to protect it. It may be very personal data if it’s healthcare data, for example. And so, how can we make sure that the technology has access to the data, but at the same time, preserve the appropriate levels of privacy? And then, what technologies can we create to support that? And so, we need to bring together people who understand the algorithms with people who understand security and privacy and the engineering skills, as well, to create viable, scalable technologies that we can actually use in the real world.
Host: Let’s talk about you for a second. You started out in physics and then moved to computer science. What prompted that move and how do you see the two fields complementing each other in what’s going on in computer science today?
Chris Bishop: Right, yes. I started out in physics, as you say because, as a teenager, I was just fascinated by quantum mechanics and relativity and it was a very exciting time in physics. So, I actually did a PhD in Edinburgh with David Wallace and Peter Higgs in quantum field theory. And after that I wanted to do something more applied and I worked on the fusion program. So, that’s the challenge of heating hydrogen up to hundreds of millions of degrees and getting it to fuse into helium, much as happens in the sun. And one day we’ll crack that and that will give humanity unlimited amounts of clean energy. But that’s still a long way off. But, when I was working on that, of course, I developed a lot of expertise in certain kinds of mathematics, in particular, linear algebra and continuous maths, multivariate calculus and probability. And it turns out that those are just the kinds of maths skills you need for machine learning. In fact, much more so than traditional computer science because traditional computer science is really based on logic and determinism, whereas machine learning requires continuous maths and dealing with uncertainty and so, physics actually turns out to be a pretty good starting point for machine learning. In terms of how I made the switch, that’s actually quite interesting. I was working on the fusion program, and Hinton published his paper on neural nets, on back-propagation, and it got quite a bit of attention and I thought, this sounds pretty interesting. I’d never really been interested in traditional computing in the sense of telling a machine what to do step-by-step, and how to do it step-by-step. But the idea that a machine could learn from experience so that it could acquire its own intelligence was just incredibly fascinating. And so, when this research was published, I got very interested. I persuaded my boss to buy me a workstation and I taught myself how to program.
Got some software and started to play about with these neural networks. And the first thing I did with the neural nets was to apply them to data from the fusion program because I was working down at Oxford on the world’s biggest fusion experiment. In its day, it was big data: very high frequency, high spatial resolution diagnostics, huge amounts of data pouring off. Lots of interesting data analysis problems. And I found myself, almost uniquely in that field, in possession of this rather flexible non-linear technique of neural nets. And so, I published a lot of papers, solved a lot of problems in that space, had great fun for a couple of years, and then decided this field was so interesting that I actually wanted to move out of physics and actually do machine learning full-time. That was nearly thirty years ago now.
Host: Well, let’s talk about MSR Cambridge. You’ve been there from the beginning. And you said at one point that you’ve noted, over the years, that progress in artificial intelligence and machine learning has been both much slower and much faster than you expected.
Chris Bishop: Right.
Host: What do you mean by that?
Chris Bishop: Okay, so, I think the underlying reality is that progress has actually been relatively steady and quite good. But the perception of it is that nothing much happened for a very long time and then suddenly it all took off. And I think what really happened is that there were some particular developments, specifically around multi-layered neural nets, so-called deep learning, which allowed ideas that had really been around for quite a long time to improve in accuracy to the point where they became of great practical value. And this was noticed, particularly, in speech recognition and also in certain image analysis problems, detecting objects in images for example. And the qualitative improvement in performance crossed the threshold at which these techniques became of great practical relevance. And so, we went from a world where I would have said insufficient attention was being paid to machine learning. It had a lot of potential and yet it was sort of being ignored and it was rather frustrating. And now we’re almost in the opposite situation where there’s this huge amount of attention and excitement around it. We’re kind of running just to keep up.
Host: The middle child is getting attention, finally.
Chris Bishop: That’s right.
Host: You once said that being a researcher is better than being a rock star. I don’t know if you remember that, but… I do…
Chris Bishop: Did I say that?
Host: You did. It was funny. I started laughing. So, what do you know that Mick Jagger doesn’t, and why do you feel research is so rewarding?
Chris Bishop: Well, I find it strange that I said that as I’m not entirely sure what it’s like to be a rock star. But I can tell you what’s great about being a researcher and it’s the fact that every day is new. By definition, in research, you always do new things. There’s no point being the second person to discover something. And so, you have that sort of endless variety that will last an entire career and I always think it’s just wonderful to be in a field where you’re always doing new things, it’s always fresh rather than doing the same thing over and over again. So, for me, that’s just one of the great things about research. And also, another great aspect about being a researcher, as a career, is that there’s this ocean of possibilities. I think I’ve been quite lucky. I worked in very abstract theoretical physics for my PhD, then worked in a very applied area, fusion research, for a while, then switched into machine learning, and within machine learning, I’ve been interested, for a while, in algorithms. Then I’ve shifted my interest more to applications. There’s this infinite ocean of possibility. And so, this is why I think people don’t retire because why would you? It’s just so interesting. There’s always more to be done; always new things to explore.
Host: I would propose that every field is searching for its own version of a silver bullet or a grand theory of everything. And for AI, some people have suggested that there might be a universal algorithm for machine learning. Should researchers be spending any time on that? And if not, what should we be looking for instead?
Chris Bishop: That’s a really interesting question. So, there’s this theorem in machine learning. It has a wonderful name. It’s called the No Free Lunch theorem. I love it. It basically says that, if you look at all possible problems that you might apply machine learning to, then, on average, any algorithm is just as good or bad as any other. In other words, the theorem says there cannot be a single universal machine learning algorithm that will solve all problems. Now, we have to be a little bit careful because it’s a piece of abstract theory. So, it’s correct, but we’ve got to be careful when we interpret it because it may be that there are certain algorithms that are very good at solving all the kinds of problems that we’re going to encounter in the real world. So, it may be that techniques like deep neural networks are really quite generic and broadly applicable. However, what the No Free Lunch theorem does teach us is that you cannot learn just from data. You learn from data in the context of assumptions. Or they’re sometimes called prior knowledge or constraints. The terminology varies. But it’s data in the context of a model, or a set of assumptions, that allows you to learn or allows the machine to learn. And those assumptions are dependent on the particular problem you’re solving. So, what it means is that instead of searching for the single universal algorithm that will solve all problems, instead you need to think about the particular problem that you are trying to solve and finding the best technique and that involves thinking about the assumptions you want to make or the domain knowledge.
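The No Free Lunch theorem can be made concrete with a small sketch (a toy four-point domain and two hypothetical learners, invented here for illustration, not anything from the episode): averaged over every possible labeling of the unseen points, any learner, however sensible or perverse, achieves the same off-training-set accuracy.

```python
from itertools import product

domain = [0, 1, 2, 3]
train_x, test_x = [0, 1], [2, 3]

def majority_learner(train_labels, x):
    # predicts the majority training label (ties -> 0)
    return int(sum(train_labels) > len(train_labels) / 2)

def contrarian_learner(train_labels, x):
    # deliberately predicts the opposite of the majority learner
    return 1 - majority_learner(train_labels, x)

def average_test_accuracy(learner):
    # average accuracy on the unseen points, over all 2^4 = 16 possible
    # target functions on the four-point domain
    functions = list(product([0, 1], repeat=len(domain)))
    total = 0.0
    for f in functions:
        train_labels = [f[x] for x in train_x]
        correct = sum(learner(train_labels, x) == f[x] for x in test_x)
        total += correct / len(test_x)
    return total / len(functions)

print(average_test_accuracy(majority_learner))    # 0.5
print(average_test_accuracy(contrarian_learner))  # 0.5
```

Both learners average exactly 0.5 on the unseen points. It is the assumptions baked into an algorithm, matched against the actual problem, not the algorithm alone, that determine whether it works in practice.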
Host: Well, let’s talk about the concept of uncertainty for a minute. I had a researcher on the show a couple of weeks ago who talked about how sometimes we need to embrace the idea of “well-calibrated uncertainty” in complex autonomous systems. So, how might we work to quantify uncertainty?
Chris Bishop: This is really fundamental to machine learning. I call it the “modern view of machine learning.” So, traditionally, we thought of machine learning as a kind of a function that you fitted to some data, like fitting a curve through data so that you can make predictions, where you tune up the parameters so that the neural net gets it right on the training set, and you hope that it works on the test set. I think there’s a broader view of machine learning in which we say that what’s really happening is the machine is building a model of the world, and that model of the world is quantified through uncertainty and the unique calculus of uncertainty is probability. And so, the machine is built on probabilities, and its understanding of the world carries uncertainty. But as it sees more data, that uncertainty typically will reduce, so it becomes less uncertain. In other words, it’s learned something. It’s learned from the data. And that notion is all captured in a very elegant piece of mathematics called Bayes’ Theorem. And so, I view Bayes’ Theorem, and the quantification of uncertainty through probabilities, as being the bedrock of machine learning. And from that, everything else can follow. So, I agree. I think it’s totally fundamental to the field.
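The behavior Dr. Bishop describes, uncertainty shrinking as data arrives, can be sketched with the textbook Beta-Bernoulli example (a coin-flipping model invented here for illustration, not anything from the episode): Bayes’ Theorem turns a prior into a posterior, and the posterior variance falls as more flips are observed.

```python
def beta_update(a, b, data):
    # Bayes' theorem for a Bernoulli likelihood with a Beta(a, b) prior:
    # the posterior is Beta(a + heads, b + tails)
    heads = sum(data)
    tails = len(data) - heads
    return a + heads, b + tails

def beta_variance(a, b):
    # variance of a Beta(a, b) distribution
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

prior = (1, 1)  # uniform prior: maximal uncertainty about the coin's bias
posterior_small = beta_update(*prior, [1, 0, 1])       # after 3 coin flips
posterior_large = beta_update(*prior, [1, 0, 1] * 10)  # after 30 coin flips

print(beta_variance(*prior))            # ~0.083: very uncertain
print(beta_variance(*posterior_small))  # smaller: some learning has happened
print(beta_variance(*posterior_large))  # smaller still: more data, less uncertainty
```

The machine’s “model of the world” here is just a probability distribution over the coin’s bias, and learning is nothing more than Bayes’ Theorem narrowing that distribution.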
Host: So, you’ve used a phrase “model-based machine learning.” Is that what you are talking about here?
Chris Bishop: Right. The idea of model-based machine learning is really taking that idea of prior knowledge, or constraints, domain knowledge, and making that a first-class citizen. Think of it less as being a specific technique. Think of it more as a viewpoint, a way of understanding what machine learning is about. So, imagine you’re a newcomer to the field of machine learning. The first thing you discover is that there are thousands and thousands and thousands of papers with hundreds or thousands of different algorithms with lots of different names. It’s like you’re at sea without a compass. Do you have to read all those papers? Do you have to understand them all? You want to solve some practical problem, but you can’t possibly be familiar with all of the different techniques. So, what are you going to do? Well, instead you can adopt this model-based viewpoint. And the model-based viewpoint says, think about the assumptions that you want to make in your machine learning solution, and actually write them down, be explicit about it, and then translate those assumptions into a model. So, the model is just a mathematical representation of your assumptions. And you can then combine that with the data and then when you turn the crank, the machine will learn. And if you’ve made good assumptions, the machine will learn very efficiently from the data. So, if you’re able to make strong assumptions and they’re correct, you get a lot more information out of the same amount of data. The risk, though, is that if you make a strong assumption and it’s wrong, then not only can the machine make bad predictions, but it could be very confident about those bad predictions, so you have to be careful.
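A minimal sketch of that trade-off (the data and the two models are invented for illustration): a model whose assumption matches the world learns well from a handful of points, while a model with a wrong assumption still fits its training data as best it can but is confidently wrong on new inputs.

```python
def fit_line(xs, ys):
    # assumption: y is a linear function of x (two parameters)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def fit_constant(xs, ys):
    # assumption: y does not depend on x at all (one parameter)
    return sum(ys) / len(ys)

train_x = [0.0, 1.0, 2.0]
train_y = [2 * x + 1 for x in train_x]  # the world really is linear here

slope, intercept = fit_line(train_x, train_y)
const = fit_constant(train_x, train_y)

# Only the model whose assumption matches the world predicts well
# at a point far from the training data.
x_new = 10.0
print(slope * x_new + intercept)  # 21.0: correct assumption, correct prediction
print(const)                      # 3.0: wrong assumption, confidently wrong
```

The constant model reports a single definite answer with no hint that its assumption has failed, which is exactly the danger of strong-but-wrong assumptions described above.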
Host: So, we’re in the middle of what many people are calling an AI revolution. And you’ve suggested that we’re seeing machine learning usher in a sort of Moore’s Law of data, even as we see Moore’s Law in the traditional sense sort of on the wane. But it’s changing the way that we write software. Can you tell us a bit more about what we’re seeing in this revolution?
Chris Bishop: Yeah, I’d love to talk about this. It’s really a viewpoint on all of the hype and excitement that we have right now around artificial intelligence. So, the term artificial intelligence, for me, refers to that grand aspiration, that very long-term goal of producing human-level intelligence and beyond. We’re a long way from that. So, you might ask, well, does that mean that all this excitement around AI is just misplaced or is just way too early, that it’s just a hype bubble, it will go away? And I say not. I think there is something happening which is very profound and very transformational. And it’s not to do with artificial intelligence, it’s to do with a revolution in the way we create technology. So, I can explain that, by an analogy, with hardware. So, you need hardware and you need software to build technology. And the hardware, if you think about computers over the years, all the time, hardware is getting faster and cheaper and better. And that progression, though, is not linear. It was sort of linear up until a certain moment when a particular technology was created called photolithography. And that allows us to print transistors. So, instead of manufacturing the components of a computer and then assembling them, instead you print the whole circuit in one go on a silicon chip. And that was profound because it switched the progression to exponential and that’s Moore’s Law. And everything else follows: the existence of Microsoft, the fact that you have a super computer in your pocket, all follows from Moore’s Law. So, I think we’re seeing, in the so-called AI revolution, which is really a machine learning revolution, a similar, singular moment in the history of software. So, go back to the beginnings of software. Ada Lovelace – she was the world’s first software developer and she wrote software for Babbage’s analytical engine – she had to specify exactly what every gear wheel did at every moment. 
Software developers today are sort of much the same, but they’re much more productive. But nevertheless, software developers today still have to tell the machine how to solve the problem. The bottleneck is the human intellect. But with machine learning, we have a radically different way of creating software because instead of programming the machine to solve the problem, we program the machine to learn and then we train it using data. The rate limiting step now is the fuel that powers machine learning: it’s the data. So, we write these machine learning algorithms, the computer can learn from experience and now we train it using data, and what’s really interesting is the amount of data in the world is – guess what – it’s growing exponentially! In fact, it’s doubling about every couple of years or so. And that’s set to continue for a long, long time to come as we instrument our cities, as we have the Internet of Things, as we instrument our bodies, as we gather more data. And that will fuel this revolution in machine learning, which is why I think the hype around artificial intelligence is not incorrect, it’s just misplaced. The real excitement, for the next few years is going to be this exponential growth in our ability to create new technologies, not by programming machines, but by having them learn.
Host: Let’s come back to healthcare. I know this is a passion of yours. So, talk about some of the strategies you are working on to improve healthcare with AI and ML.
Chris Bishop: Healthcare is possibly the biggest opportunity of all for machine learning. But it’s a very, very difficult field in which to work. And in many ways, healthcare, as an industry, is still in a relatively primitive state compared to some areas of manufacturing or other sectors. The healthcare industry still buys fax machines, for example, which is extraordinary! I didn’t realize that fax machines were still being manufactured, but they are and it’s the health industry that buys them! So, in a way, before we even think about machine learning in healthcare, we have to think about the digital transformation of healthcare. My personal medical records are probably stored on bits of paper scattered across various different cities in the UK according to where I’ve lived and so on. We first of all have to think about digital transformation of healthcare and then we could think about the machine learning that builds on top of it. So, it’s a bit of a long-term bet. On the other hand, the societal benefit that could come by taking a more evidence-driven approach to healthcare is phenomenal. One of the things that it can allow is the potential for personalized healthcare because we’re individuals. Personalized healthcare, though, if we’re to deliver it at scale, has got to be done in an automated way, and machine learning offers that potential. In just the same way that a machine can learn your preferences for movies, it can learn what would be an appropriate course of treatment for you as an individual. But it can only do that by learning about the patterns that occur in large numbers of people. So, by analyzing data from millions of people, we have the potential to create personalized health solutions for each of those people as individuals. I call it the “paradox of personalization.” So, in terms of the strategies, we bring two things, but really only two things. One is our cloud technology, and the other is our machine learning expertise.
But for everything else, we need to work in partnership, so all of our healthcare work is done in collaboration with medics, with clinicians, we work with local hospitals, actually hospitals around the world, bringing together domain experts from the healthcare sector with machine learning experts, jointly, to work on solutions.
Host: If we have personalized healthcare that’s digital, or you know machine learning based, where does that put the doctor? How much of this is, maybe, going to displace medical professionals? Any?
Chris Bishop: There’s always a question asked about whether machine learning will replace people or whether it will help them. One particular project we’re working on is using machine learning to find the boundaries in three dimensions of tumors, brain tumors for example, in order to be able to use that information for radiation therapy planning. Now, this is a job which is ideal for a machine because the machine can do it very much faster than the human with less variability, more accurately. And the clinicians that we work with, the radiation oncologists love this technology. They are very excited to have it because this is a part of their job which is tedious and time-consuming, but they have to get it right and they have to pay attention to the detail, so they’re excited to work with us to help to create tools that will allow them to do that piece of their job more effectively, to free up time to do things that machines aren’t very good at. Of course, you need the machine to know where the tumor is in order to plan the radiation therapy. But you also need a conversation with somebody about whether you want that therapy or not, what are the outcomes going to be, what are the implications going to be for you and your family? You got to make these complex decisions. And I, for one, wouldn’t want to do that by interacting with an app, I would like to talk to a clinical expert and really understand, based on their experience, but also on their human empathy, what is the best path forward for me.
Host: Circling back to the healthcare issue and the big data, what’s your take on how we can work to protect privacy and trust, then, in a world where big data sets are essential to machine learning and data is even a new form of currency?
Chris Bishop: This is a really important issue. And part of it is technical, but part of it is more general. Somebody told me the other day of a lovely Portuguese expression, apparently, which says that, “Trust arrives on foot, but it leaves on a horse.” Meaning that it’s hard won, but easily lost. So, trust is not just about technology, it’s about perception and it’s about the confidence that people have in the technology. That said, though, technology can help with some aspects of trust. So, in particular, one of the areas we’re working on in the MSR Cambridge research lab, we call confidential machine learning. And the idea is to be able to take data, which normally would be protected by encrypting it, so it’s scrambled in a way that others can’t read it without the keys, so that’s standard, and it would be stored in an encrypted form, again, that’s standard practice. But now when you want to process that data, for example, you want to use it as the fuel for machine learning, then you have to decrypt the data and once the data is decrypted, it becomes vulnerable to attack. So the technology that we’ve been developing, and it’s been deployed on Azure, the Microsoft cloud, allows for the data to be decrypted only inside what are called secure enclaves. These are very tightly controlled software environments, protected with certain hardware technologies, making them very secure and meaning that only those with access to the keys to the data could ever access the data itself, even when it’s being processed, not just when it’s being stored. Even to the extent that Microsoft itself can’t access the data of its customers if it’s being decrypted inside these secure enclaves. So, that’s a kind of technology which can help to protect the most valuable kinds of data. But it’s not enough just to have the technology, you need to have the trust.
But it’s important, I think, also, to understand the benefits that can arise from the application of data to machine learning so that we are able, as a society, to find the right balance between how we use data and how we protect data. Because, at one extreme, we don’t want a free-for-all where data is readily available to everybody when it’s clearly private. On the other hand, we don’t want to miss out on the enormous opportunity to improve lives and save lives that could come through applying machine learning in fields like healthcare. So, technology can help, but it’s by no means the complete solution.
Host: So, let’s talk about talent. You’ve said that you’re looking for researchers that not only are the world’s best, but also fit with “our values.” How do you define those values? In other words, what kind of incentives can any company offer top researchers now?
Chris Bishop: That’s a great question because there is a tremendous competition right now for top talent in the field of machine learning. What I would say to somebody going into the field, or somebody who is on the job market and thinking about coming to Microsoft, and in particular to Microsoft Research, is to say that we have this great combination of, on the one hand, the opportunity to have a lot of freedom and to set the direction of your research and to go after things that you’re particularly excited about, but at the same time, mechanisms to take the outputs of your research and get them out there into the real world. So, the people that I really want to attract are those who want to go after really hard, deep, challenging research problems, but with a view to the output of their research being used to make the world a better place, and actually to see that work being used at scale. And that’s one of the great things about Microsoft: we can reach hundreds of millions, or even billions, of people with our technologies.
Host: We’re at a stage now where we have unprecedented compute power. We have huge data sets and sophisticated algorithms. But I’m hearing that people in the field are starting to recognize that we need more than computer scientists to solve them. So, is that true, and if so, talk about the trend toward this interdisciplinary approach to problem-solving.
Chris Bishop: Sure. So, that’s definitely been one of the transformations in the field over the last thirty years. For the first twenty-five years or so that I’ve been in machine learning, the goal was to get the error rate down, to have the performance of the algorithm be sufficiently good that it was interesting. And so that was really all that anybody was focused on. But now that the error rates are low, now that the algorithms are working on real-world problems with high accuracy and therefore becoming of great interest for practical application, we now have to explore and understand a whole swathe of new research problems that arise when you take that algorithm and you put that in the real world. So first of all, there’s going to be an end-user, and that user will have some sort of experience. And they’re not going to want to engage directly with an algorithm. There will be some sort of user interface, some kind of user experience. So, having designers who can design that user experience is crucial. But also, we need social scientists who can understand the way in which people engage with the technology. That opens up many research challenges that need to be explored. We need to think about adversarial attacks on the AI and how to defend against those. Again, that opens up a whole range of research challenges. We need to think about explanations. And you don’t want an explanation expressed in some sort of mathematics if you’re just a regular end user. You need something expressed in normal language that you can understand. So how are we going to address that? And then one of the real challenges is getting people from different fields working successfully together because they often have different cultures and different language and different terminology, and yet, they need to collaborate if we’re going to tackle some of these problems.
Host: You’ve just answered two of the last three questions I have. But I do want to ask you, you know, we talked about what gets you up in the morning and I always ask my guests is there anything that keeps you up at night? And you’ve alluded to a couple of the things that we need to be concerned about when we are developing, designing and implementing AI and machine learning technologies. And most people are talking about this idea of bias and fairness and transparency. But there are other concerns out there about AI in general. Some of them are a bit fantastical. Chris, what would you say to the – could we call them fearmongers? – of AI beyond bias in data?
Chris Bishop: So, I think there is another danger that we haven’t talked about. But it’s not the risk of super-human robots taking over the universe. I think that is fantastical and farfetched and, whether for better or worse, lies many years in the future.
Chris Bishop: So, I don’t think we need to worry about that. The very real concerns around bias and fairness and transparency are incredibly important, but the good news is people are thinking about them. There’s a lot of discussion about this, and a lot of very smart researchers are working on it. So, I feel good about that, not because they are not hard problems or they are not important, but at least we are aware of them, we are talking about them, we’re researching them and we’re making progress. There’s another danger though that, if there’s anything that keeps me up at night about machine learning and AI, it’s this: in some way we will have some sort of bump in the road. Perhaps it will be something around bias, perhaps it will be something around privacy, perhaps there will be some security issue. There will be something which causes us to turn our backs on the technology, and we would forgo the amazing opportunities which machine learning can offer us, let’s say just in the healthcare space, where this could literally improve and save the lives of countless people around the world for decades and centuries to come. So, I think we have to be very careful, at the same time as we discuss all the challenges and risks of the technology, to keep in mind the enormous potential benefits so that we find the right balance.
Host: Tell us a bit about your history, Chris. How did you come to MSR and come to lead a lab at MSR?
Chris Bishop: The way I came to join MSR was actually quite interesting. I’d been in the field of machine learning for six or seven years, something like that, and I was an academic in Birmingham, a research professor. And I submitted a proposal to what’s called the Isaac Newton Institute in Cambridge, which is an institute for mathematical sciences, and it runs six-month research programs, and I proposed a program on neural networks and machine learning. This is back in 1997. And the proposal was accepted. And so, I got to bring to Cambridge, over a six-month period, essentially all of the top people in the field of machine learning at the time, and we had a tremendous time. It really was an amazing six-month research program. What was interesting, though, was that the program began on the first of July of ’97, which happened to be the exact same day that Microsoft set up its first ever research lab outside of the US. And it chose Cambridge in the UK as the location for that lab. So, I arrived in Cambridge on the exact same day that Microsoft Research started. And so, the whole of Microsoft Research Cambridge came to see me in a taxi, three of them, the founding lab director and a couple of deputies, came to visit me at the Newton Institute and said, “Hey, we’re setting up this lab and you know we’re going to do all this great stuff and do you want to join us?” And I thought about it for a few nanoseconds and said, “Yes, I’d love to.” And of course, I stayed on to finish the six-month program, so I didn’t actually start working in the lab until January. But it was an amazingly happy coincidence. And so, I’ve really been at the MSR Cambridge lab since it began, helping to build the machine learning group and leading it for many years, and then, three years ago, the opportunity arose to become lab director, and that’s an exciting and new and different challenge and I’ve been having a lot of fun doing that.
Host: As we close, perhaps you can give some parting advice to our listeners, many of whom are just getting started in their research careers. We talked about some of the most exciting problems and challenges that you see on the horizon already. What would you tell your 25-year-old self if you were listening to this podcast?
Chris Bishop: I think my top-level advice would be not to worry too much about optimizing your career and planning your trajectory through different job opportunities and so on. But instead, just go after something that you really care about, because at the end of the day, you’ll have much more energy, you will most likely achieve a lot more, and the career will kind of sort itself out. And especially if you are in a field anything like machine learning, there are so many opportunities out there that really, just focus on the thing that really excites you, whether it’s working on the algorithms, whether it’s working in healthcare, whatever it is. Focus on the thing that you are most passionate about and the rest will take care of itself.
Host: Would 25-year-old Chris Bishop listen to you?
Chris Bishop: Probably not, but I didn’t listen to anybody.
Host: You know what? That’s why you are successful in research. Chris Bishop, thank you for joining us on the show today.
Chris Bishop: It’s been fun. Thank you very much.
To learn more about Dr. Christopher Bishop, and the innovative research he directs at MSR Cambridge, visit Microsoft.com/research.
The post Machine learning and the learning machine with Dr. Christopher Bishop appeared first on Microsoft Research.