Eric Schmidt and Robert O. Work
AI for National Security and the Challenge of China.
CRAIG: The US and China are strategic competitors in many arenas, but none is more critical in shaping the future of the world than the competition for dominance in artificial intelligence. AI will likely determine which country wins in the economic realm and, in turn, in the national security realm.
In 2017, China announced its goal to become the world leader in AI by 2030. The US responded by creating a commission to review America’s competitive position and to advise Congress on what steps are needed to maintain US leadership in this important field. Former Google chief executive Eric Schmidt and former Deputy Defense Secretary Bob Work were chosen from among fifteen appointed commissioners to lead the work.
On the eve of the commission’s release of its interim report to Congress, they spoke about the challenges that the US faces in winning support from a skeptical private sector and in maintaining engagement with China while ensuring that that engagement doesn’t work to America’s detriment. I hope you find the conversation as riveting as I did.
ERIC: So, I think you know that, by an act of Congress, we were created about a year ago, maybe a year and a half ago. Congress appointed on the order of 15 commissioners, and Bob and I were fortunate enough to be more or less put in charge of the process, to act collectively and in a bipartisan way to study the AI issues involving national security.
And there’s reason to think that AI matters a lot. It’s a new technology. It applies very broadly across industries, research, national defense and so forth and so on. And I think there was a consensus that we didn’t have our act together as a country.
And I think most people would agree with that. So, we set out with a two-stage process. The first stage is to get the facts, sort of report out where we are, if you will. And then the second stage, which is roughly a year from now, a little bit more than a year from now, is to report on our recommendations. So, this is an interim report and it should reflect, it does reflect the consensus of the commissioners so far. This is an interesting process for Bob and me because everything is by consensus; we don't speak as individuals. We speak as a group. And so, if you ask me a question, I'll say whether it's my opinion versus what the report says.
BOB: Craig, I think you've actually seen a copy. As Eric said, this is the end of our assessment phase, where we tried to get as good an understanding as possible of what was happening across the entire breadth of the US government: the intelligence community, the Department of Defense, the Department of Energy, the Department of Commerce. And so, it ends with this report, which really has seven consensus principles and five lines of effort that we identified that will guide our next phase, because we will come up with recommendations in those lines of effort. But we do have 27 initial judgments on areas where we might need more attention or action.
So, people might say, Hey, where were the recommendations? But this is an interim report, which kind of gets the commissioners all on an even consensus keel.
ERIC: And can I just add, I'm not aware of a group of this distinction being assembled before. So, you've got the head of Microsoft Research, you've got one of the greatest computer science researchers of his generation, the head of In-Q-Tel, the person who runs Amazon cloud. You've got the staff from the government. So, what we don't have in claims we have in discipline, thoroughness and completeness. That's all I'm going to say.
CRAIG: Yeah. And actually, one of the questions is about the scope of the commission's mandate. China, as you note in your report, has a national AI strategy. A lot of countries have come out with national AI strategies. Are the commission's efforts intended to draft a national AI strategy, or is it a subset that is only focused on national security? I mean, is national security just part of it or are you looking at the broader national strategy?
BOB: Well, there is a clear focus on maintaining AI advantages in the realm of national security. It is very clear in the guidance to the commissioners, and what we have concluded is that AI's contribution to our economic prosperity is a national security issue. So, we have to get it right for the sake of both national security and economic prosperity. But I don't believe we intend to come up with a US AI strategy. We are going to be making recommendations to the Congress, which is our primary customer, on the legislative moves that it could make to make sure that we are in the best competitive position possible.
ERIC: And, I agree with what Bob said. What I would tell you is the country needs a collective AI strategy. This one is, I think, going to be the definitive one around national security, which is what we were chartered for.
CRAIG: You said in the interim report that you’re eager to hear from more Americans and our allies. I was wondering what are the mechanisms or channels you have in place to collect those inputs?
BOB: Well, we’ve actually made good headway on our allies. We’ve met so far with the United Kingdom, the European Union, Japan, Canada, and the Australians, and we have solicited their views and we’ve made the case that working together we can solve any data disparities. For example, a lot of people say China has an advantage, because it has so much more data. But by aggregating all of the democratic nations and working together, you know, we feel that we can offset any problem in that regard. And we intend to talk with other countries also. But I just wanted to give you the five that we’ve talked to now.
CRAIG: Is there a dialogue with China? Because certainly there needs to be some coordination, whether or not it’s cooperation.
BOB: I know of several tracks. You know, the University of California, San Diego has a track with the Chinese talking about AI. CSET, the Center for Security and Emerging Technology, you know, they are talking with the Chinese, but we as a commission have not yet had any interaction with the Chinese.
CRAIG: Yeah. Do you expect to?
BOB: There’s nothing currently planned.
CRAIG: You note in the interim report that China has deployed AI to advance autocratic models and is setting an example for other authoritarian regimes that are already adopting its model. How can you counter that? Brendan McCord suggested exporting an alternative, more inspirational set of technologies around, for example, humanitarian assistance and disaster relief that would promote American values.
ERIC: So, in the first place I think the report makes a very important statement, which is that AI in the United States needs to be done with American values. And American values are broadly liberal in the sense that they're not authoritarian, they're democratic, they encourage discussion, we don't favor censorship. There are many, many aspects of those values which I think are captured by that statement. And we also pointed out in the report that the Chinese are functioning in an alternative system and that it's just a different system. It's not our system. And there is a question of diffusion of those models. And you see this in their surveillance techniques.
The report does not suggest the kind of path that you’re describing or any other path. It doesn’t say, ‘Oh, as a result we should, you know, build a system and then we should basically get people to use it and in return they promised not to use the Chinese one.’ That’s a possible path for the future, but we don’t recommend one way or the other. We, the purpose of this report was to sort of say, ‘there’s somebody on the horizon who is different in values from us who is quite capable.’
CRAIG: Certainly. You say that the academic and private sectors should reconceive their sense of responsibility for the welfare and security of the American people, but there's already so much distrust of both government and the private sector when it comes to data privacy and surveillance and AI generally. Can you talk a little bit about how you convince people that the government is being transparent in its application of AI?
BOB: Well, I think in my view, the decision by Google to remove itself from Project Maven was viewed by some people as a canary in the coal mine and people were worried that it would cause a broader stampede of private sector innovation away from the government when it comes to AI, and thankfully that never happened. The department has spent a lot of time trying to explain how it intends to use AI. It talks a lot about the ethical use of AI and how the department is going to make sure that it’s true. And in my view, you’ve hit the nail on the head. Most of the companies that work with the department have two big concerns. The first concern is we would just like the government to be transparent on how you intend to use the AI. They’re not looking for some type of manifesto that says we will never use AI in this case, this case, this case.
BOB: It’s just, give us a sense, are you using AI for computer vision? Are you using it for, you know, translation? What do you want to use our AI for? And then they can make the decisions on whether or not it meets the ethical and mission statement of the company. And the second thing is they would just like the government to be much more agile in their contracting, etc. So, I would say that the department and the government is trying to go out of its way. The model for this, I think, is the Defense Innovation Board’s AI ethics. And they had numerous interchanges with American citizens and different companies throughout the country, as they were developing this. And this was one of the things we make clear is that we want to learn all sides of an issue before the commissioners bear down to get a consensus recommendation.
BOB: So, we do not have them scheduled yet, but our plan over the next year is to have public and private meetings with many, many different groups. And Eric and I have already talked about going out and talking with folks. So, I'm cautiously optimistic. We've kind of turned a corner here, Craig. I don't know. But the early part of your question is the most important one. What we tried to do in this report is explain to the American public what the stakes of this competition are and why it is important to their own welfare and security. And so, we spent a lot of time on that.
CRAIG: And just on the DIB’s recommendations on the AI principles. You know, critics are going to argue that it’s just for show and that they’re too vague to be meaningful and that this is an effort to appease Silicon Valley after the Project Maven debacle. I mean, are you concerned? It’s very difficult to write principles that are broad enough to apply to many different use cases but are not so broad that they’re just words. I imagine the commission is struggling with that as well.
ERIC: So, we released the AI principles recommendations for the DOD on Thursday, Halloween. Read the document. And what you'll see, just to sort of summarize, is a bit of background on why ethics is important. One of the arguments that we made, and of course I'm the chairman of the DIB, so I was there, is that there's a long history of ethics in the military and in war; it's governed by the Constitution, it's in the laws of war and so forth. And the DIB conducted a 15-month study that was as inclusive as possible. There's a long list of people who were consulted. It's in the document. They define what AI is, they talk about existing frameworks and values. And then they come up with five basic principles that should be used. And I won't summarize them, but think of it in the following way.
ERIC: If you’re using AI and in a commercial or a military system, they need to be under human control. They need to avoid having unintended bias. You have to be traceable, they have to be reliable and otherwise have to do what you want. And we also took a strong position that there has to be some variant of the ability to stop the system if it goes awry. So, so those five principles I think are, are hard to argue against. And with respect to your question about the motivation of this, since I was part of the creation of this, I can tell you that I believe very strongly that every organization should have an AI ethics principles, which forces the organization to confront these questions and allows it to be judged against its principles.
CRAIG: On one thing that Bob said, do you expect there to be a new breed of defense primes [primary defense contractors work directly with the government, manage any subcontractors, and are responsible for ensuring that the work is completed as defined in the contract.] coming up that are more nimble, more high tech-first and closer to the Silicon Valley than the legacy defense contractors?
ERIC: I believe a couple of things. We presented in the public meeting at the DIB that there is a change in the procurement process. There's a complicated procurement process, as you know, and this change will allow for software procurement of the kind that I am used to in the commercial world.
Assuming that occurs, which is highly likely because it's in the approval process, not done yet, then it should be possible for there to be significant software suppliers to the Defense Department, the national security community and so forth. That is a key component of the need here, because the national security apparatus of the country doesn't have its own technology to first order; it buys it from people, and it needed a mechanism to buy from these new companies. I think that's a great thing. It will create more competition, more choices, more investments. So, I think the answer is yes.
BOB: Just to jump on what Eric said. I don't know if you've heard Will Roper, but he says, my goal is to change the United States Air Force into a software company, a software organization. And if he is successful, just as Eric said, then you might expect to see some software primes that are reactive and proactive, trying to solve, you know, problems that the services have. So, it's early days. I know all of the primes are doing everything they can to reach out to the innovation community itself and to exploit the new changes to the OTA, the Other Transaction Authority, that Ellen Lord is going to announce. She says, look, OTAs were designed for mid-tier programs, but what if we used OTAs for the components, the software, the AI stacks inside major defense acquisition programs?
BOB: Well, that will really change things a lot also. So, as I said, it's early days, but I agree with Eric. I mean, I'm cautiously optimistic that this is going to change things in a way that we can't completely foresee now, but we can anticipate it would really be a big change.
CRAIG: Yeah. Yeah. Okay. And then focusing on a bit of the report, I understand that, that the commission’s looking at national security from domains beyond military, but there’s a popular notion that US policy doesn’t allow the use of lethal autonomous weapons when in fact the DOD policy does allow those weapons if they were developed with appropriate levels of human judgment. And the definition of judgment is quote unquote flexible, which raises a lot of questions. Do you see those weapons as a necessary bargaining chip in developing international treaties regulating their use, or is the nature of war about to change forever?
BOB: The report said lethal autonomous weapons are an important area for study. It doesn't suggest a path.
ERIC: I think that’s possible, but I could imagine many other paths. Bob?
BOB: Yes, I agree with Eric. This was one of the hard problems. The commissioners said, look, we have to understand all sides of this issue before we can come to a conclusion. So, in this regard, we are having a full day with the Campaign to Stop Killer Robots, the International Committee of the Red Cross, a whole number of external civil society groups and FFRDCs, to really understand what they consider to be the objections to these types of weapons, so that we can really understand what their concern is. And the only thing the report said is we anticipate that AI and AI-enabled systems will have an enormous impact on the way the Department of Defense and the government does business, the way it prepares through training, et cetera, and in direct combat applications. In fact, there's one line that says we're not trying to glorify an AI future, but we fully expect it to have as big a change in warfare as it is going to have in our economy. And we left it at that.
BOB: This is an area that Eric and I want to lead the commissioners through so we have a full understanding, but I did want you to know that we're actually talking to the folks who think we should not pursue these weapons.
ERIC: Let me just add, I think that your questions are about essentially diplomacy.
So, what does it look like between nation states in the presence of these technologies? The report does not today make a recommendation in that area. I can imagine many scenarios. The most obvious one is that there are a certain set of things that you would want to have with no surprises. So, you want to make sure that AI systems do what they said. You don’t want things escaping into the wild, if you will, with unintended consequences. So, you could imagine basic agreements even with global competitors that are occurring. But there is not a basis in our country today for thinking through that. In other words, there’s not a consensus on what those things are, and so we could consider that. But that’s sort of not what we were asked to do. I think we should probably just focus on AI leadership in the United States and then allow the diplomats to kind of figure out how they want to organize it.
CRAIG: Yeah. Yeah. And admittedly that was a leading question, but partly because China's been very clear that it is developing lethal autonomous weapons, although it supports a ban on their use, I believe, but not on development. So, if we're going to try and come to some sort of an understanding internationally, you need to be negotiating from a position of strength, and so I would assume that the US would want to develop those capabilities simply as a bargaining chip, but understood that that's beyond the commission's mandate.
ERIC: Yeah. Again, I think, I think that’s a strategy. I can imagine other strategies. The other thing our report talks about, using China as an example, is that we are not taking a position of decoupling versus entanglement. We’re trying to thread the needle, as Bob likes to say, through those choices. So, I would encourage you not to think of this as black and white. Right? You know, us versus them. There’s probably competition and cooperation in many of these areas.
CRAIG: Yeah. And can you talk a little bit in concrete terms, how you might thread that needle? How you can constrain collaboration without separating or untangling the two communities?
ERIC: Well, one of the things that the report does say is that it would be very expensive to totally disconnect from China.
And when I say expensive, I don't mean in money. I mean in terms of strategy. And the commission states quite clearly that we heard from the research community of America that Chinese nationals, for example, are important for our research enterprise in our universities. We also point out in the report that there are ways in which you can build trusted systems out of untrusted components. And so, my point here is that it's not, you know, ban or be married. Right?
There is a middle path, which is probably the most powerful from a United States perspective. Our job is to make sure that the United States wins. Right? We should do whatever it takes to make sure that the US wins in this space.
CRAIG: Yeah. And, I agree that it would be a disaster if we head down the road of banning Chinese researchers in the US or something like that.
BOB: You know, I think the report is very clear. Look, we have to act to protect our interests in light of state directed espionage, the concerted efforts to extract AI knowledge from private and public institutions and the centrality of AI to China’s strategic ambition. But as Eric said, there are benefits from cooperation, especially in the AI field. There are areas where I would say, and I think Eric says this all the time, we both want AI that is explainable and trustworthy.
And so being able to share research in these types of areas would be good for us. This is one of those things where it's not a zero-sum game. Both sides win if they have AI that they trust, et cetera. So, it's really trying to understand the direct and indirect costs of the choice between disentanglement and entanglement, and that's what we want to try to conceive of over the next year.
CRAIG: Yeah. You talk about encouraging high-skilled talent to come and stay in the US, and again, I understand this is interim and you haven't reached final recommendations, but are you talking about raising the H-1B visa cap, or are there other ways the commission is considering that would create new pathways for talent from China and elsewhere to come live and work in the United States?
BOB: We have not come to any conclusion yet on any specific recommendation.
You know, Eric and I keep saying this, but it’s really important to understand that we think this will be much more powerful if we can get 15 commissioners of such diverse experience in this area to come to consensus judgment. We think that will be the most powerful thing.
BOB: But we are all absolutely convinced that we want the United States to remain the magnet for innovation talent in the world. That is kind of an underlying desire. We want to remain the magnet. Now how to do that, as Eric tells us all the time, there are many different strategies to get there. We don’t have any specific recommendations for you, but we are intent on trying to spur the government to keep this in mind.
CRAIG: Yeah. Okay.
ERIC: Can I just add a general statement. I want to be a little bit stronger in terms of our claims about what this report is. There's a lot of stuff in this interim report. We have 27 initial judgments across a waterfront of issues. As far as I know, this is far more substantive than any other interim report that has happened. And because we are scientists, right, or science-led, a whole bunch of scientists, and Bob's like a member of the scientific community here, the analytical depth and the specificity of the judgments and the recommendations will be very, very high. So, we're trying to do this thoroughly and precisely. At the end of the day, there's a balance of engagement, disengagement, access to information, foreign students, so forth and so on, and we'll come up with what we think is the best way through that path.
CRAIG: China’s famously committed $150 billion to reach global leadership in AI by 2030. Can you give an idea of how much the government is spending currently annually on AI development and implementation and then sort of a ballpark of what kind of additional spending is required between now and 2030 if we’re going to maintain our leadership role? I mean is it on the order of $100 billion?
BOB: The answer is there is no specific number in the US government that tells us exactly how much we are spending on AI. One of the problems is that we have not defined what we consider to be within the programmatic boundaries of AI in government spending. Once you define that, you can actually go in and find it out, and we have tried working with some data science companies to actually ferret that out.
BOB: But the key thing that we are saying is, look, you have got to spend. Global leadership in AI is a national security imperative and you have to invest in AI research and development. So, I know Eric has some big thoughts on this, but we have not yet come to a number, and I'm not certain we will ever come to a specific number and say, hey, you have to spend this much money per year to stay in the fight.
ERIC: So, we actually looked at this question. Bob actually tried hard to get this number, and there is no government category that corresponds to this. So, I am aware of legislative proposals that people have been making that are on the order of, you know, $10 billion a year of incremental spending across many, many factors, which is roughly the number you're talking about. To give an example, the National Science Foundation number is less than that.
So my guess is that we will make a recommendation for increased spending, but I don't know methodologically how we can show you what that number should be. China has been very aggressive in its statement of a national plan and its increase in funding. China has a national plan for 2030, it has a national budget for 2030, and it has identified four national champions in different areas. It is working very, very hard to generate national capabilities in this and a number of other strategic areas. What is the US answer? We highlight that without proposing an answer, because it's not time yet.
CRAIG: Yeah. In your views, either from the point of view of the commission or individually, which is the greater threat: China's economic competitiveness or its version of techno-authoritarianism?
ERIC: The governance model of China?
CRAIG: Yes, but backed by artificial intelligence. So, the economic effect of China’s AI push or the global political effect of their AI backed governance model.
BOB: Now, this is not in the report, but I know this to be the case through a wide variety of national security studies: the United States has never faced a strategic competitor with a gross domestic product greater than 40% of its own. The Soviet Union just barely got there in the Cold War. Even if you added Japan and Germany together in World War II, it was right around 40%. China has surpassed the United States in purchasing power parity already. And if trends continue, and you know, this is no guarantee, there is a possibility they could surpass the United States in absolute terms of GDP in the late 2020s. So, we've never faced a strategic competitor with an economy greater than 40% of our GDP. We may be faced with a competitor that has an economy that's bigger than ours. And China, beyond AI, wants to become the world's innovation leader.
BOB: It wants to surpass the United States as the world’s innovation leader, not only in AI but in a wide variety of different things. So, they’re inextricably linked, Craig. I mean if they meet their economic goals, they will be able to put more money in the technology innovation side. And this becomes a tremendous competitor for the United States. And that’s why the commission spends so much time saying, look, this is a competition that we could lose.
BOB: We don’t actually say that baldly, but we are in this competition and China is very, very, very intent on winning the competition, and therefore we really need to take it seriously and structure ourselves to remain on top.
ERIC: Let me just say, I understand the question you asked, which is worse. And the answer is, I think our joint goal here is to make sure that the United States has a plan to win in an important area of technology. Right? So, let me give you an example of why we have to take this seriously with respect to China. On October 1st the CCP said they wanted to lead the world in the following technologies, and let me just list them for you: quantum communications, supercomputing, aerospace, 5G, mobile payments, new energy vehicles, high-speed rail, financial technology, and artificial intelligence. Right? So, they've stated their intention, you know, across important technologies, and I think the United States needs to respond. This is a competition and it's important that we get our act together to lead in these very important areas.
CRAIG: Yeah. Is the commission addressing quantum computing and neuromorphic computing?
ERIC: We have a note on it, and you'll see this in the paragraph which says that this is an associated technology. Today quantum is extremely interesting, very important. It becomes fundamental when quantum AI is possible. Quantum AI is where the calculations that are done today are done infinitely faster, but that's not in the next year or two.
That’s a ways away. Most people believe that it will eventually occur. And so, the commission notes that this is an associated technology, but it does not say anything much about it.
CRAIG: Okay. In all of these points that are in the interim report, is there one or two that you want people to pay attention to more than others? Are there some salient messages in the report that I’m missing, for example?
ERIC: I think the most important thing to say is that it’s a new world. It’s a new world of AI. It’s a new competitive world, and we need to get our act together. These are my words, but it’s important that the United States have a view, a strategy that’s consistent with our values, right? We’re not China. We’re not Russia. We are the United States. We’re proud of ourselves. We want to do all of that. The second thing is that there are a bunch of tricky issues which are not easily solved by black and white solutions. We have to find a way to be engaged with China, but also careful to not be taken advantage of and to make sure that the US wins. And that’s the purpose of the next year for us.
BOB: An American way of AI is beneficial not only to our country, but the entire world. You know, a focus on data privacy, a focus on ethical use, a focus on not using AI to disrupt other countries, find fissures in their societies and break them apart. This is a competition that reflects our values, between a democratic vision of how AI can be very hopeful, and an authoritarian vision where it’s a little bit more dark. So, I don’t think we come out explicitly and say it, but I think anybody who reads the report, will set it down and say, okay, we understand the stakes of this competition a little bit more. And it’s important that the American view of AI prevails over the long run.
CRAIG: Yeah. I know you are dodging this question from many quarters, but can we expect that the final report will recommend some new structure to coordinate funding and AI initiatives across government? I mean, Eric, you said that there is a need for a national strategy. Is the commission likely to recommend some structure that would embody a national strategy or is this narrower than that?
ERIC: Well, as I said, I am aware of a number of legislative efforts that are trying to consider that. I think the commission has not debated this sufficiently to give you an answer to this question. I’m sure we will recommend more money, more focus, more strategic help. But, you know, should you create the equivalent of, you know, this institution or this foundation or what have you, I don’t think it’s there. Bob, what is your opinion?
BOB: You know, for the first time the White House actually put out an AI strategy. Now a lot of people will say, well, it didn’t have specific goals like the China strategy and in my view, that’s okay. We’re setting up the position where the White House has come out with an AI strategy. Congress has set up this commission to help them think it through. It’s specifically designed, so the primary customer of this commission is Congress so that they can make the legislative moves that they feel will make us more competitive. And so, it’s just too early for us, for I think Eric and I, to say where the commissioners might come out.
ERIC: Let me add that there's this sort of belief that somehow there's a unitary solution to this: if we just built one building and put a whole bunch of people in it and put a whole bunch of money in, they would solve this problem. That's not how AI works. AI needs to be pervasive. It needs to be throughout everything, to be everywhere. And so, whatever we say will be more AI in the transportation department, in the defense department, in the finance department, in the universities. More students, more education, more private sector engagement. As Bob said, we're arguing strongly in favor of greater connections between the government and the private sector. So, it's more, but it's diffused. It's not in one place. It's multi-headed, and I think that anything we recommend will be of that nature. It won't be a division that controls everything, because that's not going to work.
CRAIG: Yeah. And certainly, the reason that works in China is it’s still very much a centrally planned economy.
ERIC: Yeah. I’ve actually read a set of articles recently that said that the good news is that China’s centrally planned initiatives in the past have not worked as well as claimed. So that’s the other side that says, well, okay, China announced this, but they may not be as successful. But it would be crazy for us to just assume that they are not competent. They’re very competent. And in fact, in the report we talk about how good China is, and that we should take them very, very seriously.
CRAIG: Certainly. And a lot of that $150 billion will be wasted. But even if only a quarter of it lands in the right places, that’s still a lot of money. What’s the commission’s view on funding the creation of new data sets, or on unlocking data with privacy-preserving AI like federated learning, which might open up, for example, healthcare data from DOD, the VA, Tricare and the other data pools for researchers?
ERIC: The report notes that anything we do in data needs to be privacy preserving, and it also notes that AI needs this data. Okay. It doesn’t take a legislative position on that. But I think we collectively believe that we would like to have data available. We don’t want to give up our values, we don’t want privacy violations. You know, that kind of stuff. You have to figure out a path through that.
CRAIG: Yeah. The government’s purchasing power can do a lot. It did a lot to spur the solar power industry, for example, so it could be used to deploy accountable AI systems, or fair lending systems, or accountable facial recognition systems, or interpretable criminal justice systems. Is that something that’s likely to come out of the commission? A recommendation to use government purchasing power not only to fund initiatives, but to fund markets?
ERIC: Why don’t we take that as excellent feedback from you as a citizen for our deliberation.
CRAIG: You say in the report that AI is not hidden in a top-secret Manhattan Project, but it’s something that I’ve talked to a lot of people about. And knowing China pretty well, isn’t there a danger that China will develop a massively ambitious secret project to reach, for example, artificial general intelligence ahead of any other country or develop some killer application that would put them squarely ahead of the rest of the world? I mean, is there a concern about that?
ERIC: Again, let me give you a personal opinion. This is not in the report, so let’s be clear: the report does not discuss this. You could imagine that there is a city in China that we don’t know about where they’re inventing some horrific new achievement in the military or national security context. Right? It wouldn’t have to be AI, though. This is not a new concept. So rhetorically, you said they would have a Manhattan Project of their own and it would produce AGI and so forth. So, the question to me is, if that were to occur, how long would they be able to keep it secret? And my judgment is that software is different from hardware in that the collective community of AI researchers is moving so quickly that if one country got an advantage like that, there would be a very small gap between it achieving that and the other guys achieving the same thing. So, it’s a global race and the global race is fast. Right? So, the strategy that you’re describing, on anyone’s part, is unlikely to work, in my personal opinion.
CRAIG: And you’re saying in effect that basic research cannot really succeed in a silo. So, what would be happening are specific applications based on well-known …
ERIC: I want to make a general statement about software. So, software capability is now global. And so, you could imagine a scenario where one country invents something new. Well given the global nature of technology, the sharing of ideas, how long before other countries would have something similar and the answer is not very long. I think that’s a generically true statement. And so, I call this diffusion.
So, the diffusion of ideas is so rapid now that the traditional strategies of control and limitation don’t give much competitive advantage, in my opinion. And again, this is not something that the report covers.
BOB: First of all, I agree in the main with what Eric just said. But inside the Department of Defense, generally when you have a competitor, you think of the interwar period: everyone knew that mechanization was happening, everyone had radios, and everyone knew about the advances in aviation. The technology was diffuse; it was all over the place. It was the competitor that put those pieces together in an operational concept they called Blitzkrieg that gained an enormous advantage. So, what the Department of Defense worries about is: could the Chinese, in developing an operational concept, take machine learning in decision support systems and 5G in their communications and put them together in a way that would give them a battlefield advantage? That is something that is very difficult to judge. You have to actually know what they’re doing. So, it’s not so much the technology I worry about. They’re going to be fast followers. We want the US to keep winning. We want to be the leader. We don’t want to be a fast follower. We want to keep all of our competitors in a tail chase with us.
ERIC: And I think Bob makes a very good point. You asked a question about invention, you know, the AGI question. But Bob makes the point that there’s another kind of innovation, which is integration innovation.
So, you could imagine that we won the research component, that we invented this stuff new, but our competitor did a better job of integrating it and then we’re in a pickle.
So, I would say that you have to do both.
BOB: Amen. Amen.
CRAIG: Again, on this idea of global transparency, or the speed at which information travels: at least for commercial AI, and I presume that the government doesn’t have any secret algorithms, algorithms are pretty much free and open. Compute, you know, there’s no magic there. Really, the edge and the IP come down to labeled data, when we’re talking about supervised learning. Do you guys view it that way? That whoever has the most labeled data, whoever has, in effect, encoded the most human knowledge in data sets, will be the leader?
ERIC: One of the things that we talk about in the report is that the model that you’re describing, which is labeled data, is only one of the attributes of modern AI. There is a whole new area called self-supervised learning where you have unlabeled data and other techniques that really can give us advantage even without some of the data. So, there’s this sort of meme where everybody says, Oh, you know, data’s the new oil and you need more data. You need data, data, data, data. It’s more subtle than that. I think that the correct answer is the very top people in the world need to be in the United States working on these problems because there are technical solutions that are very, very clever in front of us on these issues.
CRAIG: Yeah. Yeah. Self-supervised learning certainly holds a lot of promise. I mean, it doesn’t work very well yet. But that would be an area that the commission would recommend funding for, I presume.
ERIC: I can assure you we’re going to focus on talent, global talent and funding AI research in order to maintain competitive advantage.
To me, the US needs to win in the form of being the place where the next generation of artificial intelligence protocols, algorithms and services are invented. We need to be the first to deploy, consistent with our values and to make our economy stronger and make our nation safer.
BOB: Well, I can’t say it better than that. I just want to end by pulling a little on what Eric said earlier. The commission really is unique in that, you know, we have Andrew Moore, who was the head of computer science at Carnegie Mellon University, now at Google. We have the head of In-Q-Tel. We have a former deputy secretary of defense. We have Eric. We have people who work at NASA. Put them together with the staff that we have, and the work they have done in support of the commissioners is quite different from that of any other commission we’re aware of. So this really is about getting to analytical defensibility of our recommendations. And that’s what we’re intent on doing.
CRAIG: I want to thank you both for your time. I encourage readers to download and read the interim report. You can find a link to it here. You’ll also find a link to a recording of this episode on our website, eye-on.ai. I welcome comments.
The singularity may not be near, but AI is about to change your world, so pay attention.