Joint Artificial Intelligence Center Director Briefs Reporters on Efforts to Scale AI

https://www.defense.gov/Watch/Video/?dvpTag=Defense+on+Demand
https://www.facebook.com/watch/live/?v=1545173179206535&ref=watch_permalink
LIEUTENANT GENERAL MICHAEL S. GROEN: Okay, good afternoon, welcome. I'm Mike Groen, Lieutenant Colon— Lieutenant General, United States Marine Corps. I'm the new Director of the Joint Artificial Intelligence Center, the JAIC. I'm very glad for the opportunity to interact with you, and I look forward to our conversation today.

It's my great privilege to serve alongside the members of the JAIC but also the much larger numbers across the department that -- that are committed to changing the way we decide, the way we fight, the way we manage, and the way we prepare.

It's clear to me that we do not have an awareness problem in the department, but like with any transformational set of technologies, we have a lot of work to do in broadly understanding the transformative nature and the implications of AI integration.

We're challenged not so much in finding the technologies we need but rather in getting about the hard work of AI implementation. I've often used the analogy of the transformation into Industrial Age warfare: of literally lancers riding into battle against guns that were machines, of flying machines that scouted positions or dropped bombs, of massed long-range artillery machines, or even poison gas used as a weapon at an industrial scale.

That transformation, which had been underway for decades, suddenly coalesced into something very lethal and very real. Understanding that came at great cost. Another example is blitzkrieg, literally lightning war, which leveraged technology known to both sides but was used by one side to create tempo that overwhelmed the slower, more methodical force.

In either case, the artifacts of the new technological environment were plain to see in the society that surrounded the participants. These transformational moments were eminently foreseeable but in many cases not foreseen.

I would submit that today we face a very similar situation. We're surrounded by the artifacts of the Information Age. We need to understand the impacts of this set of globally available technologies on the future of warfare. We need to work hard now to foresee what is foreseeable.

We have a tech-native military and civilian workforce that enjoys a fast-flowing, responsive, and tailored information environment at home on their mobile phones. They want that same experience in the military and department systems that they operate. Our warfighters want responsive, data-driven decisions. Our commanders want to operate at speed and with a mix of manned and unmanned capabilities. Our citizens seek efficiency and effectiveness from their investments in defense. Artificial intelligence can unlock all of these.

We're surrounded by examples in every major industry of data-driven enterprises that operate with speed and efficiency and leave their competitors in the dust. We want that. Most important of all, we need to ensure that the young men and women who go in harm's way on our behalf are prepared and equipped for the complex, high-tempo battlefields of the future.

I often hear that AI is our future, and I don't disagree with that, but AI also needs to be our present. As an implementation organization, the JAIC will continue to work hard with many partners across the department to bring that into being.

So let me just talk a little bit about our priorities in the JAIC today, and you can ask questions.

In JAIC 1.0, we helped jumpstart AI in the DOD through pathfinder projects we called mission initiatives. So over the last year, year and a half, we've been in that business. We developed over 30 AI products working across a range of department use cases. We learned a great deal and brought on board some of the brightest talent in the business. It really is amazing.

When we took stock, however, we realized that this was not transformational enough. We weren't going to be in a position to transform the department through the delivery of use cases.

In JAIC 2.0, what we're calling our effort now, we seek to push harder across the department to accelerate the adoption of AI across every aspect of our warfighting and business operations. While the JAIC will continue to develop AI solutions, we're working in parallel to enable a broad range of customers across the department. We can't achieve scale without a broader range of participants in the integration of AI. That means a renewed focus on the Joint Common Foundation (JCF), which most of you are familiar with: the DevSecOps platform that is the key enabler for AI advancement within the department. It's a resource for all, but especially for disadvantaged users who don't have the infrastructure and the tech expertise to do it themselves.

We're recrafting our engagement mechanism inside the JAIC to actively seek out problems and help make others successful. We will be more problem pull than product push.

One thing we note is that stovepipes don't scale, so we will work through our partners in the AI Executive Steering Group and the subcommittees of that group to integrate and focus common architectures, AI standards, data-sharing strategies, educational norms, and best practices for AI implementation. We'll continue to work across the department on AI ethics, AI policy, and AI governance, and we'll do that as a community.

We'll also continue to work with like-minded nations to enhance security cooperation and interoperability through our AI Partnership for Defense. All of the JAIC's work comes back to enabling that broad transformation across the department. We want to help defense leaders see that AI is about generating essential warfighting advantages. AI is not IT (information technology). It's not a black box that a contractor's going to deliver to you. It's not some digital gadget that an IT rep will show you how to log into.

Our primary implementation challenge is the hard work of decision engineering. It's commanders' business at every level and in every defense enterprise. How do you make warfighting decisions? What data drives your decision-making? Do you have that data? Do you have access to it? It's driving leaders to think, "You know, I could make a better decision if I knew 'X'."

JAIC wants to help leaders at every level get to that "X". We want data-informed, data-driven decisions across warfighting and functional enterprises. We want to understand the enemy and ourselves, and benefit from data-driven insight into what happens next. We want the generation of tempo to respond to fast-moving threats across multiple domains. We want recursive, virtualized war-gaming and simulation at great fidelity. We want successful teaming among manned and unmanned platforms, and we want small-unit leaders who go into harm's way to go with a more complete understanding of their threats, their risks, their resources, and their opportunities.

We're grateful to Congress. We're grateful to DOD leadership, to the enthusiastic service members who are helping us with this, and to the American people for their continued trust and support.

I really appreciate your attention and look forward to your questions. Thank you very much.

STAFF: Thank you, sir. Appreciate that. We'll go up to the phones now. The first question is going to come from Sydney Freedberg from Breaking Defense. Go ahead, Sydney.

Q: Hello, General. Sydney Freedberg here from Breaking Defense. Thank you for doing it. And apologies if we ask you to repeat yourself a little bit because those of us on the phone line were not dialed in until you'd started speaking.
You know, you have talked repeatedly about the importance of AI being commanders' business, about the importance of this not being seen as, you know, nerd stuff. How have you actually socialized, institutionalized that across the Defense Department? I mean, clearly, there's a lot of high-level interest from, you know, service chiefs in AI. There's quite a lot of lip service, at least, to AI in the briefing slides. But how do you really familiarize, not the technical people, but the commanders with the potential of this? You know, the JAIC is a fairly limited number of people. You can't send a missionary out to every office, you know, in the Pentagon to preach the virtues of AI.

GEN. GROEN: Yeah, great question, Sydney. This really is the heart of the implementation challenge: getting commanders and senior leaders across the department to really understand that this is not IT. AI is not IT. This is warfighting business. It is assessment and analysis of warfighting decision-making, or of enterprise decision-making in our support infrastructure and in our business infrastructure.

If you – if you understand it that way, then – then we open the doors to – to much better and much more effective integration into our warfighting constructs, our service enterprises, our support enterprises across the department, and we really start to – to get traction.

This is why we focus on the Joint Common Foundation, because of what we find. I think there are two aspects that are important. The first is the Joint Common Foundation, which provides a technical platform. So now we have a technical platform. It'll become IOC (initial operating capability) here early in 2021, and then we will rapidly change it. We expect to do monthly updates of tools and capabilities to that platform.

But that platform now provides a technical basis for especially disadvantaged users who don't have access to data scientists, who don't have access to algorithms, who are not sure how to leverage their data. We can bring those – those folks to a place where now they can store their data. They might be able to leverage training data from some other program. We might be able to identify algorithms that can be repurposed and reused, you know, in similar problem sets. So there's that technical piece of it.
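
As a rough, hypothetical illustration of the kind of reuse described above, the Python sketch below fine-tunes a pretrained model for a new but similar problem set by retraining only its final layer; the backbone, class count, and data are placeholders, not anything specific to the JCF.

```python
# Hypothetical sketch: reusing an algorithm developed elsewhere by
# fine-tuning only its task head on a new program's smaller dataset.
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained backbone (stands in for a previously developed model).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the shared weights so only the new task head is trained.
for p in backbone.parameters():
    p.requires_grad = False

# Replace the classifier head for the new use case (e.g., 4 classes here).
backbone.fc = nn.Linear(backbone.fc.in_features, 4)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# ... train on the new program's labeled data as usual.
```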

There's also what I call the soft services side of it, which is that now we help them with AI testing and evaluation for verification and validation, those critical AI functions, and we help them with best practice in that regard. We help them with AI ethics and how to build an ethically grounded AI development program. And then we create an environment for sharing all of that through best practice.

If we do that, then in addition to the platform piece of this, we're building what we call our missions directorate now. We are re-crafting that to be much more aggressive in going out to find those problems, find those most compelling use cases across the department that we can then bring back home: help that user understand the problem, help that user get access to contracting vehicles, help that user get access to the technical platform, and do everything we can to facilitate a thousand AI sprouts across the department, so that it really starts to take hold and we start to see the impact on decision-making.

STAFF: Thanks, sir. The next question is coming from Khari Johnson of VentureBeat. Khari, if you're still on the line, go ahead, sir.
He's not on the line, so we're going to go to the next question, which is from Jasmine from National Defense. Jasmine, if you're still on the line, go ahead.

Q: Thank you, sir.
I do know defense companies have faced a volley of attacks from adversarial nations attempting to steal their IP (intellectual property) and get peeks at sensitive information. How is the JAIC keeping the important work it does with industry safe from these countries or bad actors who may want to steal and replicate it?

GEN. GROEN: Yeah, great question, Jasmine.
And, you know, we're reminded every day that the artificial intelligence space is a competitive space, and there are a lot of places where we compete. Probably the first thing I would throw out there is cybersecurity, and, you know, obviously we participate along with the rest of the department in our cybersecurity initiatives here in the department: to defend our networks, to defend our cloud architecture, to defend our algorithms.

But in addition to that, we have developed a number of cybersecurity tools that can help industry detect those threats. And then the third thing I'd throw in there is our effort now to secure our platform, so obviously we'll use defense-certified accessibility requirements. What we're focused on is building a trusted ecosystem, because one of the things that will make this powerful is our ability to share. So we have to be able to ascertain our data. We have to know its provenance. We have to know that the networks that we pass that data on are sound and secure. We have to create an environment where we can readily move, through, you know, containerization or some other method, developments or code that's done on one platform to another platform.
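
One small, concrete piece of that trusted ecosystem is knowing a dataset's provenance as it moves between platforms. The hypothetical Python sketch below records a dataset's cryptographic hash and re-verifies it after transfer; the file name is illustrative, and this is only one ingredient of the broader approach described above.

```python
# Hypothetical provenance check: record a dataset's SHA-256 digest when it
# is produced, then re-verify it after the data has moved between platforms.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# The file name below is purely illustrative.
recorded = sha256_of("training_data.parquet")   # logged with the dataset's metadata
# ... dataset is containerized / transferred to another platform ...
received = sha256_of("training_data.parquet")
assert received == recorded, "dataset changed in transit; provenance check failed"
```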

So doing all of this securely and safely is a primary demand signal on the Joint Common Foundation, and it is on all of our AI developments across the department, in the other platforms that are out there across the department. We are wide awake to the threat posed especially by foreign actors who have a proven track record of stealing intellectual property from wherever they can get their hands on it; we're going to try to provide an effective defense to ensure that doesn't happen.

STAFF: Okay, the next question is going to go out to Brandi Vincent from NextGov. Go ahead, Ma'am.

Q: Hi. Thank you so much for the call today.
My question is on the Joint Common Foundation. You mentioned these soft services that it'll have, and I read recently that there will be some included to keep users aware of ethical principles and other important considerations they should make when using AI in warfare.
Can you tell us a little bit more about how the platform will be fused with the Pentagon's ethical priorities? And from your own experience, why do you believe that that's important?

GEN. GROEN: Yeah, great question.
And I really think this is so important, and I'll tell you, I didn't always think that way. When I came into the JAIC job, I had my own epiphany about the role of an AI ethical foundation in everything that we do, and it just jumped right out at you. Many people might think, well, yeah, of course, you know, we do things ethically, so when we use AI we'll do that ethically as well.

But I think of it through the lens of the law of war: you know, the determination of military necessity, the limiting of unnecessary suffering. All of the principles of the law of war that drive our decision-making actually have a significant impact on the way that we organize and fight our force today, and you can see it. The fact that we have a very mature targeting doctrine and a targeting process that is full of checks and balances helps us to ensure that we are complying with the law of war.

This process is unprecedented, and it is thoroughly ingrained in the way we do things. It changes the way we do business in the targeting world. We believe that there's a similar approach for AI and ethical considerations. So when you think about the AI ethical principles, these things tell us how to build AI systems and then how to employ them responsibly.

So when we think about building AI, we want to make sure that our outcomes are traceable. We want to make sure that it's equitable. We want to make sure that our systems are reliable, and we do that through test and evaluation in a very rigorous way. But then we also want to ensure that as we employ our AI, we're doing it in ways that are responsible and that are governable. So we know that we're using an AI within the boundaries in which it was tested, for example. Or we use an AI in a manner where we can turn it off, or we can ask it in some cases, hey, how sure are you about that answer? What is your assessment of the quality of the answer you provide? And AI gives us the window to be able to do that.
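
As a rough, hypothetical illustration of asking a model how sure it is, the Python sketch below treats a classifier's top softmax probability as a confidence score and abstains below a threshold; the model, input, and threshold are placeholders, not a JAIC system.

```python
# Hypothetical sketch: report a confidence score with each prediction and
# defer to a human reviewer when the model is not sure enough.
import torch
import torch.nn.functional as F

def predict_with_confidence(model: torch.nn.Module,
                            x: torch.Tensor,
                            threshold: float = 0.9):
    """Return (label, confidence) for a single example, or (None, confidence) to abstain."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)      # class probabilities
        confidence, label = probs.max(dim=-1)    # top probability as a rough confidence
    if confidence.item() < threshold:
        return None, confidence.item()           # low confidence: route to a human
    return int(label.item()), confidence.item()
```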

Honestly, we and the nations that we're working with in our AI Partnership for Defense really are kind of breaking ground here in establishing that ethical foundation, and it will be just as important and just as impactful as the application of the law of war is on our targeting doctrine, for example. So having that expertise is really critical. There are not that many experts, ethicists, who really understand this topic and can communicate it in a way that helps designers design systems, helps testers test systems, and helps implementers implement them.

And so we have some of them in the JAIC; they're fantastic people and they punch way above their weight. We're really hoping to give access to their expertise across the department by linking it to the Joint Common Foundation. Thanks for the question. I think that's a really important one.
