The definition adopted by the Defense Department is that artificial intelligence refers to the ability of machines to accomplish tasks that ordinarily require human intelligence. Well, there's kind of a moving-goalpost problem embedded in that definition, because the things we think of today as requiring human intelligence are very different from the things we thought of maybe 10 years ago as requiring human intelligence. And so as AI keeps getting better, the definition of what counts as AI keeps changing. We've had AI technology in the DOD for decades, but something did happen in this century, in the last 15 years or so, that caught a lot of people's attention. And that was a transition from deterministic systems to something called deep learning, where researchers were able to use artificial neural networks, with multiple layers of these artificial neurons, to make predictions on real-world data. But with deep learning, because we're stacking these artificial neurons into these deep neural networks, oftentimes when the algorithm produces a result, it might not be the result that the human intended.
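To make that deep-learning idea concrete, here is a minimal sketch of a network built from multiple stacked layers of artificial neurons producing a prediction on some data. It assumes Python and the PyTorch library, neither of which the speaker names, and the layer sizes are arbitrary.

import torch
from torch import nn

# A small "deep" network: several layers of artificial neurons stacked together.
model = nn.Sequential(
    nn.Linear(8, 16),   # first layer: 8 input features -> 16 neurons
    nn.ReLU(),          # nonlinearity between layers
    nn.Linear(16, 16),  # second hidden layer
    nn.ReLU(),
    nn.Linear(16, 1),   # output layer: a single prediction
)

# Stand-in for real-world data: a batch of 4 examples with 8 features each.
x = torch.randn(4, 8)

# The forward pass produces predictions; until the weights have been trained on
# data, the outputs are effectively arbitrary rather than what a human intended.
predictions = model(x)
print(predictions)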
Our AI systems need to be responsible, equitable, traceable, reliable, and governable. Responsible just means that an appropriate person, at every phase of the life cycle, needs to maintain responsibility for the system. Equitable means that we need to keep one eye on the biases, and try to prevent some of those biases on the basis of, say, race or gender, et cetera, that we've seen in industry. Traceable is leaning in the direction of what some people call explainable AI; that principle says that users need to know enough about what's happening under the hood to be able to make wise decisions about how to employ the system. They need to be reliable, in the sense that they've been tested and evaluated for the specific use case they're intended for. And then they need to be governable, meaning that we can impose the right bounds, the right guardrails, and that we have sort of a kill switch so that we can always turn these systems off.
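As a rough illustration of what "governable" can look like in software, here is a hypothetical sketch, not drawn from any actual DOD system, of a wrapper that clamps a model's output to bounds set by a responsible human and exposes a kill switch that shuts the system off.

class GovernedModel:
    """Hypothetical wrapper adding guardrails and a kill switch to a model."""

    def __init__(self, model, lower_bound, upper_bound):
        self.model = model
        self.lower_bound = lower_bound
        self.upper_bound = upper_bound
        self.enabled = True  # the "kill switch" state

    def kill(self):
        # The operator can always turn the system off.
        self.enabled = False

    def predict(self, x):
        if not self.enabled:
            raise RuntimeError("System has been disabled by its operator.")
        y = self.model(x)
        # Guardrail: clamp the output to the bounds set by a responsible human.
        return max(self.lower_bound, min(self.upper_bound, y))


# Example usage with a trivial stand-in model.
governed = GovernedModel(model=lambda x: 3 * x, lower_bound=-1.0, upper_bound=1.0)
print(governed.predict(0.1))  # about 0.3, within bounds
print(governed.predict(5.0))  # clamped to 1.0
governed.kill()
# governed.predict(0.1) would now raise RuntimeError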
The AI-ready by 2025 goal comes from the National Security Commission on Artificial Intelligence, and the way that they define it is kind of in two layers. First, it says that AI-ready means that people across the force have access to the workforce development required and the software tools needed to be able to, and this is the second layer, make AI ubiquitous in exercises and training and ultimately in combat. So there are two things that have to happen. First, we have to make sure that we develop the workforce and provide access to the tools so that, second, we can have this ubiquitous development of AI across the force. And so when we think about AI-ready within the Department of the Air Force, we're focused on that first level: how do we make sure that people have access to the workforce development and to the software tools to be able to start to become proficient at both developing AI and employing AI? One way to look at this is that there are high-risk use cases and low-risk use cases, and we should allow airmen and guardians to experiment to the max extent possible with low-risk use cases, so they can start to get the practice in, so they can start to get reps and sets and identify how these things might fail, what a failure mode looks like, how to recognize a hallucination, things like that. So eventually, I think the entire workforce will be upskilled in these tools, but only if we give them the ability to experiment at scale.

Sometimes people approach these questions as though the ethics side and the innovation side are at odds with one another, and I don't think that's the right way to think about it, especially within the DOD or the Department of the Air Force. Our mission is to accomplish the tasks that have been given to us by the American people, and we always want to do that as well as we can. And so when we start talking about responsible AI in the military context, responsible AI will make our models perform better, right? If we adopt these principles, if we make sure that we're keeping one eye on responsibility and on equity and on governability, et cetera, we're going to produce models that perform better than we would have if we hadn't adopted these responsible AI approaches. But there's a more profound answer, which is that behaving ethically is what the United States military does. That's who we are, and we've been committed to that for a very long time. The people who are developing these tools want to develop them in such a way that they conform to our professional ethos. And so there's really not the tension there that some people seem to think. I think generally the ethics side and the innovation side really are arm in arm, trying to solve the same problems: how do we make sure that we're producing models that achieve the effect they're supposed to achieve, and how do we make sure that everything we're doing conforms to the norms and ethical principles that we've established within the US military?