Welcome to an exploration of human-machine teaming, a groundbreaking field that amplifies human intelligence with advanced machines to achieve new heights of efficiency, decision making, and operational excellence. Human-machine teaming, or HMT for short, is a powerful force reshaping our defense, where decisions that carry immense implications for global security and the safety of American forces often must be made in seconds.

So, what exactly is human-machine teaming? In essence, HMT is about collaboration between humans and machines, where each brings their unique strengths to the table. While machines excel at processing vast amounts of data quickly, humans are irreplaceable for their ethical judgment, creativity, and ability to operate in uncertain environments. Together, humans and machines create teams that are smarter and faster than either could be alone. In my simple mind, it's about marrying the strengths of humans with the strengths of machines in an increasingly complex world.
Humans have strengths and weaknesses. Machines have strengths and weaknesses, and I think that the marriage of the two allows us to operate at the level we need to, particularly in military operations, which are very complex by nature. The real tenets of human-machine teaming involve a lot of flexibility: the idea that humans and machines can work together jointly on really complex tasks, and that a machine can start to work proactively with a person. A human might leverage a machine to do some breadth of information analysis or acquisition, to let the machines filter information for us, or to build course-of-action analysis capabilities in a very rapid way. Machines are great at the compute, humans are great at decision making, and so when you pair the two together, greatness is a potential there.
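As an aside, the division of labor described here — the machine computes and filters at breadth, the human makes the final call — can be sketched in a few lines of Python. Everything below (the function names, the scoring rule, the toy data) is an illustrative assumption, not actual AFRL software:

```python
import heapq

def machine_filter(courses_of_action, score, k=3):
    """Machine side: rapidly score a large set of candidate courses
    of action and surface only the top-k for human review."""
    return heapq.nlargest(k, courses_of_action, key=score)

def human_decide(shortlist, judgment):
    """Human side: apply contextual and ethical judgment to the
    machine's shortlist and make the final decision."""
    return judgment(shortlist)

# Toy usage: thousands of options reduced to a human-sized decision.
candidates = [{"id": i, "effect": i % 7, "risk": i % 5} for i in range(3000)]
shortlist = machine_filter(candidates, score=lambda c: c["effect"] - c["risk"])
choice = human_decide(shortlist, judgment=lambda s: s[0])
```

The point of the sketch is the shape of the pairing: the machine touches all 3,000 candidates, while the human only ever has to weigh three.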
So the way I think about human-machine teaming technology is really that the machine is a tool, and the question is what we want it to do, and with what information we want it to do it. The goal of the tool is to help us have a relationship with knowledge at speed and scale. In practical terms, this means developing systems that augment human decision making by giving machines the autonomy to perform repetitive or computationally intensive tasks, freeing up human operators to focus on strategy, ethics, and creative problem solving. The Air Force Research Laboratory plays a central role in advancing human-machine teaming technologies. As the world becomes increasingly complicated, with threats that are volatile, uncertain, complex, and ambiguous, or what's often referred to as VUCA, the need for HMT is more critical than ever. AFRL aims to create systems that allow military personnel to make faster, more informed decisions.
We've always at AFRL had a history of teaming with machines, whether those are the planes we put in the air, the unmanned vehicles, or the computers and other types of sensor technology. The reality is we always need to have an advantage when it comes to our relationship with information and knowledge. So at AFRL we define autonomy as giving some computer agent a delegated and bounded authority to take actions in a space. A lot of our work is how we make sure that humans can properly communicate the delegated authority that's given to that agent, and how we build systems that are watchdogs that can monitor what the AI is going to do, to make sure that the bounds the humans give it are always held. And of course, because we're an applied research lab, we develop technologies that directly support human-machine teaming applications across the DAF.
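That idea of "delegated and bounded authority" with a watchdog monitor can be made concrete in a minimal sketch. The action names, bounds, and checks below are invented for illustration; this is not a real control system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Authority:
    """Human-delegated, bounded authority for an agent (illustrative)."""
    allowed_actions: frozenset
    max_range_km: float

def watchdog(authority, action, range_km):
    """Runtime monitor: block any agent action that falls outside the
    bounds the human delegated, regardless of what the agent proposes."""
    if action not in authority.allowed_actions:
        return False   # never delegated this kind of action
    if range_km > authority.max_range_km:
        return False   # exceeds the bounded operating envelope
    return True

bounds = Authority(allowed_actions=frozenset({"observe", "track"}),
                   max_range_km=50.0)
assert watchdog(bounds, "track", 30.0)       # within delegated bounds
assert not watchdog(bounds, "engage", 10.0)  # authority never delegated
assert not watchdog(bounds, "track", 80.0)   # bound on range violated
```

The design point is that the watchdog sits outside the agent: whatever the agent proposes, nothing beyond the human-set bounds gets through.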
And so that involves working with programs like Collaborative Combat Aircraft, with ISR systems, intelligence systems, even maintenance-related systems. We build actual technologies that support these programs and these needs across the DAF, and of course across the Space Force as well. So AFRL is investing in human-machine teaming because we think this is an area that has the potential to deliver transformative technologies for future warfighting missions in all domains. We hope that these technologies will expand the scope and capabilities of military operations by accelerating the decision cycle of the human operator, while at the same time reducing risk to the human by allowing them to remove themselves from the highest-risk aspects of the mission. Human-machine teaming will revolutionize military operations.
Imagine a future battlefield where unmanned aircraft collaborate seamlessly with human pilots, or where autonomous systems help analyze enormous data streams to provide commanders with actionable intelligence in real time. We live in an increasingly technology-rich world. If you look at the speculation about where military operations will go in the future, and understand that we may need to shift our thinking from a few exquisite, very costly things to smaller, cheaper things, the battlefield is going to be littered with more and more machines, and therefore the ability of humans to interface with those machines in a synergistic and coherent way is going to become increasingly important. Machines are great at compute. Machines can process things that humans can't fathom in terms of the speed and breadth of information analysis they can do. So why not try to leverage that as humans? We're great at decision making, we're great at pattern recognition.
We're great at creativity. So the question is, how can we get machines to play into that whole process? Understand what the humans do, and use the machines as a means to accelerate what they're doing already. Human-machine teaming is important because without it, we will lose. Without using that augmentation, that relationship with knowledge and with the machines that we have, at speed and scale, we will no longer maintain dominance in the future battlespace or battlefield. So it is not an "if" we do human-machine teaming or use artificial intelligence. It is a must, and we must use it better, faster, with more curiosity, more optionality, and more advantage than the adversary. It's incumbent upon us to understand these new technologies, especially the AI that is coming out, and to adopt and integrate them with our warfighting systems to make them better. The reason is that our adversaries are also working on it, and they are developing technologies just as fast as we can. For every bomb that we make, they have an equivalent bomb, for every missile, every airplane. So our adversaries are achieving technological parity. What, then, will be the differentiator in a future war? I think it will be the ability of the warfighters who can better integrate with the machines to develop a truly unique, symbiotic human-machine teaming system that will be better than any weapon or any single warfighter, and that can give us a decisive advantage over our adversaries. That is why it is so important to invest in, adapt, and adopt these technologies now. But human-machine teaming isn't without its challenges. One of the most crucial factors for the success of HMT is trust. How can we ensure that operators trust the machines and can confidently use this capability?
Just as you need to have trust in your wingman or partner, we need to build the same level of trust with machines if we want them to help us win. One of our really big challenges in human-AI teaming is making sure that we appropriately calibrate trust to the capabilities of the autonomy. We don't want a human operator to overtrust AI and use it in situations it wasn't designed for, but we also don't want them to undertrust it and not use it when it was better suited to the task. We're really looking for trust calibration, where operators appropriately trust it in the situations it was designed for. So first, let's think about how we even define trust. When we're thinking about trust in a human-autonomy or human-AI team, we mean the human's willingness to accept vulnerability in situations characterized by uncertainty, because they believe that the autonomy is going to help them achieve the mission that they need.
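One crude way to make that calibration idea concrete is to compare how often an operator relies on the automation with how often the automation is actually correct in that context. The function, the threshold, and the numbers below are invented for illustration only:

```python
def trust_calibration(reliance_rate, automation_reliability, tolerance=0.1):
    """Crude calibration check (illustrative): compare how often the
    operator relies on the automation (0..1) with how often it is
    actually correct in this context (0..1)."""
    gap = reliance_rate - automation_reliability
    if gap > tolerance:
        return "overtrust"    # relied on beyond demonstrated capability
    if gap < -tolerance:
        return "undertrust"   # capable automation left unused
    return "calibrated"

# Operator relies 95% of the time on automation right 70% of the time.
assert trust_calibration(0.95, 0.70) == "overtrust"
assert trust_calibration(0.40, 0.70) == "undertrust"
assert trust_calibration(0.72, 0.70) == "calibrated"
```

Real trust calibration is of course measured per situation and mission context, not as a single scalar; the sketch only shows the over/under asymmetry described above.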
Yeah, I mean, I think there's this concern that at some point things are going to go what I would call sentient. I think that's the concern, and I think that's a misplaced conception based on my understanding of the technology. I certainly understand the fear, but we are going to have the right risk posture for anything we develop, with the uncertainty characterized by test and engineering. It's going to be underpinned. We've really tried to push the idea of responsible use of AI. Across society, you've probably seen concerns about artificial intelligence and things like bias. In our view within the Air Force, we're trying to push the idea that machines don't really have ethics. When they can understand context, we might have a different conversation, but at this point, where the state of the art is, they can't really understand context all that well. And so, in our view, it's hard for them to ingest ethics, right? So the best we can do is push for responsible use of AI, while building that trust factor into the operators. And I think this is where the lab can help, you know, demystify things. I think it's going to be very, very important, because if our folks don't trust it enough to utilize it in an effective manner on a timescale of relevance, it really doesn't matter. Human-centered design is another critical element in the development of HMT technologies. For these systems to be successful, they must be built with the human user in mind from the very beginning. So how important is the human in the human-machine teaming equation? Of course, it's critical. And this is a point that is probably easy to forget if you work in isolation outside of a multidisciplinary space: if you're someone building hardware or building software, you're just trying to put out the best solution that meets a particular need, right? But the human is critical, because at least at some point, the human is going to have to use that technology.
So I think there are often misconceptions about human-machine teaming. The first is that the machine somehow matters more. It doesn't, right? Again, the machine just does what we tell it to do, with what we tell it to do it with. So I think the most important thing people should think about when we think about human-machine teaming is how we want to show up. What is the humanity we're bringing to it? And more importantly, are we coming in with that learning mentality, that consistent learning opportunity, and are we adapting as we learn? I think that's going to be the most interesting and most important part of human-machine teaming in the future: really, how do we show up. In addition, we're taking a lot of lessons learned from the Automatic Collision Avoidance Technology program, which had three high-level requirements. The first was to do no harm, the second was to not interfere, and the third was to prevent collisions.
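Those three requirements form a strict priority ordering, which a toy gate can illustrate: an intervention is acceptable only if it harms nothing and interferes with nothing before it is credited with preventing a collision. The predicates below are placeholders, not the certified flight logic:

```python
def intervention_allowed(causes_harm, interferes_with_pilot, prevents_collision):
    """Apply the three stated requirements in priority order:
    1. do no harm; 2. do not interfere; 3. prevent collisions."""
    if causes_harm:
        return False               # requirement 1 dominates everything
    if interferes_with_pilot:
        return False               # requirement 2: stay out of the human's way
    return prevents_collision      # requirement 3: only then intervene

assert intervention_allowed(False, False, True)       # clean, useful intervention
assert not intervention_allowed(False, True, True)    # would fight the pilot: blocked
assert not intervention_allowed(True, False, True)    # would do harm: blocked
```

The ordering is the whole point: a system that prevented collisions by overriding the pilot would fail the higher-priority requirements.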
So our big lesson learned from that is that there's a lot more complexity to developing these human-AI teams than just getting the mission done. In this case, for Auto GCAS and Auto ACA, the mission was preventing collisions. We need to make sure that in introducing that capability, we're not doing damage to the larger system, and also that we're not interfering with the human when they're trying to do something. I actually think that in human-machine teaming, the human is the most important element, but more importantly, also the most unpredictable. If you think about it, if I'm going to do a query, use a particular technology, or design a relationship with a machine, I'm still inherently influencing it with my experience and expertise. I like to joke that my husband wishes I had a predictable operating system, right? That he could put the same thing in and get the same thing out.
And so oftentimes people will talk about AI, and especially generative AI, and say, well, how do we get the exact same, repeatable, concrete answer? The reality is that sometimes you won't, because the knowledge base is changing, but more than that, you as a human change every day. If I have an experience, for example, or we're watching an experience unfold on a battlefield far away, we are learning every day. Our strategy changes every day. Our information flow changes every day. And so, in fact, that learning mentality, that curiosity, that desire to adapt, to curate at speed, is really important. For the foreseeable future, I don't really imagine a time in which humans are going to be taken out of the process, especially in the Department of Defense. Humans and AI are always going to be teaming together in some way, shape, or form.
Humans are completely critical to the decision making and to enforcing our norms and our belief systems as a country in these technologies. I would repeat what General Goldfein once mentioned in one of his speeches: that future wars will not be wars of attrition, but wars of cognition. It's true, right? Our adversaries are achieving almost technological parity with us. So it will come down to the capability of our warfighters, who will many times have to overcome the negative effects of stress, of fatigue, of lack of sleep, and still be asked to perform at their optimum level in the fog of war. That's why investing in technologies that really integrate the human with the machine, to extend their capabilities as well as improve the efficacy of the machines in turn, is why the human element, keeping the human in the loop, is so important and cannot be ignored. Because it's not machines but humans that fight wars.
At the heart of human-machine teaming is the concept of decision superiority: being able to make the right decision faster than an adversary. In a future where decisions need to be made in milliseconds, the combination of human insight and machine speed will be key. The OODA loop was created by Colonel John Boyd, and in essence it's a framework familiar to a lot of people in the Air Force, the Space Force, and really the DOD generally. It stands for observe, orient, decide, and act. If you think about it, that's a very natural process we go through to understand what to do next. Decision superiority is about getting ahead of our adversaries' OODA loops: faster decision making, accelerating our Guardians' and Airmen's decision-making processes without sacrificing accuracy. We talk about the speed of relevance, and the speed of relevance is going to depend on what that particular mission set is. It could be seconds; it could be hours.
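Getting ahead of an adversary's OODA loop can be illustrated with a toy timing comparison: whoever completes observe-orient-decide-act first acts first. The latencies below are invented numbers; the point they sketch is that machine-aided orient and decide phases shrink the whole loop:

```python
def loop_time(latencies):
    """Total time for one pass around the OODA loop:
    observe -> orient -> decide -> act."""
    return sum(latencies[phase] for phase in ("observe", "orient", "decide", "act"))

# Hypothetical per-phase times in seconds. "blue" compresses orient and
# decide with machine assistance; "red" works unaided.
blue = {"observe": 1.0, "orient": 0.5, "decide": 0.2, "act": 1.0}
red  = {"observe": 1.0, "orient": 2.0, "decide": 1.5, "act": 1.0}

# Decision superiority in this sketch: blue closes its loop first.
assert loop_time(blue) < loop_time(red)
```

Note that only the orient and decide phases differ between the two sides here; that mirrors the idea in the transcript that the machine's contribution is compute and filtering, not the act of deciding itself.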
We view human-machine teaming as one of the key enablers of decision superiority, because we recognize that machines bring to the table computational power that we just can't get as humans. And so we really have to start thinking about time like a weapons platform, right? Oftentimes, when we think about human-machine teaming or AI, what we're really doing is wormholing the OODA loop. We're saying: how can I have all of those options, and how can I go from orientation to being able to make a decision faster? Looking ahead, the future of human-machine teaming is bright. We're moving toward systems that can learn and adapt alongside humans, to create teams that can operate in highly complex environments with minimal supervision. Today we are one to one, simplistically: one human to one jet, one human to one spacecraft. In the future, it's going to be one to ten, one to a hundred, one to a thousand.
And human-machine teaming underpins our being able to operate effectively at speed, to make decisions on a timescale of relevance. Absolutely, it underpins all of that. One of the things I often think about when it comes to optionality is how optionality exists in our world now. If we were playing tic-tac-toe, you and I are each considering about 9 moves we can make. If we were playing chess, it's about 36. If we were playing Go, we'd have about 360 different moves we could make. The more moves, the more complex and the harder the game, but again, we're playing a game, and in some cases that game really matters when you play it at speed. So when I look at a situation where we may have an adversarial context, the question for me is: do I want to be able to process 9 potential outcomes or moves, or do I want to be able to process 360? Quite frankly, I want to be able to process 3,000, right?
Because by being able to 475 00:18:05,339 --> 00:18:07,660 look at all 3000 , what I'm really 476 00:18:07,660 --> 00:18:09,859 doing is in essence saying , where is 477 00:18:09,859 --> 00:18:12,189 my advantage ? And if I can do that 478 00:18:12,322 --> 00:18:14,661 rapidly , faster than my adversary , it 479 00:18:14,661 --> 00:18:17,241 also means that I can move faster . I 480 00:18:17,241 --> 00:18:19,391 can maintain first-mover dominance , 481 00:18:19,562 --> 00:18:21,442 and that's critical . And that 482 00:18:21,442 --> 00:18:23,553 optionality also hits home for me and 483 00:18:23,553 --> 00:18:25,720 the idea of human-machine teaming in a 484 00:18:25,720 --> 00:18:27,722 very powerful way , and that is : the 485 00:18:27,722 --> 00:18:30,521 best options also mean potentially the 486 00:18:30,521 --> 00:18:33,842 lowest loss of life . The future is all 487 00:18:33,842 --> 00:18:36,511 about greater synchrony , um , greater 488 00:18:37,052 --> 00:18:39,904 longitudinal learning over time . So at 489 00:18:39,904 --> 00:18:41,904 what point do we get tools that can 490 00:18:41,904 --> 00:18:44,071 start to learn our preferences , where we 491 00:18:44,071 --> 00:18:46,237 can maybe set some of those thresholds 492 00:18:46,237 --> 00:18:48,348 for the learning processes themselves 493 00:18:48,348 --> 00:18:50,348 and get them better tuned to us and 494 00:18:50,348 --> 00:18:52,460 more in sync with us as individuals , 495 00:18:52,460 --> 00:18:54,293 right ? Do more work in terms of 496 00:18:54,293 --> 00:18:56,460 getting those interfaces right so that 497 00:18:56,460 --> 00:18:58,404 we can set those bounded delegated 498 00:18:58,404 --> 00:19:00,460 authorities in a flexible fashion so 499 00:19:00,460 --> 00:19:02,515 that the machines are working for us 500 00:19:02,515 --> 00:19:04,460 and not against us . The future of 501 00:19:04,460 --> 00:19:06,682 human 
machine teaming is one where 502 00:19:06,682 --> 00:19:08,737 the role of the machine is complementary 503 00:19:08,737 --> 00:19:11,071 to , not redundant with , that of the human . 504 00:19:11,086 --> 00:19:13,676 So in an ideal human machine team , 505 00:19:14,586 --> 00:19:17,005 the technology would combine the 506 00:19:17,005 --> 00:19:19,395 strengths of the human , such as reasoning , 507 00:19:19,645 --> 00:19:22,845 perception , and contextual adaptation , with 508 00:19:22,845 --> 00:19:24,901 the strengths of the machine , which are 509 00:19:24,901 --> 00:19:27,005 fast processing and unbiased decision 510 00:19:27,005 --> 00:19:28,894 making , while at the same time 511 00:19:29,228 --> 00:19:31,547 overcoming the weaknesses of the human , 512 00:19:31,628 --> 00:19:34,427 such as fatigue or bias . And so in the 513 00:19:34,427 --> 00:19:36,628 future I see the human machine team as 514 00:19:36,628 --> 00:19:39,108 the human and the machine coming together in a truly 515 00:19:39,108 --> 00:19:42,787 symbiotic fashion , where the human 516 00:19:42,787 --> 00:19:45,828 learns to trust the machine , its 517 00:19:45,828 --> 00:19:48,147 intelligence and decision-making 518 00:19:48,147 --> 00:19:50,897 capabilities , and the machine in turn 519 00:19:51,287 --> 00:19:54,890 learns how to understand and 520 00:19:54,890 --> 00:19:57,300 respond to the human state , whether it's 521 00:19:57,300 --> 00:20:00,890 physical or cognitive , and to understand 522 00:20:01,079 --> 00:20:03,520 the context and adapt to it , but 523 00:20:03,520 --> 00:20:05,800 perhaps more importantly , to understand 524 00:20:05,800 --> 00:20:08,839 our values . Human machine teaming is 525 00:20:08,839 --> 00:20:11,189 not just a technological innovation , 526 00:20:11,479 --> 00:20:13,800 it's a revolution in how we think about 527 00:20:13,800 --> 00:20:16,790 human potential and machine capability . 
528 00:20:17,319 --> 00:20:19,520 As we push forward into this exciting 529 00:20:19,520 --> 00:20:22,060 frontier , the collaboration between 530 00:20:22,060 --> 00:20:24,420 humans and machines will unlock new 531 00:20:24,420 --> 00:20:27,400 possibilities in every sector , from 532 00:20:27,400 --> 00:20:30,250 defense to healthcare to everyday life . 533 00:20:30,579 --> 00:20:32,939 Stay tuned , because the future is 534 00:20:32,939 --> 00:20:35,819 coming fast , and it's powered by human 535 00:20:35,819 --> 00:20:36,819 machine teaming .