AI and American National Security - A Reflection

A reflection by Maggie O’Daniel

There is no denying it: artificial intelligence (AI) is a hot topic in today's society. So I was unsurprised that, as the clock struck 5:30pm, the back room of Fante's Coffee was packed with people. Dr. Adel Elmaghraby moderated the discussion and took his place at the head of the long mash-up of tables we had pushed together. Dr. Elmaghraby is a professor at the University of Louisville Speed School specializing in IT, with an extensive background in AI technology going back to the 1980s. The article we read for this session was titled "AI and American National Security." We discussed AI as both a boon and a curse to cybersecurity, how AI is being used in drone warfare, and the present zeitgeist surrounding generative AI.

Dr. Elmaghraby started by explaining that AI is not new, especially within the cybersecurity realm. The history of AI began in the 1950s, but the relatively rudimentary hardware of the time placed severe limitations on its development. The room-sized computers could not process quickly enough to make the more advanced ideas in AI possible, which stalled development until around the 1980s. Currently, AI is being used effectively to fend off cyberattacks from places like China and Russia. AI makes this possible because cyberattacks happen so quickly that a person on their own could not defend fast enough to stop viruses from downloading. This will be a constantly evolving issue given the rate at which AI is developing. Dr. Elmaghraby stated that in many respects, both China and much of Europe already have more advanced AI than the US. This prompted much discussion about why that is, the role of private development companies, and what it means for the US's continued standing as a world power.

Drone warfare was another aspect of AI within American national security that we explored. The idea of unmanned craft able to identify combatants and act without putting American lives at risk is certainly appealing. But is the reality that picture-perfect? I had some prior experience with this subject from my Anthropology of Violence class, where we read Hellfire from Paradise Ranch: On the Front Lines of Drone Warfare by Joseba Zulaika. I may have been a little too eager to share about this book and even brought it with me to the session; it's a really interesting read if you get the chance! The point of this part of the discussion was: yes, saving American lives is important, but at what cost? I described a case in Hellfire where a drone, using AI, identified a caravan of trucks carrying what appeared to be 21 armed combatants. The operators acted on this analysis and ended up killing 21 members of a 23-person family, including a number of children. The AI system had falsely identified the family as enemies. The group was rightly horrified and reached a consensus that AI's lack of accuracy, people's trust in AI, and the lack of meaningful human oversight were the main failures not only in this case study but also in how our society approaches AI overall.

Finally, we came to the real meat of the discussion. The majority of attendees wanted to discuss generative AI in its current iteration. It was interesting to see who brought up the pros and who brought up the cons of generative AI. Many of the individuals in business and private-sector jobs felt that generative AI sped up the work process. One gentleman in engineering exclaimed, "Why would I spend 2 days on a report that AI could do better in 2 minutes?" Meanwhile, a young woman who is currently a high school student stated, "My concern is that there are so many inaccuracies with AI generated content. If people rely too much on AI they will lose the skills that are required to do their jobs and AI can't be checked for accuracy if the people using it don't know how to do the work on their own first." Dr. Elmaghraby agreed with both of these points. AI is at the forefront of increasing productivity, but we have to be careful in how we use it. AI has aided in making breakthroughs in identifying cancer cells, but the doctor is still required both to teach the AI what to look for and to know what needs to be done with the results. I think the discussion can be summed up by my favorite quote of the evening, from Dr. Elmaghraby: "You cannot have AI without I." That is, you can't have artificial intelligence without the creators and users having some intelligence of their own about how to operate and utilize it effectively. Well, I can't speak for others, but I personally came away from this Great Decisions discussion with a little more "I" on AI myself.


About the Writer

Maggie O’Daniel is our Research Fellow. She is a Cultural Anthropology Master’s student at UofL. Maggie has worked with nonprofits for five years, and is exploring how WAC connects our region to the world.

Contact Maggie
