Cyber Warfare and Democracy in the Age of Artificial Intelligence

Season 25, Episode 1, Apr 02, 04:01 AM

Today, Steve is speaking with Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies and Dstl Ethics Fellow at the Alan Turing Institute. Mariarosaria brings her expertise as a philosopher to bear in this discussion of why and how we must develop agreed-upon ethical principles and governance for cyber warfare.

Key Takeaways:
1. As cyber attacks increase, international humanitarian law and rules of war require a conceptual shift.
2. To maintain competitive advantage while upholding their values, liberal democracies must move swiftly to develop and integrate regulation of emerging digital technologies and AI.
3. Many new technologies have a direct and harmful impact on the environment, so it’s imperative that any ethical AI be developed sustainably. 


Tune in to hear more about:
1. The digital revolution affects how we do things, how we think about our environment, and how we interact with it. (1:10)
2. Regardless of how individual countries may wield new digital capabilities, liberal democracies as such must endeavor tirelessly to develop digital systems and AI that are well considered, ethically sound, and non-discriminatory. (5:20)
3. New digital capabilities may carry CO2 emissions and other environmental impacts that will need to be recognized and accounted for as new technologies are rolled out. (10:03)


Standout Quotes:

1. “The way in which international humanitarian law works or just war theory works is that we tell you what kind of force, when, and how you can use it to regulate the conduct of states in war. Now, fast forward to 2007, cyber attacks against Estonia, and you have a different kind of war, where you have an aggressive behavior, but we're not using force anymore. How do you regulate this new phenomenon, if so far, we have regulated war by regulating force, but now this new type of war is not a force in itself or does not imply the use of force? So this is a conceptual shift. A concept which is not radically changing, but has acquired or identifies a new phenomenon which is new compared to what we used to do before.” - Mariarosaria Taddeo

2. “I joke with my students when they come up with this same objection, I say, well, you know, we didn't stop putting alarms and locking our doors because sooner or later, somebody will break into the house. It's the same principle. The risk is there, it’s present. They’re gonna do things faster in a more dangerous way, but if we give up on the regulations, then we might as well surrender immediately, right?” - Mariarosaria Taddeo

3. “LLMs, for example, large language models, ChatGPT for example, they consume a lot of the resources of our environment. We did with some of the students here of AI a few years ago a study where we showed that training just one round of ChatGPT-3 would produce as much CO2 as 49 cars in the US for a year. It’s a huge toll on the environment. So ethical AI means also sustainably developed.” - Mariarosaria Taddeo


Mentioned in this episode:

Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter

From the Information Security Forum, the leading authority on cyber, information security, and risk management.