Speech by Senior Minister and Coordinating Minister for National Security Teo Chee Hean at the Singapore Defence Technology Summit on 27 June 2019.
Good morning to all of you. A warm welcome to the Singapore Defence Technology Summit. I am happy to be once again among friends, some of whom have travelled long distances to be with us, some of whom I have known for more than 20 years. This year’s Summit is marked by the strong returning attendance of eminent figures representing industry, government and academia in defence technology, and is bolstered by new friends. I thank all of you for being with us.
Today, our societies are all grappling with the rapid advance of technology. We all want to be ahead of the game, and not be left behind. As such, the issues are often framed in terms of how to develop and deploy new technology more quickly. But in this headlong rush, we do also need to reflect on how well, and how wisely we are making use of technology, and whether we are prepared to deal with the collateral consequences of the proliferation of these new technologies.
I would like to pose three questions for discussion. One, how do we prepare for a world where machines are smarter than us? Two, how do we maintain security in an increasingly interconnected world? And three, while we focus on high-tech warfare, how do we avoid being blind-sided by asymmetrical, low-tech warfare, which strikes not just on the battlefield, but in hearts and minds?
The Age of Machines
Let me start with the first question: What happens in a world where machines are smarter than us? The more fundamental question of course is: Can machines ever become smarter than humans? There are different schools of thought on this. One school believes that Artificial Intelligence (AI) will allow machines not only to process huge volumes of data much faster than we can, but, because they can self-learn, to become smarter and smarter as they learn more, soon surpassing human capability and becoming hundreds, thousands, millions, perhaps billions of times smarter than humans. They are already outsmarting the best of us in Chess and Go. So why not in business, where we already have programmed trading of securities, or in defence?
Another school of thought believes that machines will never be smarter than humans. While machines can self-learn and are better than humans at learning and matching patterns, they could become “smarter” perhaps, but they might never be wiser than humans. Beyond calculating the “optimum” solution, will they know the difference between right and wrong? Will they be able to adapt to uncertainty, and take into account new factors and circumstances which will change not only the data going into the problem that they are analysing, but may change the nature of the problem and the outcome we want? These are issues which we are all grappling with, and here in Singapore too, as we seek to harness the potential of AI.
Do we depend on man or machine to call the shots, especially in a life and death situation? A current case in point is the design of stall protection systems for aircraft. We can design a machine to automatically “save” us, to “save” an aircraft and all on board because it can process indicators of an impending stall more quickly and react faster. This would be a real life-saver when the pilot is suffering from spatial disorientation, is not responding appropriately, or is incapacitated. But would we rather place our trust in a human pilot because instruments and the machine can go wrong, and the pilot can bring his greater overall awareness, experience and judgement to deal with the situation?
We are already in that future where a machine can assess and decide in a second what we may take hours or days to evaluate. AI-driven machines can only grow further and more rapidly in capability. How do we ensure that they continue to act in our best interest, and are aligned with our goals? An Israeli historian and philosopher, Yuval Noah Harari, wrote in his book “Homo Deus”, “what will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?”
I am glad to see that we are taking steps to consider these issues and make AI serve the public good better. In January this year, Singapore became the first country in Asia to launch a framework on how AI can be used ethically and responsibly. It provides detailed and implementable guidance, but is a “living document”, intended to evolve with feedback from the public, social and information scientists, industry and users. We will evolve as we go along, but the goal is to use AI for the public good.
Another example is the initiative by Blackstone founder Stephen Schwarzman, who recently donated GBP150 million (US$188 million) to Oxford University to help fund a new institute to study the ethics of Artificial Intelligence.
There are indeed technical measures which we can deploy to help mitigate this future. One example is “explainable AI” – designed to explain how or why particular decisions or actions were taken. The higher the stakes, the more important it is to explain. “Explainable AI” should also come with some safeguards to allow the human operator to override the system. If the aircraft stall protection system had provisions to explain to the pilot what it was automatically doing and why, together with a simple override, that might have helped the pilots to diagnose what was going on and avert the recent fatal crashes.
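To make the idea concrete, here is a minimal sketch of an automated envelope-protection loop that both explains each intervention and honours a simple human override. The threshold, function names and override mechanism are illustrative assumptions, not a description of any real avionics system.

```python
# Hypothetical sketch of "explainable" automation with a human override.
# All names and thresholds are illustrative assumptions.

STALL_AOA_DEG = 15.0  # assumed critical angle of attack for this sketch

def protection_step(aoa_deg, pilot_override):
    """Return (commanded pitch adjustment in degrees, human-readable explanation)."""
    if pilot_override:
        # The human operator can always take back control.
        return 0.0, "override engaged: automation standing down"
    if aoa_deg > STALL_AOA_DEG:
        adjust = -(aoa_deg - STALL_AOA_DEG)  # pitch down toward the limit
        return adjust, (f"AoA {aoa_deg:.1f} deg exceeds limit "
                        f"{STALL_AOA_DEG:.1f} deg; commanding "
                        f"{adjust:.1f} deg nose-down")
    return 0.0, "within normal envelope; no action"

cmd, why = protection_step(18.0, pilot_override=False)
print(why)  # the system states what it is doing and why
cmd, why = protection_step(18.0, pilot_override=True)
print(why)
```

The point of the sketch is the pairing: every automatic action comes with a reason the operator can inspect, and a simple, unambiguous way to override it.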
Ultimately, humans have to understand and trust the decisions of their computers, so that we will have the confidence to make the most of a more technologically advanced future.
Security in An Interconnected World
Second, how do we maintain security in an increasingly interconnected world? The world is becoming more interconnected – not only horizontally across populations, countries and continents, but also vertically across a wide range of daily activities and functions, and up and down our production and consumption chains. People everywhere are relying more and more on ‘smart devices’ in their daily lives. As this trend continues, it will become difficult to buy products, appliances or home electronics that are not connected to the internet in some way.
Greater connectivity offers society many benefits. However, with each new smart device on the network, we open up more potential vulnerabilities and a much larger surface area for attack. With the number of interconnected smart devices multiplying by a factor of, say, a hundred over today’s, we need to devise new ways to maintain the security and resilience of our systems. Otherwise, the system will become unworkable and unstable, and cannot be trusted to do the things that we want it to do.
We must assume that persistent attackers will find their way in, past our traditional boundary defences. To stay safe, we need systems and processes to protect us from bad actors who are already lurking inside, by seeking them out and acting against them.
And when “the system” expands to include many more remote smart devices, we will need new ways of authenticating and verifying the security of smart devices right out at the edge.
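One well-established building block for verifying devices at the edge is challenge-response authentication: the device proves it holds a secret without ever transmitting it. The sketch below is a minimal illustration using an HMAC over a fresh random challenge; the per-device key and provisioning model are assumptions for the example, not a specific deployed scheme.

```python
import hmac
import hashlib
import os

# Hypothetical per-device secret, assumed to be provisioned at manufacture
# and known only to the device and the verifying server.
DEVICE_KEY = b"example per-device secret (illustrative only)"

def device_respond(challenge: bytes) -> bytes:
    # Runs on the smart device: prove possession of the key
    # without ever sending the key itself over the network.
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(expected, response)

challenge = os.urandom(32)  # a fresh random nonce defeats replay attacks
assert server_verify(challenge, device_respond(challenge))
```

Because each challenge is a fresh random nonce, a recorded response cannot be replayed later, which matters when devices sit on networks an attacker may already have entered.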
Three days ago, I had a close look at technology being developed under our National Cybersecurity R&D Programme, which is coordinated by Singapore’s National Research Foundation, Cyber Security Agency, and other agencies. This is a consortium of experts from our universities and research centres, collaborating with top international counterparts – some of you are here today – and partnering several of our companies and state agencies which develop and deploy Operational Technology systems – the systems that monitor and control critical cyber-physical systems such as our water treatment plants and power infrastructure. They are deploying solutions now at the pilot level to protect the plants from external and internal attack using advanced design, augmented reality and AI-based algorithms – very interesting and at least I feel more confident that we have some solutions to this very difficult problem.
We will also need to depend more on data encryption, be it for data at rest, or data being transmitted, shared, or processed. Technologies such as homomorphic encryption will allow the secure processing of encrypted data without the need for decryption in the intermediate stages. But even as we constantly come up with new forms of defences, can we keep pace as attacks become ever more ubiquitous and sophisticated?
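The homomorphic property can be shown with a toy version of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so an untrusted party can compute on data it cannot read. The parameters below are deliberately tiny and insecure; this is an illustration of the principle, not a usable implementation.

```python
import math
import random

# Toy Paillier cryptosystem with tiny, insecure parameters (illustration only;
# real deployments use primes of 1024+ bits and a vetted library).
p, q = 17, 19
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1  # standard simple choice of generator

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# without any intermediate decryption.
c = (encrypt(20) * encrypt(22)) % n2
print(decrypt(c))  # 42
```

Fully homomorphic schemes extend this idea to both addition and multiplication, at considerable computational cost, which is why secure processing of encrypted data remains an active engineering challenge.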
I had an interesting conversation seven years ago with a knowledgeable and well-informed friend of mine from a very advanced country. He predicted that we would soon face the spectre of what he called “unbreakable encryption”. I am not sure whether he considered that a boon or bane – but I think in the context of what we were discussing, he meant the latter. Indeed, this has become an issue. In February 2016, US law enforcement agencies were initially unable to access, legally and technically, the encrypted contents of the mobile device of a terror suspect whom they were investigating quite urgently. They could not go around it. They were eventually offered a solution by a vendor, at a price of course.
The struggle between the unstoppable spear and the impenetrable shield continues.
Asymmetrical, Low-Tech Warfare

This brings me to my third point. As we focus on high-tech defence and systems, how do we avoid being blind-sided by asymmetrical, low-tech warfare? We may be interested in what is happening at the edge of science, but we may be more vulnerable to some of the simpler things in everyday life.
Military strategists have long recognised that the objective of war is to bend the opponent to your will. War is not just fought on the military battlefield, but also in the hearts and minds of countries and populations – not just soldiers but also civilians. Winning a battle on the battlefield might not assure victory in a war. If technological advantage, mass and firepower were the sole determinants of victory, then there would have been little doubt about the eventual outcomes of the wars of the late 20th Century, like in Vietnam or Afghanistan in the 1980s, or of the early part of the 21st Century, like the ongoing conflicts in Afghanistan and Iraq.
Ironically, as we become more technologically advanced, asymmetric or low-intensity threats, the traditional weapon of the weak, are on the rise. Some of the factors that I spoke of earlier make us more vulnerable, for example, that we are more networked and interconnected. These threats strike at the very heart of our cities, targeting our civilians and the civilian infrastructure and systems that keep modern society running. Non-state actors that advocate violent extremism have motivated distant individuals or groups to undertake attacks an ocean away, connected by today’s technology. Home-made bombs and lethal chemical devices are made from household items, kitchen knives will do when guns are not easily available, and vehicles are used to mow down pedestrians. There are no simple solutions when such everyday items are turned into weapons to strike at ordinary people going about their daily lives. Precisely because these everyday items and activities are so simple and so close to each of us, such threats can have a greater psychological impact throughout society.
We are also witnessing new forms of asymmetrical or unconventional warfare enabled by cyberspace. It is not just the major powers that have the ability to attack enterprise or cyber-physical systems. These methods allow small-scale actors to circumvent military defences to launch disproportionately damaging non-conventional attacks on much stronger adversaries.
We have also seen how psychological and information warfare has been scaled up and become more dangerous through the use of fake news and AI-driven deepfakes. This has enabled foreign interference in elections on an unprecedented scale by exploiting the power of social media.
In a physical world of low-intensity conflicts and a digital world of persistent information battle, how do we protect ourselves and deter adversaries, when the cost of initiating such unconventional war is not high? How do we effectively shape or regulate cyberspace to combat fake news and deepfakes?
Two months ago, the Singapore Parliament passed the Protection from Online Falsehoods and Manipulation Act. It will not solve all our problems, but will allow us to take a few important steps. It will allow us to require corrections to be promptly posted alongside a falsehood, so that readers can evaluate for themselves the falsehoods purveyed, together with the corrections. And in some more serious cases, it will allow us to order a take-down of the falsehood, to break the transmission chain spreading it. The Government is also considering legislation to address hostile information campaigns mounted by foreign actors that threaten our national security. This is a real problem facing all our countries as we try to have as open a society as possible. This very openness is exploited by others.
We are taking the issue of cyber defence seriously. We now have five pillars in our national Total Defence concept – Military, Civil, Economic, Social and Psychological defence. This year, we added a sixth dimension to our Total Defence concept, which is Digital defence. Apart from the Cyber Security Agency that protects our national Critical Information Infrastructure, we have also announced this year, the formation of the Defence Cyber Organisation to safeguard our defence ministry and armed forces networks.
While we come from diverse backgrounds, we share common goals. We want to harness the potential of technology and greater interconnectedness to do good, to improve the lives of our citizens, and to better protect our countries and peoples. But these self-same developments present new threats and dilemmas which we will have to collectively confront, particularly as technology continues to advance and pervade every aspect of our lives.
Through this conference, I trust that we will be able to share our experiences, develop new ideas, form new friendships, and spawn new practical proposals for cooperation. Let us work together to strengthen our measures for the ethical use of AI; strengthen our cyber defences in a more inter-connected world; and strengthen our national resilience against asymmetrical threats in the physical, cyber, social and psychological domains.
Thank you very much.