When it comes to the topic of Artificial Intelligence (AI), the world is quite clearly divided into two factions: those who strongly believe that AI is a good thing and those who vehemently deny this. There are valid arguments on both sides. The pros and cons are very contextual: who is developing it, for what application, in what timeframe, towards what end? There is no clear black and white. This also means that the opportunity exists to proactively identify, isolate and address the risks, as the field of AI develops further.
It helps to understand both sides of the debate, so it is worth taking a closer look at the naysayers' arguments. The most commonly debated apprehensions can be clubbed into three main categories:
The first, technology-driven unemployment, is probably the most tangible and widely acknowledged of all the risks highlighted in the context of AI. Technology and machines replacing humans for certain types of work is nothing new. Everyone knows of entire professions dwindling, and even disappearing, due to technology. The Industrial Revolution too had led to large-scale job losses.
However, despite the fears around jobs, it has, by and large, been conceded since the beginning of the Industrial Revolution that these technological advancements more than compensated for short-term job losses by creating new avenues, lowering prices, increasing wages and so on.
The narrative today though is very different. One should consider the following:
- A growing number of economists no longer subscribe to the belief that, over a longer term, technology has positive ramifications on overall employment.
- The loss of low to medium skill jobs has been significant in several fields in the last decade alone — and the troubling fact is that most of these jobs have been displaced not by AI but simply by automation; the impact of AI can be expected to be much larger.
- Multiple studies have predicted large-scale job losses due to technological advancements, including a 2016 UN report that states that 75% of jobs in the developing world are expected to be replaced by machines.
Unemployment, particularly at a large scale, is a very perilous thing, often resulting in widespread civil unrest. AI’s potential impact in this area, therefore, calls for very careful political, sociological and economic thinking, to counter it effectively.
The second apprehension, singularity, is one of those concepts one would have imagined seeing only in the pages of a futuristic Sci-Fi novel. Today, however, it is at least a theoretical possibility. In a nutshell, singularity refers to that point in human civilization when Artificial Intelligence reaches a tipping point beyond which it evolves into a super-intelligence that surpasses human cognitive powers, thereby potentially posing a threat to human existence as one knows it today.
While the idea around this explosion of machine intelligence is a very pertinent and widely discussed topic, unlike the case of technology-driven unemployment, the concept remains primarily theoretical. There is no clarity yet on two main questions around this line of thought:
- Can this tipping point ever actually be reached?
- If so, can it happen in the short to medium term?
Unlike the previous two points, which can be regarded as risks associated with the evolution of AI, the third apprehension, machine consciousness, is perhaps best described as an ethical conundrum. The idea deals with the possibility of implanting human-like consciousness into machines, taking them beyond the realm of ‘thinking’ into that of ‘feelings, emotions and beliefs’.
It is a complex topic, requiring a dive into an amalgamation of philosophy, cognitive science and neuroscience. ‘Consciousness’ itself can be interpreted in multiple ways, bringing together a plethora of attributes such as self-awareness, cause and effect in mental states, memory and experience. Bringing machines to a state of human-like consciousness would entail replicating all the activity that happens at a neural level in the human brain — by no means a trivial task.
If and when this were to be achieved, it would require a paradigm shift in the functioning of the world. Society today is a well-established, structured system that guides the way people interact with each other. That system would need a major redefinition to incorporate machines with consciousness co-existing with humans, and to define how the two relate to each other. It sounds far-fetched today, but such questions need pondering right now, while things are still in the ‘design’ phase, so to speak, so that the direction of AI and machine consciousness can still be influenced.
While all of the above are pertinent questions, I believe they don’t necessarily outweigh the advantages of AI. Of course, there is a need to address them systematically, control the path of AI development and minimise adverse impact. The greatest and most imminent risk is actually a fourth item, one not often considered when discussing the pitfalls of AI.
That fourth risk is the question of control. Due to the very nature of AI – it requires immense investments in technology and science – there are, realistically, only a handful of organisations (private or government) that can take AI into the mainstream, in a scalable manner and across a vast array of applications.
Add to this the fact that one of the most critical components of developing AI is the continuous feedback and learning that comes from having the largest amount of data to learn from. It is a self-reinforcing virtuous cycle — the more data one has, the better one’s AI platform(s), resulting in more users adopting those platform(s) and yielding even more data to learn from. Who would win that race? Again, only a handful of organisations possibly could. There is going to be very little room for small startups, however smart they might be, to compete at scale against these incumbents.
Given the mammoth portions of people’s lives that will likely be steered by AI-enabled machines, those who control that ‘intelligence’ will hold immense power over the rest of the world. That all-too-familiar phrase ‘with great power comes great responsibility’ will take on a whole new meaning – the organisations and/or individuals at the forefront of generally available AI applications would likely wield more power than the most despotic autocrats in history. This is a true and real hazard, aspects of which are already surfacing in discussions around things like privacy.
In conclusion, AI, like all major transformative events in human history, is certain to have wide-reaching ramifications. But with careful forethought, these can be addressed. In the short to medium term, the advantages of AI in enhancing lives will likely outweigh these risks. Any major conception that touches human lives in a broad manner, if not handled properly, can pose immense danger. The best analogy is religion — when not channelled appropriately, it probably poses a greater threat than any technological advancement ever could.