AI And Digital Labour: The Experts Weigh In

From the point of view of students, AI is scary: everything they study in university classes is being redefined. From the point of view of professionals, AI is terrifying: everything they have built their careers on is being analysed, deconstructed, and automated. Or at least that is the impression most coverage of AI gives. Things felt different at the Proximus Centre in Brussels in February, where major industry figures gathered to discuss the state of the art of this fascinating technology.

Where It Started and How It Is Possible: The Basics of AI

The first to speak was Professor Hugues Bersini, Co-Director of the IRIDIA Lab and professor of AI at the Université Libre de Bruxelles (ULB). After the problem of defining AI itself, he argued, the first thing to understand is the distinction between two different architectures of AI.

The first (and probably the better-known one nowadays) is the statistical approach: a neural network that simply learns from experience. But the real question is: does such a system really ‘understand’? It is merely learning to map inputs to outputs in order to reach its goal. Why that mapping is the correct one is written nowhere – it is only implicitly embedded in the data.

A different approach is taken by the so-called ‘old-fashioned AIs’: programs whose instructions embody a kind of reasoning, and which try to make the best choice at each step in order to reach their goal. This is what Deep Blue did – the IBM program that defeated world chess champion Garry Kasparov.

An interesting field in which to contextualise these issues is that of self-driving cars. With the machine-learning approach, the inputs (what the camera sees) flow through a neural network whose goal is to replicate the driver’s outputs (actions on the steering wheel and pedals). The machine learns how to drive – but it has no real clue why it does what it does.

With the other approach, however, the decision tree involved is far too complicated to write down by hand, simply because of the sheer number of variables involved.

“At a certain level of complexity, ML is the only solution” Professor Hugues Bersini
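
To make the machine-learning approach described above more concrete, here is a minimal, hypothetical sketch of ‘behavioural cloning’: a small network trained to reproduce a driver’s recorded steering and pedal commands from camera frames. The network shape, data, and training settings are illustrative assumptions, not a description of any system mentioned by the speakers.

```python
import torch
import torch.nn as nn

# Hypothetical 'behavioural cloning' sketch: learn to copy a human driver's
# controls from camera frames. All shapes and data here are made up.

class DrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # two outputs: steering angle, throttle/brake

    def forward(self, frames):
        return self.head(self.features(frames))

model = DrivingNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for logged driving data: camera frames plus what the human driver
# actually did at the same instant.
frames = torch.rand(32, 3, 66, 200)
human_controls = torch.rand(32, 2)

for _ in range(10):  # toy training loop
    predicted = model(frames)
    loss = loss_fn(predicted, human_controls)  # imitate the human's outputs
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# The network ends up mapping inputs to outputs, but nothing in it states
# *why* those controls are correct - that stays implicit in the data.
```

Nothing in such a network encodes the rules of the road; it simply absorbs whatever regularities the logged data contains, which is exactly the point Bersini makes about statistical AI.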

Recently, the statistical, data-driven type of AI has improved enormously thanks to the sheer availability of data to feed it. Data mining has existed for years, but there has never been as much data as there is now: in the age of Big Data, images, sounds, and videos for analysis are abundant.
IBM’s Watson architecture, which tries to combine both types of AI (although in a more sophisticated way than described here), has already achieved some exciting results.

What AI Can and Can’t Do Better Than Humans

Other speakers gave insights on where the market is today, describing the products that are being used and developed now.

Bruno Schroder, CTO of Microsoft for Belux, explained what the corporation he works for is doing right now, while a more ‘boutique’ point of view was provided by Pierre-Yves Thomas, founder of Pythagoria.

The examples ranged from Uber, which uses facial recognition to confirm the identity of drivers (a perfect example of an easy task for humans but quite an achievement for software), to the chain JJ, which identifies and predicts consumers’ shopping baskets, to the many applications in medicine, such as skin cancer recognition. AI is increasing efficiency in business and beyond.

An example of the world getting better thanks to the smart use of AI is a programme that helps teachers in India identify children who are most likely to drop out. This lets them intervene with those students in time and fight the problem of children leaving school early – a huge issue in India today.
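
As a purely illustrative sketch of that kind of early-warning system (the features, data, and model here are hypothetical, not the actual Indian programme), a simple classifier might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical dropout-risk classifier. The features (attendance rate, grade
# average, distance to school) and the toy data are invented for illustration.
rng = np.random.default_rng(0)
X = rng.random((200, 3))  # columns: attendance, grades, distance (all normalised)
# Toy labelling rule: poor attendance combined with poor grades -> dropped out
y = ((X[:, 0] < 0.4) & (X[:, 1] < 0.5)).astype(int)

model = LogisticRegression().fit(X, y)

new_students = np.array([
    [0.30, 0.45, 0.80],  # weak attendance and grades, lives far from school
    [0.95, 0.90, 0.20],  # strong attendance and grades, lives nearby
])
risk = model.predict_proba(new_students)[:, 1]
print(risk)  # the higher the probability, the earlier a teacher should step in
```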

How are the recognition examples above possible? The underlying technology is the kind of neural network benchmarked in ImageNet’s Large Scale Visual Recognition Challenge, where systems now identify elements in pictures more accurately than humans themselves. With Cognitive Services, it is even possible to identify emotions – and since emotions are universal, this technology can be used everywhere in the world, unlike speech recognition AI. But things are changing in that field too: last November, a speech recognition system managed to beat humans as well.
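
For a sense of how accessible this kind of image recognition has become, here is a small sketch using an off-the-shelf ImageNet-trained model from torchvision. It is a generic stand-in, not the Microsoft services or the competition-winning systems mentioned above.

```python
import torch
from torchvision import models
from PIL import Image

# Load a publicly available ImageNet-trained classifier (illustrative stand-in).
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()  # the resizing/normalisation the model expects

def classify(path: str, top_k: int = 3):
    """Return the top-k (label, probability) guesses for one image file."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    top = torch.topk(probs, top_k)
    labels = weights.meta["categories"]
    return [(labels[i], round(float(p), 3)) for p, i in zip(top.values, top.indices)]

# Example: classify("photo.jpg") returns the model's best guesses; a real
# medical application such as skin cancer screening would instead use a model
# trained specifically for that task.
```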

Another very interesting field is finance – in particular, M&A. Imagine being able to feed the corporate regulations of two companies to a computer and watch it highlight every inconsistency. AI will be able to do this by checking the internal coherence of existing contracts and documents against the new clauses that are about to be introduced.
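
As a very rough illustration of the idea (not the reasoning system the speakers envisage), even a simple bag-of-words comparison can surface which existing clauses a new clause should be checked against; the clauses below are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy clause-matching sketch: flag, for each new clause, the most similar
# existing clause so a lawyer can check the pair for inconsistencies.
existing_clauses = [
    "The board of directors shall consist of seven members.",
    "Dividends are distributed annually, subject to board approval.",
    "No shareholder may transfer shares without prior written consent.",
]
new_clauses = [
    "The board of directors shall consist of nine members.",
    "Shares may be transferred freely on any regulated market.",
]

vectorizer = TfidfVectorizer().fit(existing_clauses + new_clauses)
existing_vecs = vectorizer.transform(existing_clauses)

for clause in new_clauses:
    scores = cosine_similarity(vectorizer.transform([clause]), existing_vecs)[0]
    best = scores.argmax()
    print(f"NEW: {clause}\n  closest existing clause ({scores[best]:.2f}): "
          f"{existing_clauses[best]}\n")
```

A production system would of course need to understand the meaning of the clauses, not just their vocabulary; the sketch only shows why pairing new text with existing documents is a tractable first step.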

It looks like the bots are getting better than humans in many fields, and the question that arises from this is significant: will they ‘steal’ jobs? It may be helpful to listen to another scientist who has written a lot about the topic, Andrew McAfee. His work on ‘The Great Decoupling’ deals with the transformation of society brought about by digital technologies and heralds “the dawn of what is called the Second Machine Age”. The Harvard Business Review piece can be consulted for more details on this matter, which would be too lengthy a discussion to include here.

The question itself is arguably the wrong one, as it starts from a flawed assumption: to ‘steal’ a job implies that the amount of work to be done on this planet is fixed. And even if there is no definitive answer to the question, it would be absurdly inefficient to hold back the development of these technologies.

“AI should be designed to assist humans, not to replace them” Bruno Schroder


Even if anything that is a process can be automated, there are tasks that require the incredible complexity of a human brain – and a neural network on the scale of a human brain could not be run even with all the power plants on the planet. So this old ‘technology’ still has its raison d’être.

Tasks involving emotion and creativity in general will remain human prerogatives for a long time yet. But with AI’s increasing role, some lines of work may be in for a shock. A doctor, for example – universally recognised as a highly skilled worker – essentially connects inputs (a patient’s symptoms) with an output (the diagnosis), while a nurse takes care of the emotions and general mental wellbeing of the patient. That is why Schroder postulates that, in the future, “a nurse will gain more than a doctor, because AI will be able to do the doctor’s job but not the nurse’s one.”

Outlook on the Future: Dangers and Opportunities

Hans-Christian Boos, Founder and CEO of Arago, provides consultancy on AI to a lot of new and old businesses, so he is certainly someone who can tell where this is all moving and what the future outlook looks like.

Companies often already run themselves like programmes, using people, time and money as the main resources to accomplish their tasks. Taken to its logical conclusion, future companies could be run by AI. But companies are made up of humans – beings with imperfect rationality – and it can sometimes be difficult for them to change and adapt to new social structures.

This is especially true for people who have been working at a company for years: being ‘replaced’ by a machine despite doing a great job – or believing they are doing a great job – may not be easy to accept. The strategy to resolve this is a sort of Trojan horse: to introduce AI, businesses can go step by step, automating one process first, then another, and so on – all to show people what the new technologies can achieve.

Businesses can already buy consultancy on the technical feasibility of automating certain activities, comparing it with the skills-to-cost ratio of the workers currently doing them and taking into account the current and expected regulatory framework.

Apart from the consultancy cost, AI is a cheap technology: it does not need a lot of hardware, runs in the cloud, and is often easy to implement. And when it comes to the hidden cost of not implementing it, this can be seen as a ‘do or die’ situation. Should a company in a sector that could benefit from AI fail to keep up, its competitors will adopt it sooner or later, and the latecomers will be out of business.

Final Thoughts

AI is not going to disrupt everything. But it would be blind to ignore the potential it has.

Business will change, and so will jobs and people’s everyday lives. But that is not necessarily a bad thing. Letting AI do what humans do today is not a bad idea, assuming it can be taught to do so in the right way – which it can, being already more precise, less error-prone and, in a certain way, more ethical than humans.

Once AI takes care of automatable work, humans will have time for emotional and creative work – or just for enjoyment: is it written anywhere that people have to work in order to survive? This new technology can let machines work while humans create.

