Ask the AI Expert – Part 1: Big Data, Brexit and Surveillance

Artificial Intelligence (AI) and Machine Learning (ML) are hot topics at the moment. Not just in finance but across all areas of work, politics and society, they seem pivotal to our future. This is the first of three Ask the AI Expert articles.

You can read Ask the AI Expert – Part 2: Quants, Agriculture and Entrepreneurs here and Ask the AI Expert – Part 3: Risk, Regulation and Integrated Services here.

We were keen to learn more about the opportunities that AI and ML will bring to the financial advice profession, and about the broader implications the application of this technology will have for our lives and society. I had the incredible opportunity to speak with Chris Cormack, Co-Founder and Managing Partner of The Quant Foundry, to explore the issue in more detail.

The Quant Foundry is a boutique consultancy offering bespoke quantitative solutions for all areas of financial risk including credit, market and operational risk. Chris Cormack is involved in all aspects of running the company and as a client engagement partner oversees multiple client engagements. Chris is also head of the methodology and models research team that builds both conventional stochastic models and Machine Learning and AI models.

Chris Cormack started his career as an academic, a lecturer in physics at Queen Mary University of London. He has an MA in Physics from Oxford, a PhD in Particle Physics from Liverpool and a Master’s in Mathematical Finance from Oxford. Chris has travelled the world doing research, with stints in California, at CERN in Geneva and in Japan. In 2004, he co-founded The Quant Foundry.

Part 1: Big Data, Brexit and Surveillance

AV: Many people will have heard the phrases, but could you give a quick description of what AI and machine learning are, and compare and contrast them, as they’re not the same thing?

CC: In layman’s terms, AI can be regarded as a set of mathematical models, used with or without robotics, that can perform an action usually associated with human cognition. Throughout the history of AI, people have worked on understanding how we recognise objects; how we perceive the space around us; how we interpret speech and understand language; and, possibly the biggest focus, how we make decisions.

These are all characteristics associated with AI, so it is quite a broad area. Over the past 60 years or so, different branches of AI have developed. Two main branches are machine learning, and symbolic and goal-led learning.

Machine learning is about using lots of data to start to classify information. The idea is that you can train the computer, with human assistance, to classify things. If you have a picture of a cat and a picture of a dog, for instance, you can label them as such, and over a couple of hundred instances of this you can train a classifier to separate images of cats from images of dogs. That is the essence of machine learning – using lots of data, with human assistance, to train something to separate and classify that information.
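
To make that concrete, here is a minimal sketch of that train-then-classify idea in Python. Everything in it is illustrative: the two made-up numeric features standing in for each image, the handful of labelled examples and the choice of scikit-learn’s LogisticRegression as the classifier are my assumptions for the sake of the example, not anything Chris described.

```python
# Minimal supervised-classification sketch (assumes scikit-learn).
# Real image work would derive features from pixels; here each image
# is reduced to two invented numbers so the idea stays visible.
from sklearn.linear_model import LogisticRegression

features = [
    [0.9, 0.2], [0.8, 0.3], [0.85, 0.25],  # human-labelled "cat" examples
    [0.3, 0.8], [0.2, 0.9], [0.25, 0.85],  # human-labelled "dog" examples
]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

# Training fits a boundary that separates the labelled examples.
classifier = LogisticRegression().fit(features, labels)

# A new, unlabelled example is assigned to whichever side it falls on.
print(classifier.predict([[0.7, 0.3]]))  # -> ['cat']
```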

Under the branch of machine learning there are things like speech recognition (natural language processing is the term that’s used) and also the concept of deep learning, which is an advancement of machine learning that uses complex neural networks. Those are layers and layers of little bits of memory, if you like, that are then trained with labels, essentially. These are used for things like autonomous driving, which is one example people are familiar with, and also to interpret the written or spoken word.
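
As a rough illustration of those “layers and layers”, here is a tiny feed-forward network sketched with PyTorch (my choice of framework; the interview doesn’t name one). The layer sizes are arbitrary, and a real deep-learning system would be far larger and trained on labelled data rather than just run forward once.

```python
# Sketch of a layered ("deep") network; assumes PyTorch is installed.
import torch
import torch.nn as nn

# Each Linear layer holds trainable weights – the "little bits of
# memory" – that get adjusted as the network is trained on labels.
model = nn.Sequential(
    nn.Linear(64, 32),  # input features -> first hidden layer
    nn.ReLU(),
    nn.Linear(32, 16),  # deeper layers capture more abstract patterns
    nn.ReLU(),
    nn.Linear(16, 2),   # two output scores, e.g. "cat" vs "dog"
)

x = torch.randn(1, 64)  # one example with 64 made-up input features
print(model(x))         # raw, untrained scores for the two classes
```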

Now going back to the other branch of AI: symbolic and goal-led learning involves computers trying to interpret the world and is a form of reinforcement learning. It’s like when a child does something great you celebrate and encourage it; if they do something bad, you help them understand that this wasn’t what you wanted.

Computers can be treated the same way with a reward system. The reward is a number: the bigger the number, the greater the reward and reinforcement for what they have done; the smaller the number, the less reinforcement. You can train quite complex systems using goal-based learning like this, and they can make quite complex decisions around risk characteristics or utility characteristics as part of reinforcement learning. Goal-led learning is used across a number of areas, whether it’s robotics or, to give a finance example, portfolio optimisation, or complex decision-making in areas like negotiations. It can be used to anticipate behaviours.
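
Here is a toy sketch of that reward idea, with everything invented for illustration: two possible actions, a numeric reward for each, and a learner that gradually shifts towards the action that pays the bigger number. Real goal-led systems (portfolio optimisation, robotics) deal with states and sequences of decisions, which this deliberately leaves out.

```python
# Minimal reward-based learning sketch: the bigger the number, the
# stronger the reinforcement. All values here are invented.
import random

rewards = {"good_action": 1.0, "bad_action": -1.0}
estimates = {"good_action": 0.0, "bad_action": 0.0}
learning_rate, exploration = 0.1, 0.2

for _ in range(500):
    # Mostly pick the best-known action, occasionally explore.
    if random.random() < exploration:
        action = random.choice(list(estimates))
    else:
        action = max(estimates, key=estimates.get)
    # The numeric reward nudges the value estimate up or down.
    estimates[action] += learning_rate * (rewards[action] - estimates[action])

print(estimates)  # the rewarded action ends up with the higher estimate
```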

One example people may be familiar with in reinforcement learning is AlphaGo, the Google DeepMind program that beat the world Go champion two years ago. [Go is a strategy board game, similar in some ways to chess, but generally accepted to be much more complex.] They now have a model that can outperform the best humans in that space.

AI is now encroaching into the space of human creativity, such as ad campaigns based on knowledge of imagery and music. It has even been adapted to create works of art. It’s a big, fast-changing field, and it’s having a huge impact on a lot of people’s lives and businesses.

AV: Could you tell us how AI and machine learning were involved in the Brexit campaign?

CC: This is a very relevant and interesting story. There are various judgments and interpretations of what happened, but essentially the Brexit campaign was micro-targeting individuals using AI. This was highlighted in the recent Benedict Cumberbatch film, Brexit: The Uncivil War.

The ‘Leave’ campaign wasn’t the typical broad-brush political engagement but a selective targeting of people who could be nudged in the ‘right’ direction. The campaign leveraged social media and its huge amounts of data – likes, dislikes and so on – and came up with a recommendation engine for who was likely to respond to an ad campaign and who was likely to be persuaded towards voting to leave.

They mined information on people’s demographics, location, perhaps previous voting intention, socio-economic status and likes and dislikes, and went a little further to gauge their biases. They showed them everything from the big red bus with £350m on the side to scenarios of what could happen if Turkey joined the EU – scare stories, essentially. People were targeted with dynamic ads that could be adjusted each time you clicked on one, so the algorithm anticipated your likely response and reinforced your likely shift from a neutral view to a more pro-Leave view.
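
The interview doesn’t say what algorithm sat behind those dynamic ads, but one standard way to adapt which ad gets shown based on clicks is a multi-armed bandit. Below is a deliberately simplified epsilon-greedy sketch in Python; the ad names echo the examples above, the click rates are invented, and nothing here claims to reproduce what the campaign actually ran.

```python
# Hypothetical click-adaptive ad selection (epsilon-greedy bandit).
# The "true" click rates are invented for the simulation.
import random

true_click_rates = {"bus_350m": 0.05, "turkey_eu": 0.08}
estimates = {ad: 0.0 for ad in true_click_rates}
shows = {ad: 0 for ad in true_click_rates}

for _ in range(10_000):
    # Usually show the ad with the best estimated click rate; sometimes explore.
    if random.random() < 0.1:
        ad = random.choice(list(estimates))
    else:
        ad = max(estimates, key=estimates.get)
    clicked = random.random() < true_click_rates[ad]  # simulated response
    shows[ad] += 1
    estimates[ad] += (clicked - estimates[ad]) / shows[ad]  # running average

print(shows)  # the better-performing ad ends up shown far more often
```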

This is the subtlety of AI. In this case it was able to anticipate the likely trend and nudge people in a given direction. Rather than passive, megaphone political propaganda, it was actually targeted. What is controversial is that people have now realised that AI, with a few bits of information about individuals, can start influencing not just broad sections of society or a given region of the country but people at an individual level. People find that insidious, that an algorithm could know more about them than they anticipated.

Another clever thing about that campaign was that it reached out to people who didn’t typically engage in politics. They looked at different segments in very different ways. It’s clever but obviously very controversial, and it has been a wake-up call on what AI can actually do and how it can be applied. There are some very significant concerns about its future impact on society. What’s good and important is that everyone engages in this debate. Like any technology, it’s how you apply it that’s important, not just the existence of the tech itself.

AV: While we’re on this topic, could you give us your views on some of the more worrying aspects of AI technology?

CC: One of the challenges that comes with AI is that once its ability to classify is on a par with human capabilities, there is a natural tendency for people or governments to notice this capability and to look at how they can employ it.

There are some genuinely scary things in terms of decision-making. For example, China’s behavioural monitoring system is quite an interesting concept. Anyone who’s been to China or seen various journalistic reports from there will know they have a huge surveillance system throughout their cities. Motorways are full of cameras that don’t just record vehicle number plates but also what the individuals are doing in their cars. There are numerous CCTV cameras dotted around which are not just being passively monitored by human beings; they are being monitored by AI systems that identify faces. Individuals identified committing a social misdemeanour or a crime are instantly tagged.

In China there is a social credit system, so every time you jaywalk across a road, for instance, a point gets deducted from your ‘account’, as it were. If you drop below a certain level of points, you will have certain access rights removed or suspended. You may not be able to get a loan, for example, just because you jaywalked across a road when you were late for a train. The challenge is giving up too much of our social infrastructure and decision-making to AI.

This kind of technology is also being aired as part of a crime prediction indicator in the UK. One thing that has come out of the use of AI in the Brexit campaign is people’s awareness of AI. How it is applied is something we should all be discussing, recognising how and where it is used and how people go about understanding the models.

One of the biggest things about any AI model, whether it is in a social, governmental or business context, is the explanation of why the model came up with the decisions it did. That’s a big area of AI and ML research and a big focus for anyone building these models: ensuring they can explain the decisions their models make. It’s something anyone should demand of any AI model, and it’s something we certainly focus on in our regulatory work. I always say it’s not just the technology, it’s the application of the technology that’s important too. It’s good that people’s awareness has increased. There are dangers, but there are great opportunities too.
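
To show what explaining a model’s decisions can look like in practice, here is one common, simple technique: permutation importance, which measures how much a model’s accuracy drops when each input is scrambled. The synthetic dataset and the scikit-learn model are my own illustrative choices, not a description of The Quant Foundry’s methods.

```python
# Explainability sketch: permutation importance (assumes scikit-learn).
# The dataset is synthetic, generated purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy falls:
# a big drop means the model's decisions lean heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```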

AV: This is very interesting, because what one person may see as absolutely acceptable, someone else may see as completely the opposite – and that’s a really interesting dynamic.

CC: It is, and it’s like the richness of any human interaction. There are always going to be areas of common ground, and there are always going to be controversies and misinterpretations. AI will only enhance these things, as it is basically built around human cognition. It’s an interesting conversation – hopefully not just between chatbots!

 

