Rob McCargow: Director of AI
AiLab interviewed Rob McCargow, the Director of AI from PwC in the UK. We met up at the PwC offices in London, UK to hear Rob's insights and find out about his work in the Artificial Intelligence space.
This interview took place in Jan 2018 with Dr. John Flackett.
[Ed: since this interview, Rob has been promoted from AI Programme Leader to Director of AI at PwC].
AiLab: Rob, could you tell me about yourself and your role at PwC?
RM: Yes, I’m Rob McCargow and I’m the AI Programme Leader for PwC in the UK. I have a number of areas I work on within the AI field. The first thing I do is spend time with our internal workforce to bring them up to speed on the implications of AI and the opportunities that they can use to help our clients. I also spend time with our clients across all sectors, as well as with external stakeholders such as policymakers, academics and the startup and tech vendor scene.
The opportunity for AI is significant and because we work with so many organisations globally, there’s huge interest and therefore having an approach to explain the technology in a non-jargonistic way is absolutely critical to building trust in the technology. It’s important to set the scene for how businesses can embrace this technology in a way that’s responsible, mitigates the risks and allows you to accelerate the innovation.
AiLab: Cool! So how did you get into AI?
RM: It was a fairly circuitous journey… I actually started as a Microbiologist and my career has always been a journey of constantly reinventing myself to ward off the threat of automation! After University, I first worked in a box-making factory and then in shot-blasting and galvanising. Following this, I spent a long time in HR and recruiting and had a stint trying to save the world in West Africa during the Ebola outbreak for an NGO – that was quite an eye-opening experience. I then joined PwC about 3 years ago in an operations leadership role, which was a useful scene-setter. The opportunity came up to work on a major AI project and spearhead it from a change management perspective to help win the trust of the internal stakeholders. Following that, I added the external-facing remit to share the lessons learnt with our clients.
AiLab: I know that, like me, you see the AI hype at the moment. Is there a story that stands out particularly on that front?
RM: I think there have been some remarkable high-profile breakthroughs in the last year or two, such as those from the UK’s own DeepMind – acquired by Google, of course – with AlphaGo and AlphaGo Zero. Also, more recently, the story about how they deployed their technology to learn chess in a matter of hours and become more or less invincible. There are also great opportunities happening in healthcare, including some British-based organisations like Babylon, who are doing some interesting things in healthcare diagnosis. In the field of drug discovery, an organisation called BenevolentAI is doing some amazing things in augmenting the discovery of novel drug compounds, which could be really promising.
However, for every positive example there are some ill-conceived ones as well. In the UK a new AI startup is founded every five days, which is a positive sign of the growth potential. However, how many of these are genuinely deep-tech AI companies? I suspect that some could be more branding exercises to boost their valuations, which can fuel overpromising in the market.
AiLab: One of the reasons we are in London is to learn more about how the UK is trying to be a global leader in AI. How do you think that’s going – what are they doing well and what can be improved at the moment on that front?
RM: I think the UK has got a number of very clear advantages in the AI field. We have some remarkably strong higher education institutions across the country, which stand up to scrutiny against the others globally. We’ve got a good innovation scene here and a number of very high profile startups. We also have an increasing awareness from the political class of what an opportunity AI could offer whilst also being mindful of the risks it poses. When you see how the UK shapes up versus the other major global economies – the US and China in particular, but also Australia, Canada, the UAE and Singapore, etc. – we have to know what aspect of AI we can be leaders in.
There’s a narrative at the moment about the UK becoming ‘the’ global leader of AI. The truth is, if you look at the sheer amount of investment in Silicon Valley, the entire technology investment in the whole of Europe equates to only 8% of the investment in the US – so we can’t compete on VC investment. The other side of it is around the volume of data that countries like China have with 800 million people online. That’s a huge opportunity with enormous data sets to perfect Machine Learning techniques, so we can’t compete in terms of scale of data.
Where I think the UK has got an opportunity in AI, is to set the standards and lead in how we embrace this technology responsibly, how we establish the ethical principles and aim to lead the world on some of those approaches, driving innovation, driving standards and driving a sustainable approach to adopting the tech.
AiLab: I know you’re very interested in the ethical side and responsible AI. What are your main concerns around AI at the moment?
RM: I think there is a lack of public awareness of the effects of the pervasive power of this technology on us as consumers. There are a number of major risks arising for organisations at the moment. In particular, one arriving in a matter of months (May 2018) within the EU is the General Data Protection Regulation (GDPR). This will have profound implications not just for our clients in the EU but for any dealing with EU citizens and, if not taken seriously, could stifle innovation. Organisations need to be very clear around their data protection standards.
There are a number of issues as well around the homogeneity of the AI workforce. In many parts of the world it’s as much as 80% male-dominated and there’s a regular stream of stories coming out where AI has led to poor outcomes and discrimination through the amplification of bias in datasets. In order to create good AI for all, we need to have a workforce in AI that is representative of all parts of society and we are not there yet.
AiLab: It’s the start of a new year and you’ve predicted that Human Resources will be a big area for AI this year. Why HR specifically?
RM: There isn’t necessarily a hard science behind this, but one statistic from my friends at CognitionX suggests there are in excess of 300 AI HR vendors in the market, which has seen a huge proliferation. The HR roadmap for an organisation has a number of points in it: building communities of talent; the identification, assessment and selection of CVs; assessment through video interviewing; the subsequent offer management and onboarding; right through to the point of setting objectives for training and development across the career lifecycle.
The opportunities are pretty endless. The fact is that organisations hold reservoirs of data on their employees that they’re not really making the most of. Organisations should not just be focused upon efficiency; rather, the focus should be on improving career journeys, improving trust with employees and providing better jobs. However, in order to do this there clearly needs to be a major conversation with the workforce about being absolutely transparent and crystal clear about the way their data is used, harnessed and harvested to come up with decisions – for example, who gets what job, who gets sent on an assignment, etc. – so there’s a double-edged sword there.
AiLab: You have hundreds of speaking gigs in the coming year, what else are you and PwC focused on this year?
RM: I think the responsibility and focus for us is to support the huge number of clients across all sectors that haven’t really started on their AI journey as yet. To retain their competitive advantage, companies are going to have to move on this quite quickly. So, rather than seeing them racing headlong into this and feeling the commercial pressure to adopt AI, it’s about bringing them back a few steps and making sure they have the right ingredients in place to make the most of the technology now on offer. A lot of our work is around education; it's around driving some proofs of concept and experimenting quite quickly. Not necessarily being too deterministic about something, but being a little bit exploratory while building their confidence in the technology so we can then move forward to large-scale implementations.
AiLab: Thanks so much, Rob, for your time.
RM: Thank you for talking to me.
We would like to thank Rob for his time, his awesome insights and for showing us around the spectacular PwC offices. Also to the staff at PwC for welcoming us - in particular the wonderful Francis.
This interview is copyright to koolth pty ltd & AiLab © 2018 and may not be reproduced in whole or part without the express permission of AiLab.