Digital Journeys 2017: Panel - Who is Winning the Race for Artificial Intelligence?

Blog | 08 Aug, 2017

Who is Winning the Race for Artificial Intelligence?

Emily Tan from Campaign chairs this punchy panel where tech giants IBM, Microsoft and Google confront one another and the challenges AI presents: what you can do with it creatively, the moral implications, and how to educate people as fast as the technology develops.


Transcript

Emily: Hi everybody, thanks for lasting through the day. We promised to make this panel quite fun, with lots of back and forth and bantering. Although we have promised not to push these guys and put them on the spot too much, I have promised that; none of you have. So have fun. So on the panel, we have Jeremy Waite who you've seen or heard earlier from IBM. He's had a head start on the two of you and we've got Matt Bush. That's funny.

Matt: We haven't met before.

Emily: I haven't heard this one. I've only ever harassed him. So, yes, that is Matt Bush from Google and we have Dave Copeland from Microsoft. Today we've heard...a lot from Google, a lot from IBM and we haven't heard much from Microsoft. And they have quietly been pumping out some big AI updates, I believe. Well, not that quietly actually, they've just made a big splashy one. So we'll give Dave some time to talk about that, but to start with, before we go into who's winning the race for artificial intelligence, I think we need to define what the race is. So perhaps you could go through, one at a time, and tell me what your companies think are the frontiers on which the AI race is being fought. Let's start with Jeremy.

Jeremy: And we've not prepped any of these questions at all, by the way. This should be about you guys as much as it's about us answering these questions. I'm not sure that there is a race, to be honest. I think the fact there's a race, first of all, implies there's a finish line and there's certainly not one of those. And also, there's not many people in this space that are actually competing with each other. Certainly, we're not. You know, we all have completely different applications to entirely different things. For me, the challenge, which we kind of hinted at before and I think a lot of the other speakers have mentioned as well, is just maturity in the marketplace. The biggest challenge, the race, is how to win the hearts and minds of people, to do cool stuff with technology. The tech's great and the tech works for the most part, we're trained in it, but market maturity is so far behind. People are still struggling with the basics. The race for me is to try and educate people as fast as possible.

Emily: All right. So, to rephrase that so that the rest of you don't get a cop-out and ride on his answer. Okay, so each of you has a focus on where AI development, on the front...on the types of AIs that your companies are working on. And I think it's visual, there's natural language, there's machine learning. Could you describe which are the top priorities for your companies right now and why?

Dave: No, is the answer to that one and let me just take it in a different direction. Talking about race and it's like talking about, you know, we all make different sets of running shoes and which one of the running...sets of running shoes is going to help you guys win the race. Well, that's just bollocks, right? What's going to make a difference is how quick a runner you are or whether you're running or doing the decathlon or whatever. So I think the more interesting part of the debate is really about where Jeremy is going in and saying, "What creatively could you do with a technology like machine learning and AI?" I think the bit I'd like to put on the table is as a society there are some morals, some ethics, some issues that you're going to run into because the thing that AI needs to work is data or more specifically your customer's data. And you're going to be seduced into some really interesting places with that data that will cross a line at some point unless we have an open discussion about where that line is and what we do...or about unconscious bias in the data we used to train the algorithms, we're going to end up in some dangerous places. And the thing I'm proud of as an industry, that's an area that we're all working on.

Matt: I think just to...I mean it's worth taking a step back really for a second. I should probably say for a second as well that these guys are deep technology experts. I'm an ad man. So I might come at it from a slightly different perspective but, because of that, I've done a little bit of research, obviously, just to try and keep up with these boys. But I remember, when I was a shelf stacker at Tesco’s in the 1980s, it was a Saturday job. It wasn't my full-time job, just to be clear. And I remember there was a guy there who went to study at the Open University and he was studying AI. So AI is not new in any way, shape or form and the ideas around AI are not new in any way, shape or form. Just to touch on what Dave said, we're seeing this massive growth and interest in AI now for two reasons. One is the massive increase in data. So we've got more data available than we've ever had before, and that's growing and growing and growing. And the other is the massive increase in computational power. So those two things are happening right now and they are continuing. So, Moore's Law and computational power. You all know about the data argument, and we can't see the power of computers slowing down anytime soon. You know, there are some significantly faster changes coming too, like...I don't know if you know about quantum computing, you certainly do, which could kind of, you know, accelerate this again. But I think, you know, if you just consider that, that's why we're talking about AI so much at the moment. And from a Google perspective, we kind of pivoted or shifted our language from...in 2010 we came out and said, "We're mobile first."

And, you know, that was a kind of an internal play really where we kind of re-engineered everything in our business to be thinking about mobile first and the mobile consumer first and so on and so on. Then, last year, Sundar Pichai said that we're now an AI-first company. Really what he's talking about there, to kind of answer your question in a very roundabout sort of way, is that it's about using AI to solve business problems and so, you know, I don't really know, or Google doesn't really know, where that problem might be solved yet. But the point is that, arguably, if there's any task that a human can think of, then an AI could do that task right now.

Emily: Okay. So let's talk about using AI to solve business problems. I think that quite...IBM has had a chance to showcase what they can do and Google has too. Could...? Maybe you can give us a walk through what Microsoft has been offering businesses in this arena.

Dave: Well, we've all got the same stuff, right? So, you know, just the session before me there was a whole series of APIs up on the wall. You've got them in Watson, I've got...some of them are in Cortana, some of them are somewhere else. It doesn't matter, right? You've got to understand that we use the term AI and the thing you have to understand about artificial intelligence is it's neither artificial nor intelligent, right? Once you get past that, we're into a different place. What we're really into is a world where we can establish patterns and then we can feed data against the context of those patterns and it can spot matches. When you do a search on Google you're seeing the results of a pattern and you're seeing lots of data being sifted to give you the results. And whether that's a Google search, whether it's some of the stuff that Jeremy you're doing in health care with Watson or it's the stuff that we might be doing in cancer research or any number of areas, then you apply that through a business lens. Is there anything in your organization today that you do that follows an established pattern? How many of you have got a call centre or a sales line that people phone, right? The first few levels of that conversation are a pattern: "Hi, who are you? How can we help you? What are you doing? You're interested in this?" The Twitter guy talking about chat bots, that's a pattern, and the thing is the patterns themselves can start to get deeper. They can get more complicated and we can certainly join patterns to patterns. That's the bit where, you know, that's when we start talking about sentient AI and it gets all a bit Terminator-like and nobody wants to go there for the next few decades. So it really is, if you've got stuff in your organization that follows an established pattern, we can bring AI to it, and any one of us can do that.
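
To make the pattern-spotting idea concrete, here is a toy sketch of the kind of first-level triage Dave describes. It is deliberately crude keyword matching in Python and is not Watson, Cortana or any vendor's chatbot framework; the intents, trigger words and replies are invented purely for illustration.

    # Toy illustration only: keyword-based intent matching for the "first few
    # levels" of a support call. Intents, triggers and replies are made up.
    PATTERNS = {
        "billing": {"triggers": {"invoice", "bill", "charge", "refund"},
                    "reply": "I can help with billing. What's your account number?"},
        "delivery": {"triggers": {"delivery", "parcel", "tracking", "late"},
                     "reply": "Let's track that order. What's your order reference?"},
        "greeting": {"triggers": {"hi", "hello", "hey"},
                     "reply": "Hi, who are you and how can we help you today?"},
    }

    def match_intent(message: str) -> str:
        """Score each pattern by how many of its trigger words appear in the message."""
        words = set(message.lower().replace(",", " ").replace("?", " ").split())
        best_intent, best_score = None, 0
        for intent, spec in PATTERNS.items():
            score = len(words & spec["triggers"])
            if score > best_score:
                best_intent, best_score = intent, score
        if best_intent is None:
            return "Let me put you through to a human."  # no pattern matched: escalate
        return PATTERNS[best_intent]["reply"]

    print(match_intent("Hello, my bill has a charge I don't recognise"))
    # -> "I can help with billing. What's your account number?"

Real systems replace the hand-written trigger lists with models trained on past conversations, but the shape is the same: match the incoming message against known patterns and escalate to a human when nothing fits.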

Matt: Can I give a really good example on that one? So probably the most basic example, and a slightly ridiculous example, I've heard so far is...so we... Internally we had a machine learning framework called TensorFlow and about a year ago we made it open source. Basically, it means anyone can plug into it, anyone can take...plug into the API and start using the machine learning powers of TensorFlow to actually start to solve problems for their business. And the one that I really love is there was a guy in Japan whose parents had a cucumber farm, these things exist. And apparently, on this farm, they were farming nine different types of cucumbers and it was taking them hours and hours and hours every single day to sort these nine different cucumbers because, you know, actually they all look relatively similar. But using TensorFlow they actually managed to sort the cucumbers into nine even piles with no human interaction whatsoever and so like, you know, they've saved hours and hours and hours. So like just, you know, we think of technology and we kind of box technology and we don't always think about the practical uses that technology can give. And so to Dave's point about like, you know, think about something that's repetitive and...
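
As a rough illustration of what "plugging into" open-source TensorFlow looks like, here is a minimal sketch that classifies a single image with an off-the-shelf pretrained network. This is a generic example, not the cucumber farmer's actual code; the file name cucumber.jpg is hypothetical, and a pretrained ImageNet model only knows generic object categories, not cucumber grades.

    # Minimal sketch: classify one image with a pretrained TensorFlow/Keras model.
    # Generic illustration only; "cucumber.jpg" is a hypothetical local file.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications.mobilenet_v2 import (
        MobileNetV2, preprocess_input, decode_predictions)

    model = MobileNetV2(weights="imagenet")           # downloads pretrained weights

    img = tf.keras.utils.load_img("cucumber.jpg", target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)              # image as an array of pixels
    x = preprocess_input(np.expand_dims(x, axis=0))   # scale values, add batch axis

    predictions = model.predict(x)
    for _, label, score in decode_predictions(predictions, top=3)[0]:
        print(f"{label}: {score:.2f}")                # three most likely labels

Sorting into the farm's own nine grades would mean training a model on labelled cucumber photos rather than relying on generic ImageNet labels, which is essentially the cat-and-dog training loop Matt describes later in the panel.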

Dave: Sorry, go on. What I love about your example, and the bit I want to put back to you, and why I'm being a bit generic about our respective platforms, is that they're all irrelevant unless you know what question you're trying to answer. In Matt's example, they had a question: "How do we sort the bloody cucumbers?" If you don't know what that question is then AI is pointless to you, it's valueless to you. So the most important thing that you have to do first of all is to figure out what is the question that we would like to answer. Has anyone got a cucumber farm?

Jeremy: We chatted about some of this with probably the smartest guy that I've ever met, a guy called Cédric Villani. If you've never met him or seen him, you should check out any of his talks or TED talks. He won the Fields Medal in 2010, which is pretty much the Nobel Prize for maths, and he's just completely crazy. He's a totally mad French guy with his big cravats, he dresses... He's like a dandy all the time and stuff and we were talking about data, right? We were talking about this explosion of how it's the Wild West and no one can make any sense of it and he said...we were talking about IBM and Watson, he said, "Jeremy, do you know your problem?" "No Cédric, but you're about to tell us what it is." He says, "You're a blind man in a dark room searching for a black cat that isn't there."

So I'm thinking, "What?" He's French and he's a bit mad, right? A blind man in a dark room searching for a black cat. Well, basically what he was saying is, we're doing this whole thing about data and the explosion of where it's all going, right? 90% of all the data in the world created in the last 12 months, 80% unstructured, only 5% analysed. Two and a half quintillion bytes every day. It's the equivalent of a company the size of Google every 24 hours and data is now growing at the rate of Moore's Law, it's just insane. So the amount of data is doubling pretty much every 18 months.

Emily: So what he was just trying to tell you is that you're trying to solve a problem that is not quantifiable and doesn't really exist yet. You should solve one small problem at a time and build it up.

Jeremy: Or he's also saying that we're trying to go too big, right? We're overestimating what AI can do on its own, without putting the emphasis on the people asking the right things and trying to solve the right problems.

Emily: So is this why each of your companies has applications of AI that are, you know, quite big and planet-saving, like AI for Earth, but also really tiny micro applications of AI as well? Like every time you use Google Translate to scan some text, it translates it automatically.

Dave: I wouldn't say that was micro, I'd say that was quite powerful.

Emily: Well, when I say micro I mean, like, it's on my phone and it's not saving the planet yet but...

Dave: Yeah, no, fair enough. Fair enough.

Emily: Okay, maybe you could...

Dave: I see your point.

Matt: The Google Photos example, actually, is even more ridiculous in some respects. It's brilliant. I mean, because you talk about...you were talking about training an AI. You know, it was through Google Photos that I kind of understood how you train an AI.

Emily: Because you used Captcha to train Google Photos, didn't you?

Matt: Well, no we use pictures of cats and dogs, it's the Internet, right?

Emily: Yeah. But also, there was, I think, the Captcha text recognition that was apparently linked to Google's text recognition.

Matt: No, it's nothing to do with text whatsoever. So we just basically fed TensorFlow like hundreds of thousands, if not millions, of pictures of cats and millions of pictures of dogs. What it does is it looks for patterns in the pixels and tries to make sense of that, and you keep saying, "That's a cat, that's a cat, that's a cat, that's a dog, that's a dog, that's a dog." And then it gets to the point where you stop telling it and you give it a picture of a cat, and if it says it's a cat, it says, "Brilliant, I got it right." If it says it's a cat and it's actually a dog, it got it wrong. So it says, "Okay, what was I doing wrong? What patterns in the pixels was I looking for that were wrong?" And on it goes, and it keeps improving, and so now, with Google Photos, you can search for things like hugs and, you know, we can tell whether that picture is a picture of a hug through Google Photos. But that's only because we've trained the machine over numerous...so, you know, just to kind of re-emphasize Dave's point about it's not intelligent.
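
The "that's a cat, that's a dog" loop Matt describes is standard supervised training. Below is a minimal sketch of it in TensorFlow/Keras, assuming a hypothetical folder of labelled photos (pet_photos/cats and pet_photos/dogs); Google Photos' real models are of course vastly larger and trained on far more than two labels.

    # Minimal sketch of supervised image training, not Google Photos' real pipeline.
    # Assumes a hypothetical layout: pet_photos/cats/*.jpg and pet_photos/dogs/*.jpg.
    import tensorflow as tf

    # The folder name supplies the "that's a cat / that's a dog" label for each photo.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "pet_photos", image_size=(160, 160), batch_size=32)

    # A small convolutional network that learns which pixel patterns matter.
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2),           # two output classes: cat, dog
    ])

    # Each pass the model guesses, the loss tells it when it got a photo wrong,
    # and the optimizer nudges it towards better pixel patterns.
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"])
    model.fit(train_ds, epochs=5)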

Emily: Do you want to share something that people don't know Microsoft is doing, then?

Dave: Not really.

Emily: Oh, come on. Okay, no. In the sense that people are using it every day and it is getting better in the background, but people haven't realized the way AI has now sort of taken over.

Dave: And this is a conversation we all have to have. I mean, I was having it with my son and his mates the other day. We all use AI every day. You know, every time you use a search, you're using AI. When you go to Amazon and you're looking for a product, you get a recommendation, it's AI. All that's happening is those patterns, and I just want to come back to the point you made about, you know, we're all doing these big projects but then there are these little things, right? You've got to understand that to do big projects, like ours is AI for Earth, you need a bunch of computer scientists, people who have been working in AI for decades. To do the little stuff though, to build an ad model or a campaign, you use the tools that we create. You don't have to be that kind of person in order to do it. You need to understand the basic fundamentals of data but you can still be creative about that. You can still do it and I think that's the segmentation you're going to see. The big stuff, the stuff that's going to really change the planet over time, needs the computer scientists, but the little stuff, the stuff we interact with every day, that's the stuff that we all do.

Jeremy: I think one of the challenges as well is that a lot of people don't realize there's a problem. I'm going to try and put this in the context of today, practically, and keep it serious just for a tiny moment. We talked a lot about customer journeys and I think Emily mentioned the car brand that...190 stages in a customer journey. A few years ago, I used to go out talking a lot about how no one knows the lifetime value of their customers. No one knows the who, what, why, where, when, what the customers are actually doing. That's not really the case anymore. Everybody knows, a lot of the time, where their customers have gone and what they've done. They might not know super well across every channel, and Google is telling us customers on average cross five channels or devices when they're doing their thing. So, I was in...this was my "Aha" moment. I was in a boardroom talking to an exec of a very big brand about this. And he's like, "But we've got it. We're okay. Ninety percent of the time we know where our customers have gone." And this is the issue that we face as agencies when we go out: you can't help someone until they realize they've got a problem, right? So I said, "Okay. You think 90% of the time you know where your customers have gone?" And he's like, "Yes, and we've got all the data and everything is in place." So I said, "Let's just do the maths, really, really simple. Five channels is where your customers go during the lifetime of their experience with you." He's like, "Absolutely." He's like, "Website, review, social, app, five channels."

They check them on average five times throughout the course of that lifetime and they've got three devices: desktop or laptop, smartphone, tablet. So you just do the basic maths. Five times five times three is 75, right? So you're looking at a customer journey with 75 pieces on it. He said, "Nine times out of ten, I know where the customers have gone." Ninety percent to the power of 75 is a 0.04% success rate, if you do the calculations on your phone. I Googled what that probability is and it's the chance of being injured while sitting on the toilet. And this is a guy that thinks, "90% of the time I know where all my customers have gone," and he's still only got a 0.04% chance. If you were 99% accurate, it's still only 47%, and the problem with our industry is that a lot of the people we're trying to help feel like they're okay and in control when they don't realize that they're screwed and they don't know where anyone is going.
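
Jeremy's arithmetic checks out, and it is easy to verify: the 75 comes from 5 channels times 5 checks times 3 devices, and tracking a whole journey means getting every one of those 75 touchpoints right, so the per-touchpoint accuracy gets raised to the 75th power. A couple of lines of Python reproduce his figures.

    # Reproducing the back-of-the-envelope numbers from the panel.
    touchpoints = 5 * 5 * 3        # 5 channels x 5 checks x 3 devices = 75

    for per_touchpoint in (0.90, 0.99):
        whole_journey = per_touchpoint ** touchpoints
        print(f"{per_touchpoint:.0%} per touchpoint -> "
              f"{whole_journey:.2%} for the full journey")

    # 90% per touchpoint -> 0.04% for the full journey
    # 99% per touchpoint -> 47.06% for the full journey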

Emily: So how do you guys go about showing them that they're screwed?

Matt: Being aggressive like that. I think it's about trying to get them to understand that it's the data and like, you know, you've all heard about HiPPO, the highest paid person's opinion, you know, and we still experience that in every single meeting that we go to, not at Jellyfish, actually. We used to have a good debate with those guys. But like, you know, everything should start with...lead with the data. And once you can start to, like, you know, uncover the data, then you understand what the problem is, and once you know what the problem is, you can start to fix it.

Emily: Okay, so I'm going to segue into a slightly different question because I'm just bloody curious. I'm doing an article around it right now. Why is voice, and natural language, such a big deal for all three of you? Why is it something you're investing so much in and, in the case of Microsoft, for example, how has your partnership with Alexa affected Bing and how it searches and perhaps the usage of it?

Dave: Well, I can't talk to that at all actually. I don't know anything about that but I can certainly talk to you about voice and why voice is important.

Emily: Yeah, okay, why is voice so important for AI and the interface that people are seeking?

Dave: Because we want a more human interaction. So, you know, Microsoft have been thinking about natural user interaction for a long time. I would argue it took us until the Kinect, in 2010-ish, to get there, but the bit about natural user interaction is you have to understand that natural is within a context. You know, we saw it first... Remember when mobile phones first...oh, you're all too young, mostly. So when mobile phones first came out, right, people would be walking down the street wearing a headset. It's all quite normal today but we would be like, "Nutter." You know, stay away, you know, and it took us a while to get used to it...because in the context of walking down the street or sitting on the train or on a plane, talking to nobody is a bit weird, right? But actually when you're on your own or when you're in an environment where that would make sense, that natural piece comes into play. It's a bit like using gestures on Kinect. Brilliant experience in your living room, not so good on a computer on a packed commuter train, right? Again, it's a bit of a problem.

So what we, and I think other technology companies, want to do is deliver a range of interfaces that allow the human at the other end of this to make the right choice about what would be natural for them. Voice is a core pillar in making that happen. And the other thing with voice is that it was only really...and Matt, you talked about data and computers being the two big things. I'll just add one thing to that, which is neural networks. In 2009, we made a massive breakthrough in how we thought about what we call artificial intelligence but in reality is machine learning. And it's basically layering the patterns that we're looking for in the same way that the human brain does, and we mimic a bit of that, and that enabled us to go from really, really crap voice recognition to slightly crap voice recognition.

Matt: But I think, you know, just because there's a study that we published a couple of weeks ago: of those who started using voice search five years ago, only about 20% of people are still using it today. So the experience was, to your point, crap and therefore, you know, they moved away very quickly because the experience wasn't there. Now, we've made such advances in natural language processing that, you know, on the whole, what you're saying is being understood, and actually quite often you get a voice back. So, you know, what we're seeing now is that, of people who start using it now, it's about 85%, 90% that are using it on a regular basis.

Emily: Yeah, do people ask questions differently in voice and assess things differently than when using text?

Matt: Yeah, yeah, very differently. I think it's two things here. One, it's faster, significantly faster in voice. We actually got the...we did an event recently and we got the global speed-texting champion and the fastest rapper in the world to do a voice versus text battle, and the voice like smashed it, obviously.

Emily: Could it pick up the rap guy at his full speed?

Matt: Yeah, yeah. And we were showing off a little bit, of course.

Dave: But the bit I worry about, Matt, and I don't know where you guys are with that, but we're still at a point where the voice recognition is really good, but if you want to use a device like an Alexa or Cortana or, what do you call your home assistant...home assistance?

Jeremy: I know, we keep changing it. We do keep changing it.

Emily: You got the Home device, right?

Matt: You don't answer the device.

Dave: But the problem is you have to remember the syntax of the commands that will make the thing work and that's a problem because as a human being I don't want to know your bloody...I don't mind. Turn the bloody lights on, right? Now that's what I want, right?

Emily: There is a problem where you have to remember the exact phrase or you might activate the wrong thing or get a Spotify song that you like. They have a song about that?

Jeremy: And so it's a bit like, you know, and you've seen this a lot. We've seen in search where, you know, you don't type a question into Google or Bing, do you? You don't say, "Please tell me where is the best restaurant in Shott." You just say, "Best restaurant, Shott." All right and so we changed the way we interact. And so we've got to be able to respond to a very much more natural human interaction.

Matt: And on that, just on search very, very quickly. As people have got more comfortable using voice search then, you know, we do see a completely different type of query coming through, as you can imagine. I mean, something like 25% of queries that we see every day are new; that's pretty much static, that hasn't changed dramatically. But what's driving that now, the changes that we're seeing, are voice searches. And I don't think from this...well, so I know to a certain extent, but from the data I've seen, most brands aren't really kind of taking into account the way in which people are searching with their voice as opposed to the way in which they search with text. And I do think, you know, to Dave's point earlier, you're not going to sit on a crowded train and start doing voice search. It's just not going to happen. You'd look like a mad person. Whereas, like, you know, in the comfort of your own home, of course, you know, it's a much easier way to do it. If you're walking down the street and you just want to shout something quickly because you're carrying a bag, it just makes more sense.

Emily: Well, I'm going to open up to the floor now.
