Practical AI – Episode #25

Finding success with AI in the enterprise

featuring Susan Etlinger


Susan Etlinger, an Industry Analyst at Altimeter, a Prophet company, joins us to discuss The AI Maturity Playbook: Five Pillars of Enterprise Success. This playbook covers trends affecting AI, and offers a maturity model that practitioners can use within their own organizations - addressing everything from strategy and product development, to culture and ethics.


Transcript


Welcome to another episode of Practical AI. This is the podcast where we try to make AI practical, productive and accessible to everyone. I am Chris Benson, one of your co-hosts. I am the Chief AI Strategist at Lockheed Martin, RMS APA Innovations, and with me is my co-host, Daniel Whitenack, a data scientist with SIL International. How’s it going today, Daniel?

It’s going great. How about with you, Chris?

Doing good. What have you been up to lately?

Well, I finished out my course that I was teaching at Purdue University, so I’m enjoying grading, and then throwing some eggnog in there when I can pair the two. That’s working out well…

Sounds great. As I mentioned as we opened up, I actually started this new job at Lockheed Martin. I’m very excited about it; I’ve been ramping up on that, and I’ve never worked for a defense contractor before, so I’m learning all sorts of new things about how to apply AI, and it has been absolutely fascinating the last couple of weeks doing that.

Yeah, it’s exciting. Don’t share too much, or you’ll have to kill us, I’m sure.

Yeah, I’ll have to kill myself, so… There we go. [laughter] So I wanted to introduce our guest today. Our guest has become a good friend of mine in recent months. Susan Etlinger is an industry analyst with Altimeter, which is a Prophet company. Susan and I met at the Adobe AI Think Tank earlier this year in New York City, where she moderated a 90-minute broadcast on Facebook, and I was privileged enough to be one of the people on the panel. How’s it going today, Susan?

I’m great, thank you. It made it sound like we spent the entire 90 minutes talking about Facebook, but we actually talked about AI. [laughs]

I’m glad you said that; very true. We had a great panel and talked about AI with a lot of really smart people who were able to contribute to that conversation, so it was great to meet you, and I’ve enjoyed talking to you ever since. It became obvious really early on that I had to try to twist your arm to see if you would come on to our podcast, because there is so much about the world of AI that you know.

Fortunately for us, I know that you have been working on a report that is fascinating, called The Maturity Model for AI in Enterprise, where you’re talking about AI in enterprise and in industry. I was wondering if we could start off with you just telling us a bit about that.

Yeah, absolutely. Actually, it’s just gone live as we’re recording this, so by the time this podcast airs, everybody’s gonna be able to see it. What I’ve been trying to do over the course of the past - it depends on how you count it - several months to several years is understand a little bit about the way that artificial intelligence is evolving, not just as a technology or as a societal or social impact, but also just in terms of the impact on business. The impact on business is so different in so many ways, the enterprise impact versus the consumer impact, that I wanted to get a handle on it.

This report is about two major things. One is the four trends that are really affecting the way that enterprises implement AI. Those four trends have to do with how we interact - moving from [unintelligible 00:04:00.19] - from URLs to speech, and images, and that sort of thing.

[04:08] The next is around how we decide, and that is moving from the old way of programming, if/then statements, so from business rules to probabilities, because AI is, of course, inherently probabilistic.

The third is around how we innovate. In the future, we’re gonna go to more of a data engineering world, where we’re actually incorporating data into the engineering process in a much more fluid way than we can today, and that’s something, Chris, that your insights really helped me shape.

Today, in many places, we’re in a sort of reporting-on-the-past kind of world, and we need to be able to use data in a much more forward-thinking way.

And then the last is around how we lead, because we live in a world that’s very hierarchical, that’s very expertise-driven, and of course, data and the ability to get clean data is going to help us make decisions based on data.

I’ve had the ability to go ahead and read this… I know you quoted me as one of a number of people in the article, and you let me see a preview. The big thing I kept thinking through this process was how much I wish I’d had this over the last couple of years, as I worked for previous employers trying to put together the business case and the operational aspects of AI teams.

We’re starting to see organizations like Google and Amazon offering up some of their internals, but this report, the AI Maturity Playbook that you’ve put out, is a huge tool to get people started in this… And I wish I had it all along. Daniel, were you gonna say something?

I was just gonna say, it’s interesting – the thing that stood out to me when you were talking through that is the emphasis on engineering that you were talking about, and integration within a company’s infrastructure… And I don’t know if you’ve seen this - maybe you can comment on this - but it seems like we’ve seen a trend, at least when I’m looking at job postings and people’s titles and such, there was kind of a time when we were talking about “Oh, everybody needs to be a data scientist, and we’re all gonna use data for stuff.” Then it kind of moved into “Everybody needs to be doing AI and be an AI person or machine learning person/scientist.” Now it’s kind of drifted into – I see a lot of job titles looking for “machine learning engineers” or “AI engineers”, or “data science engineers”, whatever that is…

People are gradually coming to the realization that they actually have to do some type of integration of this stuff in their infrastructure. I don’t know, are people feeling that pain? What’s pushing that side of things forward?

It’s funny, because this topic - this move from data analytics, to data science, to data engineering - really popped up in my interview with Chris. Not to do too much logrolling here, but it’s really where I started to think about it. So in the subsequent interviews, when I spoke to other people - and even just people that I know in the industry, who weren’t necessarily formally interviewed for the report - I’d ask “So how is this working for you? If it’s a startup, what are you seeing with your clients? If it’s a big enterprise platform, what are you seeing?” And in enterprise companies, I was asking them “Where are you on this spectrum?” and they were like “Oh yeah, yeah… Because we’ve brought in all these data scientists, we have a very particular way of working, and the challenge is getting to scale - being able to build not just models, but products that we can scale across the organization.” And that’s not only a technical challenge, but a cultural one as well, and also a recruiting challenge, in terms of trying to figure out what qualities we should be hiring for in order to be able to build scalable infrastructure or scalable products. That’s been a big theme that I didn’t really know to expect.

[08:02] When we had that conversation, Susan, and we were discussing that - I found in my own experience, as I went into a previous employer and was creating a full AI operation within that organization, that a big surprise for me had been that I was hiring on some new people, and I was pulling people from other parts of the organization, so I had a mixture of skills there… And some of our team members were just straight data scientists, in a lot of cases fresh out of school, and that had been their exclusive focus.

And this being a new field of, you know, neural network model creation and such, I think myself and others on the team really expected that to be the strongest skillset. What we were surprised to find was that some of the other members of the team had already been in the industry and had created products and services for other companies, or previously for the same company; they had been programmers in various other roles, and they had moved in - and in some cases gone back to school for data science to learn this… And I was surprised that those people, after model creation, were able to apply it better after the fact.

So in some ways, potentially, the people who had focused exclusively on this had a leg up… But as soon as some of the others caught up with them, the fact that they knew how to deploy and how to meet a business need in terms of products and services was a huge advantage for that crowd, and that was something that surprised us all.

Yeah, and I think what’s interesting is that this seems like just part of the evolution, if we think back on other technologies and how they became enterprise-ready. You see similar trajectories, where you’re hiring for a skill, and that skill may or may not come with another particular set of skills. That’s a challenge with every technology, but I think particularly with AI, because there is so much hiring that comes directly out of the academic setting, and that’s such a different set of expectations.

I’m curious about your opinion on the following, in light of that and in light of the other things you mentioned that are changing - around how we will be interacting with systems, for example, and how systems will be more dynamic and reactive… For the software engineers out there listening to this podcast who are maybe interested in AI - I know there’s some concern among software engineers that their jobs will need to drastically change as AI is more integrated into the products we’re building… Do you think software engineering as a whole is going to see a very dramatic shift, or will it be more that AI is just something they interact with - another layer in the stack, or something like that?

Well, it’s funny, Daniel - I can answer as a non-software engineer, just in terms of what I’ve observed… And what I’ve observed is I don’t think I’ve ever seen a software engineer who hasn’t had to change, who hasn’t had to evolve their skills, who hasn’t had to figure out something that they weren’t expecting…

You know, if you think back to the beginnings of the internet, that was a massive, massive change in the mid-’90s and the early 2000s, and the development of even like social technologies, and mobile technologies, and all of that… You know, every single time there’s a massive shift, there’s a massive set of changes that reverberate through the industry, and I just don’t ever see that changing.

Then in terms of kind of the long view, I do think that intelligent systems - the ability to learn from data, autonomous systems - that’s gonna be table stakes. I don’t know how many years; I don’t have a crystal ball. But what we’re thinking of as sort of exotic now is gonna be table stakes, and that’s really a lot of the thrust of the report, too.

[11:55] I know that having had the advantage of seeing it ahead of time, you started off the report kind of talking about some of the macro trends that would affect AI, and you were really thoughtful in how you were approaching how the real world would affect this. I remember you talked about the interactions that we’re having with computing; I remember one of the sections was talking about as we move from screens to different senses that we may not have used historically… And then I believe you went on to how we decide, how we innovate, how we lead… And I was just wondering what some of those insights were that we could share with our listeners.

The screen thing, how we interact, is really interesting, because we’re just so used to - if you’re older than 30, you’re used to interacting with a laptop computer, even a desktop computer, and a phone. You know, if you’re younger than 30, more of your life has been spent talking to your phone and talking to that weird little cylinder on your dining room table, or your thermostat, or whatever it is that you’re talking to… We’re certainly becoming much more accepting of things like facial recognition and image recognition, although obviously that comes with issues… And there are even people who are working on sensory-based interactions based on smell and taste; so none of our senses is gonna be left behind. And of course, touch - using haptics and pinch-and-zoom is all very normal to us now. You go back 10 or 15 years, and that was Minority Report; that was something that lived in science fiction.

The biggest shift to me though of all of these shifts is around how we make decisions, because we are so used to living in a world that is based on if/then statements. “If my balance drops below $500, send me an alert. If I make a transaction more than $300, send me an alert. If I try to buy something in an airport in Berlin, decline my credit card.” And now what we are seeing is that the world is a lot more probabilistic, and sometimes that’s fantastic, and it’s really easy to understand, and it’s intuitive, and sometimes it actually creates a lot of stress for organizations, because you could say “Something with an 85% or 87% confidence level is fine for one industry and completely off the table for something else.”
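To make that rules-to-probabilities shift concrete, here is a minimal sketch - the thresholds and the fraud score are made up for illustration, not any particular company’s system - contrasting a hard-coded if/then rule with a probabilistic decision whose acceptable confidence level depends on the industry:

```python
# Hypothetical example: deterministic business rules vs. probabilistic decisions.

# The old world: hard-coded if/then business rules.
def rule_based_alert(balance: float, amount: float) -> bool:
    if balance < 500:   # "If my balance drops below $500, send me an alert."
        return True
    if amount > 300:    # "If I make a transaction more than $300, send me an alert."
        return True
    return False

# The new world: a model emits a probability, and each industry decides
# what confidence level is acceptable for acting on it.
CONFIDENCE_THRESHOLDS = {
    "marketing": 0.70,  # a wrong ad impression is cheap
    "banking":   0.85,
    "medical":   0.99,  # a wrong diagnosis is not
}

def model_based_alert(fraud_probability: float, industry: str) -> bool:
    return fraud_probability >= CONFIDENCE_THRESHOLDS[industry]

# The same 87% confidence triggers action in one context but not another.
print(model_based_alert(0.87, "banking"))  # True
print(model_based_alert(0.87, "medical"))  # False
```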

I imagine that that creates – I don’t know, in my mind I’m thinking for a lot of people, maybe including myself in certain scenarios, that creates a lot of trust issues. It might be harder for me to understand naturally the probabilistic way of dealing with all of these complicated scenarios, but I kind of have to put my trust in the modeling at that point, and not just in an easily understandable if/then statement…

Yeah, absolutely. The thing is too that it’s not just about putting your trust in the model, it’s the engineering and user interface and other kinds of communicative decisions that are made to let you know whether you should trust the data.

I’ll give you an example, and this is almost kind of a nice segue into the conversation about ethics… In Turkish, as in many other languages, there are no gendered pronouns; so the word for “he” and the word for “she” are the same. It’s actually the word “O”. And if you take the sentence “She is a doctor” on Google Translate - you can do this yourself - and you translate it into Turkish, it will come back with “O bir doktor.” Sorry about the pronunciation, Turkish speakers… And then if you take “O bir doktor” and you translate that back into English, Google will assume and write “He is a doctor.”

Now, this is probabilistic, because if you look at the Word2vec dataset, we know already that the word “doctor”, as with many other professions, is biased toward male humans, because there are more instances in that data of men being doctors than women being doctors… And even if it’s 50.5%, it’s gonna be a man.

[16:08] Here’s the thing - the Turkish language has been around a bit longer than Google, and is not likely to change for Google’s sake. And yet, there is no indication when you do a Google Translate that when it says “doktor”, you’ve probably got a 97%-98% probability that it’s correct, but when you’re looking at the “O” that signifies the gender of the human being discussed, the probability is way, way, way lower. So what I’m saying is that sometimes we actually need to incorporate into engineering and into user interface design some indication for people that what they’re looking at may or may not require further analysis.
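For anyone who wants to probe this kind of bias themselves, here is a rough sketch using gensim’s KeyedVectors API. It assumes you have downloaded the publicly available Google News word2vec vectors; the file name is the conventional one, but treat the setup details as assumptions:

```python
# Sketch: probing gender bias in pretrained word vectors with gensim.
# Assumes the publicly available Google News word2vec binary is on disk.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Classic analogy probe: "man is to doctor as woman is to ...?"
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))

# Compare how strongly "doctor" associates with each gendered pronoun.
print(vectors.similarity("doctor", "he"))
print(vectors.similarity("doctor", "she"))
```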

Yeah, and I do think that this leads right into a great discussion on ethics, which I’m eager to get into… But before we jump into those details - based on what you were just saying, those are real problems, real biases, real dangers, if you wanna put it that way, that exist right now in machine learning and AI… So much of the conversation around the danger of AI naturally goes to the scenario of the Terminator, or consciousness, or something… Do you think that distracts from these real dangers and biases that we’re experiencing now? Should we even be having that conversation, or how can we as practitioners help bring a more balanced view into what we should really be talking about in terms of ethics? That’s my question, I guess.

Any of us who work in this field - somebody like me, who’s an analyst, really with a humanities background, versus you guys, who have much deeper technological jobs than I could ever hope to have - you know, maybe you hang out with your family at the holidays and they’re asking you what you’re working on; you say “AI”, and they’re like “When are the robots coming to get us?” And that’s the conversation really that much of the world is having. That, and the trolley problem - if the car is driving down the road, is it gonna kill one person, or five people, or is it gonna kill you, or a woman with a stroller? All that kind of stuff is where people’s minds naturally go. And I’m not saying that those are trivial issues - obviously they’re not - and when you get into things like autonomous weaponry… I mean, that’s a whole other topic. But AI isn’t a monolith, so when we think about both the innovation benefit AND the risks of AI, we have to think about it in a particular context… And that context could be something like a financial services context, in which you’re trying to manage risk, or it could be a diagnostic context in the healthcare industry…

So what I really think is important is for us to understand some of these nearer-term issues, some of these very pragmatic, practical issues around what happens when we use algorithms to abstract humanity. Not that that’s bad per se, it’s just that it has implications that we then have to deal with on the other end. So this is part of responsibly learning to use the technology, just as we would responsibly learn to use any other technology that is extremely powerful.

Susan, as we are talking about what ethics are in AI and how to apply them - which is very personal for me, as I’ve come into a new job, in a new company, a new industry, the defense industry, where we’re looking at AI use cases… I think this is the first time in my life where I’m almost leading with ethics. And, you know, there are many other people that will be in similar situations, because AI has such tremendous capabilities… What types of advice do you have for people who are moving into jobs where they’re now having to face “How does AI affect our products and services at our company?” What kinds of things would you advise them to do, in terms of thinking that maybe they haven’t had to consider in the past?

[20:17] I think there are a few very straightforward things. The first is to understand that algorithms are as good as the data – you know, this is the classic “garbage in/garbage out.” Algorithms are only as good as the data and the way the data is modeled. And the data that we have, in many cases, is just absorbed from society. In the case of Google, or in the case of the Word2vec dataset that includes all that language stuff that I mentioned earlier - it just absorbs the reality that we live in. And sometimes you wanna perpetuate and amplify that reality, and sometimes you maybe don’t.

For example, if you’re a marketer and you wanna do audience segmentation for doctors, you don’t want anything assuming that all doctors are male. You’re gonna alienate all those female doctors out there, and potentially even stifle the potential of younger female students who maybe wanna get into the medical profession. So we just need to know these things, and we need to actually have processes in place to ensure that when we can catch and fix bias, we do. We can’t change society by changing technology, obviously, but we can be mindful about it.

The second thing is around explainability. There’s a woman, Rachel Bellamy, from IBM; I heard her speak in London not too long ago, and she said “Explainability is the new user interface for AI”, and I thought that was a really interesting point, because one of the things we’re not used to in probabilistic systems is the idea that you put data in, and then there’s this sort of black box, and then there’s the output… And so in many cases we do need to understand what some of these decision criteria were. In some cases it’s fairly straightforward - maybe there are a few keywords that were determining the outcome, or suggesting the outcome - and then in some cases, for example with disease diagnosis, or with pharmacological types of use cases, or weather, it might be very complex; these very complex systems.

So this idea of trying to understand what happened between the input and the output is very important, so that people do have a sense of trust. You don’t simply say “Well, Chris, I’m not giving you a mortgage loan, even though you have financially pretty much the same profile as Daniel”, and then three months later you give Daniel the mortgage loan, even though he pretty much matched where you match. You have to be able to go back and understand what happened, you have to understand a little bit about what caused that action to be taken. So explainability is interesting, and it’s also become kind of a huge issue in the industry, and I think there’s a lot of controversy around it.
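As one hedged illustration of that input-to-output accounting - not the specific method Susan describes, just a common way to surface per-decision feature contributions - here is a minimal sketch using the SHAP library on a toy, synthetic “mortgage” model:

```python
# Sketch: per-decision explanations for a tabular model using SHAP.
# Hypothetical mortgage-approval data; the point is the explanation step.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # toy features: income, debt, history, age
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100).fit(X, y)

# SHAP attributes each prediction to the input features, so you can answer
# "why was Chris declined when Daniel, with a similar profile, was approved?"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:2])
print(shap_values)  # per-feature contributions for the first two applicants
```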

Then the third piece is there needs to be an understanding that ethics in AI are simply just norms of behavior, and we don’t really have norms of behavior in the digital world the way that we do in the physical world. You know not to push in front of somebody getting on a bus; you may do it anyway, but you know not to do that. We don’t have those same norms in the digital world… So having internal controls, making explicit the decision criteria - all those things are really important.

I’m glad that you addressed that, because that was actually gonna be my next question - what do you need in place around it, in terms of what you’re calling internal controls, so that the burden isn’t entirely on the individual who is trying to find their way through this and apply ethics as they do that? From an internal controls standpoint, do you need systems in AI implementation that you might not have needed in other environments, and if so, what might they need to be addressing?

[24:04] Yeah, we do need systems. In some cases it’s grandfathering existing processes and controls - grandfathering AI into those. In other cases, it’s entirely new types of controls. There are some industry examples out there - AI Now, which is a really phenomenal organization focused on ethics in AI, has issued what they’re calling an Algorithmic Impact Assessment. It’s very similar to an environmental impact assessment - when you’re going to build something or excavate something, you need to understand the environmental impact… So this is built on that same premise: if you’re gonna introduce algorithms and algorithmic decision-making - in this case it’s meant for governments and for cities - into a civic environment, you need to think through some of those potential impacts to vulnerable people, to systems, and processes, and all those things.

That document lays out a template for assessing the impact of your algorithmic system. I think something like that can and should be customized for industry. That’s one example.

IBM has built a couple things that are quite interesting. One is called a Supplier’s Declaration of Conformity. Imagine as a defense contractor, or as a retail bank, or as a healthcare provider, you’re not only using your data, but you’re using data and systems from other organizations, other companies; you wanna make sure that you’ve gone through the process of understanding and holding your systems up to the highest scrutiny, but you also wanna make sure that your suppliers and vendors and partners have done the same thing. So that’s another example.

They’ve also built - and this is, again, something that’s a bit… I wouldn’t say controversial, but it’s open to scrutiny - this idea of a dashboard that shows a bias quotient and a confidence quotient. As a simple example, if you’re trying to settle car insurance claims, you should know that the data that you have for 19-year-olds is very scant, whereas the data that you have for 42-year-olds is very rich… So if you’re settling a car insurance claim on a 19-year-old, you need to dig down into some other things and probably use much more human intervention to understand what the situation was, simply because those recommendations are based on less rich data.
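The underlying check is simple to sketch. With made-up claims data - the column names and threshold below are illustrative assumptions - you can flag segments whose recommendations rest on scant data:

```python
# Sketch: flag segments where recommendations rest on scant data.
import pandas as pd

# Toy claims dataset; in practice this would be your training data.
claims = pd.DataFrame({
    "driver_age": [19, 42, 42, 42, 43, 42, 19, 41, 42, 44],
    "payout":     [3200, 900, 1100, 800, 950, 1000, 2800, 870, 990, 1020],
})

MIN_SAMPLES = 5  # arbitrary threshold for "rich enough" data

support = claims.groupby("driver_age")["payout"].agg(["count", "mean"])
support["needs_human_review"] = support["count"] < MIN_SAMPLES
print(support)
# 19-year-olds: 2 claims  -> flagged for human intervention.
# 42-year-olds: 5 claims  -> the model's recommendation is better supported.
```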

These are just some examples of things that people are doing… Microsoft is rolling a bias check into Word and PowerPoint, so if you use a word that you’re maybe not aware has some kind of connotation that is hurtful or unpleasant, it will let you know, the same way it will let you know if you misspell a word.

Yeah, that’s really interesting… Piggybacking off of that, for my own selfish reasons I wanna ask the next question, because I’ve taught a few corporate workshops recently, and we talk about, you know, maybe you wanna make your training set as representative of reality as you can, and then you try to optimize for accuracy or whatever it is… And then I bring up bias and these issues that we’re talking about, and in the midst of those discussions - I think every time I’ve done this - someone somewhere in the audience asks the question, “Well, if we include gender, or ZIP code, or income, or whatever it is in our model, and it makes it more accurate, why wouldn’t we want to do that? Isn’t that just an accurate representation of reality, even though it produces a biased model?” I know how I’ve tried to answer that question, but I was curious about your thoughts on how you would help that sort of person understand why they should care about bias in their predictions, and why they might want to consider that a little more seriously and not just talk about accuracy.

[27:57] Yeah, Daniel, you’ve hit on one of the most crucial issues around algorithmic bias we’re gonna see in 2019, and that is that there’s a little bit of a storm brewing between some data scientists and engineers and - I’ll just be brave here and say it - people like me, who run around talking about AI ethics… And here’s why - it’s really complicated. There’s a tendency, I think - and I’ve had this conversation off the record with some data scientists who work at very well-known companies - for some folks to do a little social justice virtue signaling around “These darn data scientists, they don’t understand people, and they don’t understand humanity, and they’re gonna ruin the world by allowing bias to creep in.”

No biggie…

Yeah, no biggie, right? And then on the other side we have data scientists saying “Okay, so who elected you the arbiter of all that is good and just in the world?” And these are both completely valid points of view. So here’s where I stand on it - we do have to have this conversation with precisely the group of people that you’re talking about, in a productive way. These industry conversations need to happen, because as somebody who I’m not allowed to quote said to me not too long ago, “Who gets to choose who’s the person who puts their finger on the scale?” That is really critically important, because what we may ameliorate in terms of bias for one group, we may actually impact for other people, or have unintended consequences that we’re not even able to forecast.

I’ll give you one simple example. If you think about what happened with Amazon’s Rekognition system, where it incorrectly identified John Lewis and six members of the Congressional Black Caucus as criminals - as matching faces in their criminal facial recognition database - okay, that’s not even arguable; that’s unarguably bad. Like, bad-bad-bad, right? We’ve got John Lewis, who’s one of the greatest civil rights activists ever known to man, who has now, along with six members of the Congressional Black Caucus, basically been matched to a criminal. If this happens to John Lewis, you can only imagine what’s happening to other people.

Yeah, and similar with the recidivism model and other things that I’ve seen.

Okay, and why is this? Because image recognition and facial recognition are really much less accurate at recognizing and understanding people of color than they are at recognizing and understanding Caucasians. Okay, so how do you fix that? Do you make facial recognition better, so that it better identifies people of color? How are you gonna get that data? Do you start encouraging people of color - “No, no, really, it will be great for you… Just give us your face data. Let us analyze your face data, and put you in our system. We promise it will just help in terms of accuracy. It won’t have any bad impact on you”? I mean, who’s gonna say yes to that, right?

Some people will say “You know, we’re perfectly happy that the false positive rate is so high. Just let it stay high, because we don’t want to be included in those systems”, and there are absolutely valid reasons for that.
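The disparity being described here is measurable. A minimal sketch with made-up match results - not real Rekognition output - computing the false positive rate per demographic group:

```python
# Sketch: compare false positive rates across demographic groups.
# Made-up face-match results; 1 = system flagged a match, 0 = no match.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged":  [1, 0, 0, 0, 1, 1, 1, 0],   # system output
    "is_match": [0, 0, 0, 0, 0, 0, 1, 0],   # ground truth
})

# False positive rate: flagged as a match among people NOT in the database.
negatives = results[results["is_match"] == 0]
fpr = negatives.groupby("group")["flagged"].mean()
print(fpr)
# A large gap here means innocent people in one group are far more likely
# to be wrongly matched - the failure mode in the Congressional Black Caucus test.
```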

So this step is not easy, and one thing I would say is I don’t stand on a soapbox, trying to say I’m more ethical than anyone else… I am cowed every single day by how complicated this stuff is. I just feel like we have to have these conversations.

I appreciate your perspective there. I agree that the discussions are complicated, because oftentimes, immediately after I have that conversation, people are like “Oh, well, we’ll remove the gender column in our dataset” or whatever… But if there are 1,200 other features, who is to say that the model can’t infer gender from those other features? So it’s not just a “take all the sensitive data out” sort of thing.

[32:04] Yeah, and ZIP code. My god, there’s no better predictor of your race than your ZIP code. And there’s no better predictor of your health outcome than your ZIP code. Not even your genome. So there is a way in which people could say very disingenuously, “Oh, well, we didn’t include race. Race isn’t a factor. We can’t do that, it’s a protected class… But we just chose ZIP code.” [laughs] So this is why we all need to be educated about these things. The business people need to be educated about proxy data, and data scientists need to game out and scenario-plan some of this stuff, or at least be part of that conversation… And we have to get past virtue signaling and actually into some real methodologies that people can get behind.
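One practical check, sketched here on synthetic data, is to try to predict the removed sensitive attribute from the features you kept; if a model can recover it, dropping the column didn’t remove the signal. All names and numbers below are illustrative assumptions:

```python
# Sketch: detect proxy features by predicting the removed sensitive attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic data in which ZIP code correlates strongly with the
# (removed) protected attribute, as it often does in US datasets.
protected = rng.integers(0, 2, size=n)
zip_code = protected * 10 + rng.integers(0, 3, size=n)  # strong proxy
income = rng.normal(50, 10, size=n)                     # unrelated feature
X = np.column_stack([zip_code, income])

# If this score is well above 0.5, the "removed" attribute is still
# recoverable from the remaining features - dropping the column didn't help.
score = cross_val_score(LogisticRegression(), X, protected, cv=5).mean()
print(f"Protected attribute recoverable with accuracy: {score:.2f}")
```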

Monitoring for bias, at the very least.

Yeah… And that’s hard too, actually, because who wants to be liable for that?

So as if this isn’t complicated enough, trying to take all of this into consideration, we now have the reality of regulation coming into it. Obviously, in Europe you have the General Data Protection Regulation, which we call GDPR for short… And when you throw that into the mix with all the other complications of trying to be ethical in your use of AI, how does regulation impact that? It seems like there’s quite a balancing act that a practitioner is trying to manage through this process.

Yeah… I mean, GDPR is really interesting; “interesting” is probably a diplomatic word… I will say, I am a huge fan of GDPR as a philosophy. Yesterday, as a matter of fact, was the 70th anniversary of the UN’s Universal Declaration of Human Rights, which came out of World War II. Eleanor Roosevelt was involved in crafting that, and the whole point really was to protect the civil rights of individuals - protect their rights from unreasonable search and seizure, and from discrimination, and disenfranchisement, and actual physical harm - all these things, coming out of the Second World War… And GDPR is really built on the Universal Declaration of Human Rights, but from a digital standpoint: we should be in control of our own data, we should know when algorithms make decisions about us and why those decisions were made, and we should be able to contest them.

So from a philosophical and historical viewpoint, it’s critically important. However, most of us experience GDPR in the weeks and months leading up to May 25th of this year as an onslaught of horrific opt-in email, and then not being able to get to a couple of websites that we usually frequent, and not a whole lot more than that.

There’s theory and practice, there’s the fact that GDPR and its enforceability is a bit of a grey area for global companies; if you’re global, of course you have to comply, just in case people do wander into the EU… But fundamentally, with regulation around technology, it is always so far behind the reality of the technology. We’re still literally in the wake of the 2016 election, we’re still literally grappling with “Is Facebook a magazine, or a magazine stand?” [laughs] That’s the law this is based on. So when you think about it in those terms – I mean, yes, there does need to be protection. What protection? I am not an expert on that.

As we start to come to the end, I wanna pose a final question, and I’m trying not to scope it too big… I know in this paper that you’ve just put out, you finish up by taking practitioners through how to build up their playbook. With that in mind, maybe you could just give us some pointers or some starting tips on how to start that process - recognizing that our listeners should definitely go download the playbook you’re offering on how to build their own playbook… What are some good finishing points you can leave them with, to get started on that process?

[36:05] What I’ve published is really a meta-playbook, right? It’s a playbook for a playbook, as you just said…

Fair enough.

Part of that is that as an analyst firm, we publish our research for free, as a service to the industry. This is really intended to help people think through the issues that they need to think through in order to do what they need to do… And of course, I’d probably be beaten around the head and shoulders if I didn’t say that I’m more than happy to help with that if people need it… But there are five areas that I think are really critically important. The first is looking at your business strategy - moving from optimizing existing processes to actual business model innovation and customer experience, and using intelligent systems to enable those things.

In data science, we’re moving from an exotic specialty within organizations to the ability to scale. With product and service development, we’re moving from kind of reactive - taking in all the signals about what’s happened in the past - to anticipatory, trying to anticipate what’s happening. We’re finally getting to what we’ve been promised for the last 20 years around the agile enterprise.

From an organization and culture perspective, we don’t talk about this enough, but we’re moving from a hierarchical to much more dynamic organizational culture… And when you have agile development in an organization and an agile mindset, it really changes the way people work together, and some people don’t like that very much, and some people are highly empowered. That makes a lot of difference in terms of how successful AI can be. One major piece of that is you have to have the willingness to fail, and fail fast… And that doesn’t mean move fast and break things, because that’s probably a relic of the last ten years, but it does mean actually the ability to move in tandem, very quickly, learn from mistakes, and keep moving, because that’s just the essence of these systems.

And then finally, it’s around ethics and governance. We’re not in the “anything goes” era anymore. We’ve seen in the last year tremendous stories about what happens when we don’t pay attention to these issues. We do have to start thinking about the ethics and the customer experience of AI in a much more rigorous way, and as we talked about earlier, that’s not the easiest thing to do, but at least there’s some early thinking in here about how to start to frame those conversations internally.

I really appreciate it. I love what you’ve done with this. For our listeners, we will have a link to the AI Maturity Playbook: Five Pillars of Enterprise Success in the show notes. Susan, if people read through that and they wanna engage you, so that you can come in and help their organization, how would they do that? How would you like people to reach out to you?

Yeah, I’d love to hear from people. Most directly, you can email me at susan@altimetergroup.com. You can connect with me on LinkedIn - I’m Susan Etlinger on LinkedIn, and @setlinger on Twitter. I’m easy to find.

Great. Thank you very much for coming on the show. This was a great conversation. I so wish I had heard a conversation like this before I was getting started in industry… I think you’re really helping some people that are still trying to get in and get their organizations involved in this, and thinking about it the right way… So thank you so much for coming on the show.

It’s my pleasure. Thank you both so much for having me.

Thank you very much, and Daniel, I will see you in the next show.

Bye-bye.
