Practical AI – Episode #65

Intelligent systems and knowledge graphs

with James Fletcher, principal scientist at Grakn Labs


There’s a lot of hype about knowledge graphs and AI-methods for building or using them, but what exactly is a knowledge graph? How is it different from a database or other data store? How can I build my own knowledge graph? James Fletcher from Grakn Labs helps us understand knowledge graphs in general and some practical steps towards creating your own. He also discusses graph neural networks and the future of graph-augmented methods.


Sponsors

DigitalOcean – The simplest cloud platform for developers and teams. Whether you’re running one virtual machine or ten thousand, DigitalOcean makes managing your infrastructure easy. Get started for free with a $50 credit. Learn more at do.co/changelog.

AI Demystified (FREE five-day mini-course) – Get an introduction to the most important concepts, types, and business applications for AI and Machine Learning. This course is 100% free.

The Brave Browser – Browse the web up to 8x faster than Chrome and Safari, block ads and trackers by default, and reward your favorite creators with the built-in Basic Attention Token. Download Brave for free and give tipping a try right here on changelog.com.


Transcript



Welcome to another episode of the Practical AI podcast, where we make artificial intelligence practical, productive and accessible to everyone. I am one of your co-hosts, Chris Benson, I am principal AI strategist at Lockheed Martin, and with me today, as usual, is my co-host, Daniel Whitenack, who’s a data scientist at SIL International. How’s it going today, Daniel?

It’s going great. It seems like the past week or so has been the week of messy data for me, so I’ve been dealing with a bunch of missing rows and weird data issues, it seems like, for the past week, which maybe that’s typical for every person in AI… And everyone’s like “Oh, that’s my week every week”, but it seems particularly to have hit me this last week. What about you? You’re at GTC, right?

I am. I’m at NVIDIA GTC, which is their GPU technology conference in Washington DC. It’s going on now, although right now I’m hanging out in the hotel room, so we can do this… But a lot of fun. I came to Washington at the beginning of this weekend for the AlphaPilot Race. We’ve had a recent episode on AlphaPilot, and that was the second of four. Super-cool doing that. I had a lot of fun. I did some various things on stage… And then today, at GTC, I’ve got a session coming up that I’m leading. It’s kind of a fireside chat where I’m both moderator and panelist together, with a couple of other really smart people.

Yes, that sounds great. I hope that maybe some of that will be available at some point, where people can access it.

Yup. I think they put it all online afterwards.

Awesome. If you want to follow up on that, or are interested in other things related to NVIDIA, you can definitely connect with us on our Slack channel. If you go to Changelog.com/community, you can join us on our public Slack, and/or on LinkedIn, and ask some of those questions and follow up on guests, and all of those different things.

Well, today we’ve got a treat. We have a guest by the name of James Fletcher, who is principal scientist at Grakn Labs. I think we’re gonna talk all about intelligent systems and knowledge graphs in the minutes ahead… Welcome to the show, James.

Hi, guys. Thanks so very much for having me along.

[04:01] I noticed on your LinkedIn as we were prepping for the show - it said a couple of things. The first one, it says that you’re presently leading research on machine intelligence and cognition at Grakn.ai, but it also – and anyone that listens to this show much knows I’m an animal nut; I just own that moniker… It says that you are an entrepreneur with a background in computer vision for automated veterinary diagnostics. Before we got into the main topic, I just wanted to ask you about that, if you could take just a second as a tangent and tell us what that means.

Yeah, absolutely. That was quite a fun project. That was my first foray into machine vision, which actually started when I was studying. I was studying general engineering at university, and ended up in this specialization in machine vision… And I really didn’t see that coming. I always thought I was gonna head towards mechanical engineering, or something like that.

Then when I saw the capabilities that were coming out in machine learning at the time, I was like “Okay, wow, this is really good stuff. This is disruptive. You can really do something new with this, and no one’s using this in industry - that’s clear.”

I was studying under professor Andrew Zisserman at the time, who’s quite a big name in computer vision, and we’d gotten on well, and coming out of that course I said to him “Is it okay if I look at actually commercializing some of these algorithms? This stuff is clearly enough to warrant a whole company around it…” So off I went, and started doing that. That was actually a family business. My dad is also an engineer, so the two of us decided “You know what, actually let’s give this thing a shot.”

How was it – because I know the transition of research out of university into the commercial world can be kind of an interesting journey… Was that awkward, in trying to convince the right people?

That’s a good summary of the journey…

Awkward, you mean…

No, I wouldn’t say it was awkward, but we weren’t knowledgeable on IP, and all of that kind of thing. But at the end of the day, it was released open source by the university. That was actually really pretty trivial. But that actually formed – that was an interesting conversation also, because it had been implemented and released open source in MATLAB, but that wasn’t actually commercially useful to us… So that was a rewrite job from the start, to put it into Python, so that we could actually productionize that.

And then it was really happenstance that put a lot of things together for us. We had these generic algorithms and we wanted to find a place to use them… And as a family - actually, there’s a hobby farm involved here, which my parents have… And we happen to have connections with the veterinary college nearby, so we went to them and we said “We need a vertical. We need a specific task that we can hone in on to actually prove the usefulness of these algorithms and what they can do.” So we were looking at veterinary science and they said “Yeah, that’s exactly what we need. We don’t have anyone who’s actually been able to help us at the university do this stuff at the moment.”

So we launched this whole research effort with them.

What was interesting actually as that developed was – this is a lesson in being an entrepreneur, I guess… Is that the core value of the business actually moved sideways from the AI algorithms that we were working with, from the machine vision, and into the actual hardware and robotics that we needed to actually fully automate the process. Because it’s all very well having a machine vision algorithm that automates the skill of looking through a microscope, but if you don’t have a machine that puts the microscope slide on the microscope, essentially - I’m really simplifying it, but I’m sure you got the idea - then how many samples can you actually run? What’s the actual improvement you get through that whole system?

So actually that was the area that was much harder. Once you have an image on a computer, you’re kind of laughing, but getting to that point was a little bit more tricky. But yeah, the end goal was actually trying to control parasite burdens in animals, particularly grazing livestock… But that translates sideways actually into human health, because the rough statistic is that two billion of the world’s population actually has this parasitic worm infection. There’s a number of different reasons why you might wanna work on this particular problem.

[08:18] And there’s a lot of samples to run. [laughs]

There’s a lot of samples to run, exactly. You hit it in a nutshell.

Well, that’s pretty fascinating… And just as a way to close that off - I run an American nonprofit charity called The Animal Institute, which brings technology like AI and computer vision and such to solve problems in animal welfare… So if you ever have any interest in discussing these topics further, I definitely have a playground to play in.

Well, absolutely. It sounds like we should definitely go there.

I was just thinking while you were talking about it - the application is definitely interesting and valuable, but I also think it illustrates… I get asked all the time, and maybe you do as well, like “What should I start working on to get into machine learning, or get into AI? What kind of problems should I start looking at?” And I think the best thing that you can do is start working in an area where you have some connection, or that you’re passionate about. For you, this was a connection between what you studied at university and worked on in research along with your family, in engineering, along with this hobby farm, and the connections that you had with the veterinary school… So it made a lot of sense to go into that vertical.

That’s what I think people should consider - just try something out that you’re passionate about, because those are usually the things that you would stick with long enough to learn and to experiment and to level up.

I totally agree with that. I think that’s a really good point. Because what you’re really saying there is that you’ll apply yourself better where you’re motivated, right?

Yeah, definitely.

Not just in machine learning, but everything. So if you’ve got that motivation, the more motivation you can summon and put into one place, then - absolutely. You’ll double down on it. The passion will get you through the hard times, right? When you’re missing all those rows in your dataset.

Yeah, for sure. Thanks for the extra motivation this week.

I was gonna say, this has turned completely into a motivational show, totally unexpected… And we haven’t even hit the main stuff we were expecting to talk about.

No, there you go.

Well, speaking about that - how do you get from robotics and microscope slides to knowledge graphs? What’s that journey like?

Well, unfortunately I don’t have some twisting rollercoaster to tell you… Only that when I wanted to move out of doing the technical work on that project, and I was looking around for the next challenge, I suppose one of the things that I really like to be is impact-driven in terms of the choice of where I want to work. I wanted to see somewhere you get that value actually deployed - you could see that with the previous project, too. You could see where you were gonna actually make some impact… And I looked around at all the roles and had this really great conversation with Haikal Pribadi, the CEO here at Grakn. We had a really over-excited conversation when we first met, where he was explaining to me all of the ethos about Grakn, and the vision that the company has, and I was pretty sold to work here, straight off the bat from that conversation.

So really just to pivot - his ethos is to take on people that have demonstrated themselves within the scope of what they do, not necessarily that they have to be people who have worked on knowledge graphs, or graphs at all, in the past. He’s very open-minded about which field you’re coming from… He’s coming from robotics himself actually, so there was a bit of a resonance there.

[11:41] Cool. Well, maybe you could just define – so if I go to the Grakn website, which is grakn.ai (we’ll put it in the show notes), you talk about a couple things, which you’ve already mentioned, and I think it’d be great to dig into those terms a little bit more. One of the things you mention is intelligent systems on the website, and then you just mention knowledge graphs. So maybe you could start out by just sharing what Grakn means by intelligent systems, and what sorts of intelligent systems people are developing out there.

Yeah, absolutely. So the terminology that’s being used at the moment is an interesting and kind of hot topic of its own, and naturally, you’re gonna get a Grakn-biased spin while you’re talking to me… But the general ethos - I think it’s better to start with knowledge graph…

It’s good if we also start with how we describe Grakn, and what that does for people. Grakn itself is a database, and typically, when you’re talking about knowledge graphs, that’s what you’re talking about - you’re talking about some sort of actually large store of knowledge. Now, a knowledge graph itself is essentially totally synonymous with a knowledge base, which would be the mathematically correct terminology, that’s been abused on the web a lot for other things. So we tend to go with knowledge graph; it’s a little bit sexier, and it also immediately gives someone without experience in knowledge bases an idea of the shape of the data, which is a graph, in the computer science sense.

But what we actually mean by knowledge graph as opposed to just graph - so there’s all sorts of different graph formats all over the place… But what we’re trying to build here is a system which takes you from – you wanna make that leap from a graph full of data to a graph full of knowledge.

Yeah, I was just gonna jump in and say I think that’s maybe the part where I struggle… I think a lot of people have dealt with databases, and maybe some people are familiar with graph structure data, like “Oh, I’ve got this node, which is a person, and another node, which is another person”, and they’re connected by – I think the terminology is some edge that is like this person is friends with this person, or something like that… When does a database or graph data go from being just a database to being a knowledge graph? What’s the idea around that?

Yeah, so the way that we built the system up is “How can we capture all of these different kinds of knowledge?” So what we have is we’ve built a knowledge representation system. Everything that’s in Grakn is actually built on top of a graph database. That’s actually the start of the innovation. I think that helps people understand what we’re doing. So if you start with a clean slate and you’re gonna build a project, we started with a graph database, and then we’ve built other things on top of that.

Can you talk a little bit about what the different – when most people probably think database, they’re probably thinking of a relational database, kind of more the classical Postgres, and those kinds of databases. As you explain here, could you differentiate between a graph database and a relational database, so if people are not already familiar, they can make that jump?

Yeah, exactly. As we were already talking about - we’ve got a graph in the computer science sense, as opposed to in the X/Y plot sense, in that we’ve got nodes and edges interconnected. So in a typical graph, a node might represent anything. For instance, I like your example - from one node which is a person, to another node which is a person, you’d have “has friends” as the label of the edge in between those two nodes, right? So what we can do is rather than – a relational database forces you to store everything in tables. That’s what you’ve got. You’ve got a set of filing cabinets, and each file in those respective cabinets may have a reference written on it that links you to a file in another cabinet. That’s the kind of structure of the data that you’ve got available to you.

But what we find is that as soon as we’re dealing with data that’s more representative of a network, then dealing with it in those kinds of tables gets really messy, really fast… Because as soon as you’ve got one thing which is connected to eight other things, in eight different file cabinets, and all of those are also connected to eight different things, you get into a big mess with that starting structure.

It doesn’t scale well there, laterally.

[16:03] Exactly. The idea is that when you’re actually trying to build some application with those things, the complexity that you as the user of the database have is enormous. Suddenly you have to try and control this structure that wasn’t really designed for the data that you have. So then you go a layer up and you say “Okay, now I need a graph structure to actually more naturally represent my data.” So that’s where graph databases are born.
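To make that concrete, here’s a small hypothetical illustration (not from the episode) of why multi-hop questions get awkward in tables but stay natural in a graph query language like Graql; the table, schema and names are all made up:

```python
# Hypothetical friends-of-friends lookup. In a relational store, every extra
# hop through the network costs another self-join on the friendships table:
sql = """
SELECT f2.person_b
FROM friendships f1
JOIN friendships f2 ON f1.person_b = f2.person_a
WHERE f1.person_a = 'alice';
"""

# In Graql (1.x-style syntax), the query is just the pattern you're after -
# two friendship hops - with no join bookkeeping:
graql = """
match
  $a isa person, has name "alice";
  ($a, $b) isa friendship;
  ($b, $c) isa friendship;
get $c;
"""
```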

When you say “more naturally”, other than that it reflects the relationships between the data very accurately, are there any other advantages for going graph, if somebody is trying to make that decision today, and they are looking at that? Maybe they’re looking at Grakn… What are the benefits of going graph database versus relational database?

I think you kind of said it in a nutshell. The idea is to be able to naturally represent network data as it is.

Is it easier to get to the data that way, without having to write giant classical SQL queries?

Exactly. We go a level more natural, again, when we actually come to the knowledge graph that Grakn builds on top. So once you’ve got your data in a graph form, now you want to be able to concisely refer to and search your data and reference what you’re looking for.

The major innovation - I would say there’s two major parts that you need to understand to figure out what Grakn is and why it helps you… The first thing is we’ve got this knowledge representation system, and we have this flexible model – I don’t think we wanna talk in technical depth on all of the intricacies of that…

Yeah, yeah.

You can basically make entities, relations and attributes. We make these three characters that you have in the story of building a Grakn schema… And the entities are things like people, things like companies, even things like abstract concepts in the world. But then when someone references an entity, you immediately know roughly what they’re talking about. Relations are the kind of glue that sits in between these things. So that’s what you would use as edges in the graph that we were talking about before. But relations are probably the most standout concept in terms of what we do, because these relations allow you a huge, huge volume of flexibility.

That is to say, not only can I have a friendship between two people, and say that person A is friends with person B, but I can say that they’re also friends with person C, person D, person E. I can do that with one relationship. We used to know that as an edge. So in this case, what we’re saying is these relations are hyper-edges… And you can see that immediately - we’re starting to introduce big concepts at the low level of the structure that we then define.
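As a sketch of what that looks like in practice, here’s how such an n-ary friendship might be defined and inserted, using Graql 1.x-style syntax as strings in Python (the names are illustrative; check the Grakn docs for the current grammar):

```python
# A minimal schema: people play the `friend` role in a friendship relation.
schema = """
define
  name sub attribute, datatype string;
  person sub entity, has name, plays friend;
  friendship sub relation, relates friend;
"""

# One relation instance binding three role players - a hyper-edge,
# rather than three separate pairwise edges:
insert_query = """
insert
  $a isa person, has name "Person A";
  $b isa person, has name "Person B";
  $c isa person, has name "Person C";
  (friend: $a, friend: $b, friend: $c) isa friendship;
"""
```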

We wanna upgrade how you can represent your domain. We wanna give you this toolbox, which we’re calling the Schema in Grakn, that lets you model your domain in all of the complexity that it has, and that then means that you’ve now got this format, this structure that can govern your data, that can look after your data for you. It can make sure that you haven’t done anything that’s logically invalid. It can make sure that everything is cohesive within your database. So when you start adding facts, you now know also what the context of those facts is, because we heavily label all of the elements that go into the graph.

For instance, you could insert a company, a charity and a university. All of those types that we describe inherit from organization. What that now means is when I want to search my data, I can search for either companies, for charities, or for universities, and I can search for those individually, or I can just ask more generic questions and I can say “Just tell me about organizations in my data.”
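A hedged sketch of that type hierarchy in Graql 1.x-style syntax (illustrative only): the three subtypes inherit from organization, so a single query over the supertype returns all of them:

```python
schema = """
define
  name sub attribute, datatype string;
  organization sub entity, has name;
  company sub organization;
  charity sub organization;
  university sub organization;
"""

# Matches companies, charities and universities alike:
query = "match $org isa organization, has name $n; get;"
```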

[20:10] So what we’re trying to do there is to get this really natural way to actually interact with your data, so that you are using your own domain terminology to actually access what you’re looking for, rather than having to sort of imagine “What are my nodes, what are my edges in my graph? How do they fit together?” Instead, we try and bring that to the user and reduce the burden on them when it comes to assessing what’s going on in their knowledge graph.

James, I appreciate where the conversation has landed, in that there’s natural ways of representing your data, and that can be modeled well on top of a graph. I’ve tried graph databases in certain scenarios, with more or less success, and some have been really useful, but something I always find is that it seems really hard to build a “knowledge graph”, in the sense that developing your schema can be hard… Because you may know what entities you have, but not – there might be multiple ways to represent them, or you may have just like a bunch of unstructured data and you’re not totally sure what entities to choose… So how do you recommend – if people are interested in creating this sort of representation of knowledge, where should they maybe start thinking about the data that they have, and how to develop a schema?

That’s a really great question. I don’t have a short answer, but essentially, that has been a huge part of what I’ve been doing here at Grakn, and what we do overall with members of the Grakn community. We try and help people to actually understand the principles of what is an entity, a relation and an attribute, how do they best fit together… And actually, what’s super-interesting about that is that that’s a really great meeting of philosophy and technology, which I find incredibly interesting.

Essentially, my thought on this is that we now see knowledge engineering and knowledge representation as entire careers that are actually coming around now. You actually have someone who’s a specialist - an ontologist, I’ve also heard them called. The body of knowledge on the best way to do this is not yet settled upon, and we have our own ways of doing that here at Grakn, and those ways and how we think that things should be done inform the design decisions that we make in the language that we provide for the knowledge graph.

[23:56] At the moment – it’s actually been on my to-do list a long time to actually write some best practice for knowledge representation and building your schema in Grakn. We have snippets here and there, and we have examples here and there. It’s very difficult to give really generic guidance, but we do have some that we would give out. That’s a little bit long-winded for here, but maybe we can link to that in the future.

Yeah, no worries. I actually want you to extend that just a little bit; I’m curious, what can you do with a knowledge graph that you wouldn’t be able to do if you didn’t have one, as you’re talking about design, and thinking about what best practices are? What comes to mind?

The main thing that anyone who’s interacted with me in a professional context will know is that what I harp on about is trying to get to the point of true-to-domain modeling. What I really want is to see people building a knowledge graph where they start with a schema that one person could build, show to their colleague, and their colleague will immediately understand what elements of data are where in the knowledge graph.

That makes sense.

Yeah, and just to make it super-clear for listeners - when you’re talking about the schema… We gave the example before of “Person is friend with person”, so there’s a person-type entity in this knowledge graph. But there could also be like country type entities, or organizations, or different metrics, websites, resources - all sorts of things. That’s the sort of schema or ontology that you’re talking about, right? The definition of “What things are we going to put in our knowledge graph and how are we gonna label them?” Is that the best way to think about the schema?

That is absolutely correct. And what I think is also really nice is to make some analogies to object-oriented programming (OOP). Anyone who’s familiar with OOP - and there’s a lot of people out there; I imagine you have quite a lot of listeners who are familiar with OOP… Then what we’re saying here is we’re defining the class. We define a class - those are our schema elements - and then when we actually insert data, we’re inserting instances, or instantiating objects of that class.

And just a quick interjection - for those who don’t know what OOP is, he’s talking about object-oriented programming; it’s a technique for representing real-world concepts in code as well. Keep going… I just wanted to let anyone know that didn’t know that.

Yeah, absolutely. So the idea is that all of the elements that we would have - as you say, we have this schema, and you can update that over time, but that is the map for your data. That tells you what things are present in our knowledge graph, and how can they be connected to one another.

For instance, we can immediately say in that example where you had a person entity and also an organization entity - we can then also define the friendship relation that you talked about, and we can say “Okay, a person can be in a friendship with other people.” That makes sense. Can a person be in a friendship with an organization? Now, maybe that’s philosophically debatable, but I would probably say the answer is no… In which case, that should not be permitted by your schema, and you should write a schema that disallows that. And what that means is that takes some weight off your shoulders, because when someone tries to add some piece of data inadvertently that says that there’s a friendship between a person and an organization, then Grakn can automatically reject it and say “No, that’s rubbish. That can’t exist.”
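For illustration, a schema enforcing exactly that constraint might look like this (Graql 1.x-style syntax, hedged): only person plays the friend role, so a friendship involving an organization fails validation at write time:

```python
schema = """
define
  person sub entity, plays friend;
  organization sub entity;                    # note: no `plays friend` here
  friendship sub relation, relates friend;
"""
# An insert like `(friend: $someone, friend: $some_org) isa friendship;`
# would be rejected by Grakn, because organization never plays `friend`.
```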

I think maybe there’s a bit of a misconception - and at times I’ve had it myself when thinking about knowledge graphs, and maybe other people too - where there’s the sense that when you hear about “Oh, Google’s knowledge graph”, or something, it’s just like, information is all over the internet, and if you create a knowledge graph, then you just suck in all that information and then you automatically know a bunch of stuff. But there is actually a lot of work in terms of developing a schema that represents the types of things that you’re interested in, the types of knowledge that you’re interested in. It’s not just like an automated thing where you just crawl a bunch of websites and then you have a knowledge graph on a certain subject. Would that be accurate?

[28:08] Yeah, absolutely. You can go at it any number of ways that you want to. You can start trying to scrape information from the internet, but the quality of the information that you get might not be that high in terms of “Can I ensure the validity of the facts that I’ve pulled from that?” There’s plenty of people that are trying to do that, so that would be automatic entity recognition, and this kind of thing.

Our focus is more on building these things from the ground up. If someone’s got proprietary data, or they’ve got a particular dataset, then actually they can realize an enormous amount of extra benefit from just managing the data that they have very carefully, rather than maybe trying to augment it with just any old data from the internet - you can probably take a more targeted approach and just bring in elements where you’re fairly aware of what that information even is, right?

I wanted to delve into a different area, given that we’re an AI podcast… I wanted to ask how is artificial intelligence related to knowledge graphs, and are knowledge graphs a source of data that might be available for AI models, or is there some other connection there?

Yeah, I mean - where to start…? The way we see it is that knowledge graphs are gonna be central to the effort towards intelligent systems, as we’ve put it earlier. That’s our nice way of trying to avoid saying “AI”… To make systems more intelligent than they are today, we want to empower them with as much as we can.

The idea here is much of the world is still using relational databases, and as we’ve talked about before, structurally they present us with some challenges when that format isn’t natural. So instead, what we want to do is we want to actually be able to capture the full complexity of the world, actually capture all of our knowledge in one place, and then be able to present that to, for instance, learning models, for them to learn over it.

But what we also provide is actually the artificial intelligence of the ‘80s, and that is automated reasoning. What we have at Grakn built into the open source core product is an automated reasoner that allows you to infer new data based on the data that you already have and logical rules that you set up, that you know must be true. This is super-interesting, because in the day-to-day we all use our deductive logical skills any number of times, and we essentially just don’t notice, because it’s so second nature to us. But if you actually try to point to any tools that anyone technical is using right now, about the only thing that people have heard of - and they did like a week on it at uni or something - is Prolog; that’s about the only tool out there for logical programming. And it sounds like something computers should be able to do easily, right? Like a small set of facts, and figuring out a new fact based on a rule just sounds like if-else blocks, right?

But when you’re actually trying to scale that and make that work, and be able to have any number of possible rules that you might want to write, and bring that into the database level - that’s when things start to get a bit interesting… Because now we can say “When A and B and C are true, then D is true.” And what’s nice about this is that your database then, whenever you ask for something that fits the bill for D, is gonna give you that regardless of whether or not you ever even stored it in the database.
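As a hedged illustration of that “when A, B, C then D” shape, here’s what a rule might look like in Graql 1.x-style syntax; the transitive-location domain and role names are hypothetical, not from the episode:

```python
rule = """
define
  location-is-transitive sub rule,
  when {
    (located: $x, locating: $y) isa location;
    (located: $y, locating: $z) isa location;
  }, then {
    (located: $x, locating: $z) isa location;
  };
"""
# Once defined, a `match` for locations returns inferred instances too,
# whether or not they were ever explicitly stored.
```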

[31:52] I just had a – it’s almost a tangent of a question… Talking about Prolog and using automated reasoning, which was kind of before the days of machine learning as we know it today, I just wanted to ask - is there any tie-in maybe today…? I know you were saying that you’re kind of including that in your approach… But today I guess if we were going to tackle that with the current set of technologies, we’d probably use things like generative adversarial networks along with natural language processing to try to create things new from what you already had. Is there any tie into that? And just as a random side question - is there any similarity maybe in the two?

Well, great question. I think our ethos is when you have facts, if you can write a rule that definitively tells you that a new fact must be true based on what you have, that’s absolutely fundamental. Where you can use that, then you should use that… Why is that true? Well, because firstly it generalizes perfectly - for any new set of A, B and C, you know that D will be true. And secondly, it’s explainable. When you see D, then you can say “Well, why did I see D?” and the database can tell you “Well, because A, B and C.”

Now, what’s really interesting - and this is the cross-over space that’s happening right now - is, as you said, how do we see that complementing the other tools that we wanna use? How do we see that complementing any other machine learning approach? Essentially, the border for me - to describe it as well as I can - is that if you are a human approached with a particular problem, you would probably decide whether to use one of two major skillsets that you have: either your logic, how you deduce things, or your intuition.

Essentially, what we need is we need to start figuring out “Okay, when do we need to deduce things logically, versus when do we need to use a machine learning approach which gives us some kind of intuition based on experience?” That’s actually the center of my work here at Grakn - how do we actually build learners on top of a logical reasoner, on top of a knowledge graph, in order to get to the next level of intelligence of our machines? How do we make an iterative process between those two, that ingests new facts that have been learned, and then reasons over them? Or how do we reason over facts and then learn from them? This is very much an unsolved region, and it’s super-invigorating at the moment to be in that space.

And what do you think are the sorts of tasks that are low-hanging fruit for learning on top of a knowledge graph? For example, one thing that comes to mind is question answering sort of tasks, or something like that. Are there other tasks that have been explored in AI maybe in a non-knowledge-graph way, that you think are particularly relevant to explore on top of a knowledge graph?

Absolutely. As I said, that’s actually kind of the whole remit of the research division here at Grakn - to try and fulfill those end user problems. And what are they? Well, I actually wrote a whole blog post on all the problems that we see there… So you’re absolutely right, question/answer systems - that’s what those ‘80s logical-reasoning AI systems were all about, “We’re building expert systems”, but they didn’t really work, because you had to handcode everything. Well, now we can maybe use machine learning to derive some of it automatically, and we do question/answer systems. You see that with Google’s knowledge graph, and this sidebar that they have when you type in a search; it may just directly find the thing that you’re interested in, not just links…

But then besides that, we see a lot of applications in, for instance – well, we can talk about knowledge graph completion. That’s where maybe I want to find new links between elements of my graph that I’m interested in. For instance, if I ingest a lot of biomedical data, then maybe I want to try and predict new links between a drug and a disease. I wanna infer new treatments. Or maybe I want to enrich my whole graph before I try and make those predictions as well, so I can find other relations/interactions between genes, proteins etc.

[36:11] But then there’s other tasks on a totally different spectrum… What about NLP systems and computer vision systems when you apply background knowledge to them? Well, as humans, when we approach understanding a person who says a sentence, we have behind us however many years we’ve been on the planet of experience of hearing people say sentences. We often don’t really notice that, but we also have more than that. We also have our knowledge of the world. We often hear someone say something and we mishear what they say, and what we heard sounds ridiculous, given our knowledge of the world, so we correct ourselves, or we nudge them and we say “Did you really just say that? Because that doesn’t align with my understanding of the world.”

That’s what we hope the knowledge graph can do and we’ve had a number of conversations with people who want to improve, for instance, their company’s customer service platforms, where they know the body of knowledge, they know quite a lot about a customer, they know a lot about their products and the kind of things that they offer, and if a customer says “My connection is broken”, can we immediately infer what they’re talking about? Because we naturally know products that that customer has. Okay, they had a home broadband connection with us, so they’re probably talking about that.

And machine vision, as we’ve already talked about a little bit from my past - then often we just present a learner with a flat image and try and get it to guess what’s in the image based just on the pixels. But again, if the learner starts to see things that are nonsensical in the image, or things that are often seen together, that would be a big help for it, to be able to understand and identify when it might be wildly wrong based on the other things, the surrounding context of the problem that it’s trying to solve.

So you started to get into a little bit of the details of where you think certain tasks like computer vision or other things could be augmented by a knowledge graph, and it seemed like in some of those cases it was a matter of like “Okay, you have the image and you have this other information that goes along with the image, that helps you reason about the image or predict something.” Is that where you see the near term of knowledge-graph-augmented AI – I don’t know what the proper term for that is… But is that where you see the near term?

[39:38] I know that there’s also people exploring or doing AI with graph structure data itself, rather than just kind of extracting features from the graph as new features in a model, but actually using graph-structured features, or sub-graphs or other things in AI models. Are you familiar with that at all? How do you see maybe as a person who says “Okay, well this sounds cool. I’d love to try to augment some of my AI systems with knowledge from a graph…” Where might they start looking in terms of methods and next steps?

Right, great question. I totally agree, what we don’t wanna do is just stick with the status quo of squashing data as inputs to machine learning pipelines. That’s the status quo at the moment. Our data is stored in these filing cabinets, so what do we put into our machine learning model? Well, it’s data that looks like filing cabinets. And what do we get out? Surprise-surprise, right?

Yeah, and I think it’s probably confusing to people sometimes - it has been for me - where like TensorFlow talks about a graph; it’s not a graph of the data, it’s more of a graph of the computation and how it’s executed on a certain architecture, or the logic of that computation… Whereas what we’re talking about here is actually data that is structured like a graph being processed through one of these systems as a graph. It would be different than just putting a tensor in, right?

That’s absolutely true, yeah. That’s one of the fundamentals that makes learning over – well, anything, except just like a matrix or vector representation difficult, is that all of the frameworks are set up to take those things in. And as you say, in the case of these pipelines, the shape of the processing is a graph, but we don’t really need to worry about that compared to the input and output. And as you say, over here we’re saying “What do we do? How do we move from these square inputs to something else?”

That’s actually a big body of work that I’ve been doing over the last year - looking at what are the approaches that have been done around that. Some of the first approaches, which are still quite common, are to do walks through the graph. Like, I’m interested in some particular entity in my graph, so why don’t I start there and then just walk randomly, see what I encounter, record what I encounter, and then maybe use that as a row in a vector or something, and feed that into my model. That’s one way of doing it… But you’re kind of hoping for some serendipity there; you’re kind of hoping that “I’m gonna encounter things in my graph that are important, because I’m just sort of walking around randomly.” Essentially, the approach is randomly walking through the graph.
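A minimal sketch of that random-walk featurization (the toy graph and function are illustrative, not Grakn code):

```python
import random

# Wander from a start node, recording whatever we bump into; the resulting
# sequence can then be fed to a model as a flat feature row.
def random_walk(graph, start, length):
    node, walk = start, [start]
    for _ in range(length):
        neighbors = graph.get(node, [])
        if not neighbors:
            break
        node = random.choice(neighbors)
        walk.append(node)
    return walk

toy_graph = {
    "drug_x": ["gene_1"],
    "gene_1": ["drug_x", "disease_y"],
    "disease_y": ["gene_1"],
}
print(random_walk(toy_graph, "drug_x", length=4))
```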

Okay, so from this, a really nice piece of research came out of Stanford. They called their paper GraphSAGE, or at least the approach was called GraphSAGE. And we actually implemented that here over the knowledge graph, and the idea of that was to essentially not just take these single walks, but to actually look at all of your neighbors, take a random subset of all of your neighbors, but then also look up their neighbors, and their neighbors, and their neighbors, and have this more like spider web shape of the graph that you would analyze. And in some way, without going into all of the technical detail, basically roll that information inwards, towards the entity that you are interested in.

So you kind of gain some information as you move from the outer circumference of a circle inwards. That’s also really nice. So what that’s also doing is still kind of putting your data into a box shape, because you’re still dealing with a tree now. So we’ve gone from a line, which was the walk, to then a tree, and we still didn’t find… What was really difficult about this – so we tried using this, but what it doesn’t manage to capture… Say we are trying to do something really difficult. We’re trying to find a new drug to treat a disease. Now, if we try and do this, if we just look at generally what does a drug look like and what’s nearby to it, and also generally what does a disease look like and what’s near to it, when we then try and match those two things, we haven’t actually looked at any of the common connections that exist between that drug and that disease specifically.

[44:13] We haven’t actually figured out logically what are the paths that actually connect these things. We should probably be interested in those. Those are probably the most important features in this graph. Instead, we’ve just looked at roughly what they look like… And then you end up with just some generic answer, like “Paracetamol treats lots of diseases, because lots of diseases exhibit pain.”

So what we want is, again, a more targeted approach, and that leads us to “No, we have to do the hard thing. We actually have to learn over a graph shape. We actually have to take in graph data.”
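A toy sketch of the GraphSAGE-style sampling James describes, reduced to its shape (this is illustrative, not the paper’s actual update rule): sample a few neighbors per hop, recurse a hop further out, and pool the information back in toward the node of interest:

```python
import random

def aggregate(graph, feats, node, fanout=2, depth=2):
    # Base case: no hops left, just return this node's own feature.
    if depth == 0:
        return feats[node]
    # Sample a random subset of neighbors, recurse outward by one hop,
    # and pool their aggregated features.
    nbrs = random.sample(graph[node], min(fanout, len(graph[node])))
    pooled = sum(aggregate(graph, feats, n, fanout, depth - 1) for n in nbrs)
    pooled /= max(len(nbrs), 1)
    # Toy combine step: mix the node's own feature with the pooled one.
    return 0.5 * feats[node] + 0.5 * pooled

graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
feats = {"a": 1.0, "b": 2.0, "c": 3.0}
print(aggregate(graph, feats, "a"))
```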

I’m kind of thinking about natural language processing, because that’s the world I live in, and some of what we’ve learned recently is that it’s very useful to have your algorithm learn the proper representation of text, taking into account the context around just a single token, for example, in order to actually learn a good representation of text for a certain task. It sounds like what you’re saying is it would be useful to do similar things for graphs, in that we need to learn how to represent graph-structured data in a neural network, because it might not be – if we just take all the nearest neighbors and put them in standard row structure, and use that as a representation, then we might miss that actually the predictive thing is beyond the nearest neighbors, and like a bunch of links away. Even though it’s not a nearest neighbor, that’s the thing that’s indicative of the thing that we’re trying to predict. Is that kind of along the right track?

Absolutely. What I see you describing there in NLP is definitely what we’re aiming for here. And not just in graphs, but I think in the industry in general. We’re now looking beyond curve fitting, at how we move beyond where we are right now to a point where the machine is actually understanding, where it actually learns to understand what’s going on. We already talked about that with NLP based on a knowledge graph. It understands the context. You were just talking about that there, context. And the machine vision problem - also understanding the context of what’s actually in the image. All of these things mean that the learner can not just sort of learn by rote, or learn by exact examples, but can actually understand what’s going on.

What’s really interesting in a graph is that you have exactly that. You might have one particular feature that you find - like, if I see some particular thing that’s in some particular way related to what I’m interested in, that’s a huge indicator. But you might also just see a general structure that occurs - that when I have these five elements, these five entities all connected together in a particular way, and they all have particular types, that is a very typical structure for a really effective drug. Those combinations come up again and again, but in like a generic sense, and maybe we wanna learn that; we wanna learn some kind of structure.

So then we were faced with the problem of “Okay, we actually need to learn over graphs.” Luckily for us - we don’t have the budget and the manpower to do these huge research efforts ourselves, but our neighbors over here in London, DeepMind, released a paper last year, and they also released a library to support what they were doing, where they generalized a lot of the concepts of graph learning and how to do learning over graphs in this really neat way. Given that they were acquired by Google, it makes sense that they also figured out how to do this in TensorFlow.

[47:47] So what they’ve got there is a pipeline that now actually lets you input a graph into TensorFlow as the data, and get that same graph back out as an output, but with updates made to every element of that graph. So that means that essentially what we can use is we can use that as a little toolbox that allows us to perform any number of different tasks over our graph structure… And obviously, we’ve tailored that here at Grakn to work over the knowledge graph. But what we can do is we can just carefully frame the kind of problem that we have, so that this toolbox can help us to solve that.

And is that the Graph Nets library?

That’s exactly the one, yeah. That’s the one.

Okay. We’ll definitely link that in the show notes as well, because it seems like they have a good usage example, and notebooks and such, that people can play with that.
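For a flavor of that graph-in, graph-out pipeline, here’s a hedged sketch based on the Graph Nets README from around that time (TF1-era API; the feature values are made up, and the exact API may have changed since):

```python
from graph_nets import modules, utils_tf
import sonnet as snt

# A graph as the library expects it: node, edge and global features, plus
# sender/receiver indices saying which nodes each edge connects.
input_graphs = utils_tf.data_dicts_to_graphs_tuple([{
    "globals": [0.0],
    "nodes": [[1.0], [2.0], [3.0]],
    "edges": [[0.1], [0.2]],
    "senders": [0, 1],
    "receivers": [1, 2],
}])

# Graph in, graph out: every node, edge and global feature gets updated.
graph_net = modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([16, 16]),
    node_model_fn=lambda: snt.nets.MLP([16, 16]),
    global_model_fn=lambda: snt.nets.MLP([16, 16]))

output_graphs = graph_net(input_graphs)
```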

So you’ve totally won me over, and I’m looking forward to jumping in and playing with this, and I know Daniel is, too… Can you start walking us through what it is like to actually build a knowledge graph with Grakn? What do you need, what languages do you need to know…? And also, I noticed on the website you talk about – is it Graql [Gra QL]? Am I pronouncing that right?

That’s Graql [Grakell].

Graql, I’m sorry. My apologies.

No, no worries. Yeah, so I can give you the whole overview of what you would do.

Fantastic.

To close down what we were talking about just there, the whole learning approach that we’ve been building, and all of the research that we do on top of knowledge graphs - I’ll emphasize that - we release all of that as code available via our GitHub. Specifically, we have a library called KGLIB. That’s our knowledge graph library for machine learning.

KGLIB is a collection of those projects, and the main one that we’re running right now is Knowledge Graph Convolutional Networks. That’s how we apply those learners on top of both the reasoner and the knowledge-graph-shaped data.

The starting point is how do you actually get a knowledge graph, right? How do I actually get my knowledge graph together? Now, the components that you have there, as you pointed out, is something we should start with. So you have Grakn itself. Grakn Core is released open source, you can download that from GitHub, or install it with a package manager… And that’s a database which is gonna run – you can install that on your local machine, and get it up and running, or put it in the cloud. So you need that back-end service running.

Now, when it comes to actually accessing that, we have three officially-supported drivers at the moment. We have Python, Node.js and Java. We make sure that all of those are up to date and working with the latest Grakn. What’s really interesting there actually is the communication protocol between those clients and Grakn. It’s called gRPC. That’s something from Google - Google’s remote procedure call framework - that has replaced using REST services.

What’s really nice about this, and the actual end goal that that gets you to, is it means that when I’m accessing the database with Python, I get to actually use native Python functions. All I have to do is import the package that talks to Grakn, import the Grakn client in Python at the top of my script. Then I can just instantiate a communicator that will talk to Grakn and make queries to the database just out of my native Python. I can just launch them straight from my application, and it doesn’t feel like you’re talking to a database anymore. It just feels like you’re making function calls, which comes back with information that’s pertinent to your knowledge graph.
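A minimal sketch of what that looks like, assuming the Grakn 1.x Python client API from around the time of this episode (the keyspace name and query are illustrative):

```python
from grakn.client import GraknClient

# Open a connection, a session against a keyspace, and a read transaction,
# then fire a Graql query as a plain function call.
with GraknClient(uri="localhost:48555") as client:
    with client.session(keyspace="social_network") as session:
        with session.transaction().read() as tx:
            answers = tx.query("match $p isa person; get;")
            for answer in answers:
                person = answer.map().get("p")
                print(person.id)
```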

That’s great. And would you use that client tool to help you build your knowledge base? Let’s say that I have a bunch of text data and I’m pulling entities out of it, or classifying that in a certain way to store it as a certain type of entity… Would I kind of be doing that in Python and then push that to Grakn via the Python library? Or are there bulk upload techniques or ways to get data let’s say from relational to graph? What’s the range of what people do?

[52:05] Yeah, absolutely. Great question. Basically, you’re absolutely on the money. The idea is that we give the users these clients in their native language, because that’s their strength; we already know that they know how to speak that, and they get all of the freedom that that language offers. And then, the way that you’re actually interacting with Grakn is through Grakn’s query language, Graql. You can probably see where that name comes from.

So Grakn’s got this query language called Graql, and the idea is that it’s a really concise, really expressive language… And that is your one-stop-shop for how you actually talk to the knowledge graph in terms of your intentions. So if I want to retrieve something, I make what we call a match query; if I wanna insert something, I use an insert query. If I want to insert something wherever I see a particular pattern, that’s a match-insert… I’m sure you get the idea. So you have all of these different ways that you can read and write from the database, and you do all of them in the same way through your application. You’d ask the client, you’d say .query, make this query, and then the response you get back will be the answer - either you inserted something, or you read something back.
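Hedged examples of those query shapes in Graql 1.x-style syntax (the entity and attribute names are illustrative):

```python
# Read: match a pattern and get the results back.
match_query = 'match $p isa person, has name "James"; get;'

# Write: insert a new fact.
insert_query = 'insert $p isa person, has name "James";'

# Match-insert: wherever a pattern holds, insert something connected to it.
match_insert_query = """
match
  $a isa person, has name "Person A";
  $b isa person, has name "Person B";
insert (friend: $a, friend: $b) isa friendship;
"""
```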

Then what we’ve got - we’ve got a repository of examples, so that people can have a look on there. Very typically, people are migrating from either SQL data, or from CSV data, in which case it’s a matter of just writing what we call an ETL pipeline, so something that will just traverse over all of that data that you have and make the appropriate queries in Graql to get that data shifted over into Grakn itself.
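A hedged sketch of such an ETL loop with the Grakn 1.x Python client (the file name, column and schema are hypothetical): walk a CSV and emit one Graql insert per row:

```python
import csv

def migrate_people(session, path="people.csv"):
    # For each CSV row, build an insert query and commit it in its own
    # write transaction (batching would be wiser for real volumes).
    with open(path) as f:
        for row in csv.DictReader(f):
            query = f'insert $p isa person, has name "{row["name"]}";'
            with session.transaction().write() as tx:
                tx.query(query)
                tx.commit()
```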

Now, one of the questions that people ask me really often, and it definitely comes in on our community Slack quite often, is “Can I automatically build my knowledge graph?” We kind of talked about that a bit earlier in the call. The problem is that – it’s possible to automatically ingest a relational database into a knowledge graph, but the problem is you just end up with the same structure that you had in your relational database, but in the knowledge graph; you still end up with something broken, because you need to apply that human understanding that you have of the data that you have in these table formats. You need to say “What does that actually mean? What does my domain look like?”

So what you do is you first – well, it’s an iterative process, of course, like a lot of engineering… But you’re gonna start out by saying “Here’s my schema, here’s what I think my domain looks like. Okay, now when I go over this file, what parts of that schema can I infer from the particular row I’m dealing with right now?”

I guess if somebody wants to get into this - I know we’re both very excited about it, and I’ve learned a lot that I didn’t know before the conversation - where can they go and learn more, and actually start digging into using Grakn and Graql themselves? Any specific links that you wanna recommend?

Well, we have the docs available on our website. People seem to think those are quite fun. There’s also some in-depth examples there; for instance, how to do data migration into Grakn, so you can get that knowledge graph up and started, so that you’ve got something to play with. We then have an examples repository on our GitHub, and also, for those who really like to jump in at the deep end, then the KGLIB repo is quite a good place if you want to see immediately from the top how you’re gonna then do the machine learning over it.

And then I suppose the other thing to majorly encourage is to check out our blog. That’s blog.grakn.ai. We have a lot of stuff there that will give people an idea or give them a flavor of what you can achieve with the knowledge graph and how succinct it could be, to get you motivated to actually move your data over and give it a try.

James, thank you very, very much for coming on the show and just kind of schooling us in all this. It’s been really fascinating, and we appreciate it. Thank you, and we’ll talk to you soon.

Thank you very much for having me, both of you.

