Practical AI – Episode #2
Putting AI in a box at MachineBox
with Mat Ryer and David Hernandez
Mat Ryer and David Hernandez joined Daniel and Chris to talk about MachineBox, building a company around AI, and democratizing AI.
Hired – Salary and benefits upfront? Yes please. Our listeners get a double hiring bonus of $600! Or, refer a friend and get a check for $1,337 when they accept a job. On Hired companies send you offers with salary, benefits, and even equity upfront. You are in full control of the process. Learn more at hired.com/practicalai.
Rollbar – We catch our errors before our users do because of Rollbar. Resolve errors in minutes, and deploy your code with confidence. Learn more at rollbar.com/changelog.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Linode – Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2018. Start your server - head to linode.com/changelog.
Notes & Links
Welcome to Mat and David from Machine Box. It’s great to have you here on Practical AI. I know that Chris and I, when we started thinking about guests for Practical AI, and I was thinking about our slogan or our mantra at Practical AI, which is “Making artificial intelligence practical, productive, and accessible to everyone”, I know the first people that came to my mind were Mat and David from Machine Box. Welcome, guys.
Thank you very much. It’s great to be here.
Yeah, thank you.
Let’s start out maybe with just a brief personal intro from both of you guys. David, why don’t you start by giving us a little personal intro?
Yeah, I’m David Hernandez. My background is in computer science and engineering. More than ten years ago at uni I was studying machine learning and artificial intelligence, but I mostly never used it until recently. In the last three years machine learning was booming, so I started to get back into it - first refreshing my knowledge, and then implementing things. That’s probably how we started Machine Box, in some way.
Professionally, I’ve been developing since I finished my degree; it’s almost ten years of doing distributed systems, websites… My highlight is probably that I worked at the BBC in 2012 for the Olympics. We were delivering the real-time system for all the stats and all the video player data for the Olympics, basically. It was a really nice project. So yeah, that’s it.
Sounds great. Mat, why don’t you give us a little bit of an idea of where you’re coming from?
[03:58] Sure. Hi, my name is Mat Ryer. I’ve been doing computer science all my career, in various forms. I spend a lot of time in the Go community at the moment. I fell in love with Go as a language before it was released as version one. There was a little experimental tag on it in Google App Engine, and I wanted to build something on App Engine… Anything that’s got a little experimental tag is going to grab my attention; it always has. So I jumped into the language quite early, and I’ve just been using Go since then really, wherever I can, and it turns out you can use it everywhere.
I speak at conferences about Go mainly, and also I have a book, Go Programming Blueprints, which is nice because you build real projects in that book. It’s not a book where you learn the language or you just learn theory, you actually build real things. It’s very practical and very pragmatic. That’s why I quite like the way you guys are approaching this podcast, because complicated things can be extremely powerful, but they’re very difficult for people to marshal and get into a shape that they can put into production. That’s really our philosophy at Machine Box, is to give people a head start on that and get them into production much quicker.
And Mat, while we’re at it, what’s the name of your book before we go on?
Thank you. It’s called Go Programming Blueprints: Second Edition.
Okay, thank you very much.
You don’t have to say it in that accent, but it helps.
It sounds much better.
I think it does help. So how did you guys originally meet and how did you start thinking about forming a company together that’s focusing on AI?
That’s an interesting story. A few years back, I was one of the organizers of the first Golang UK - now GopherCon UK, or just GopherCon, I don’t know.
GopherCon UK. Mat was one of the speakers, so I actually met Mat at that conference. He was at another company before, and I was looking for a job - I was a contractor at that time - so I joined the same company Mat was at, and we met there, basically. We worked there for a few years.
Yeah, David has a really unique ability to think very clearly about big problems that are otherwise very complicated, and that’s a key skill for any team to have. If you can bring somebody in that can look at these big, broad problems, like massive scale, planet-scale sort of problems… Like David mentioned earlier, he was part of the team that delivered the software that ran the Olympics. There’s no dry runs of that. You can’t say to everyone, “Guys, like a week before, can we just have another Olympics just so we can test out all the software?” They’ll probably say no.
So having somebody like that on a team is invaluable and it was very natural when it came to looking at machine learning and David’s expertise in it. It was kind of his eureka moment where he said, “You know, we could actually containerize this and deliver it in a way that makes it very trivial for everybody to use machine learning capabilities, rather than having to learn TensorFlow and work in these abstract mathematical models. We could tell some stories differently, we could give people an API that just makes it very clear, and it sort of changes the way you think about the problem a little bit.” It focuses really on the problem that you’re trying to solve, rather than technical low-level machine learning components that you might use to solve it.
[08:06] That’s great. I’ve been a Machine Box user for quite a while… Very soon after you guys launched I was super excited about it, just because of the practicality of the projects. In terms of what Machine Box is actually - as a developer, if I was wanting to use Machine Box to do something, what might that something be and what would be the interface to doing that?
Machine Box is basically – we deliver machine learning models in Docker containers. All you really need is Docker installed on your computer, and that’s available on every major platform - Windows, Mac and Linux. You just docker pull one of our images; there’s a nice API in our images, so you only need to know about HTTP APIs to get started and do, for example, face recognition - that’s one of our most famous boxes. You can add face recognition to your stack in just minutes. So those are the basic tools: Docker, plus a little knowledge of HTTP APIs as a programmer - probably every programmer should learn that skill nowadays - and that’s basically it. You don’t need any other knowledge.
Just to clarify, if I was a data scientist or a developer, or whatever I am, there’s a lot of APIs out there, both from the cloud platforms, like with machine learning, but also other things… Like, if I want to send an email programmatically, there’s a REST API for that, which uses HTTP and JSON… So you’re saying one of your goals is to really make the interface to doing something complicated - maybe like facial recognition or something - as easy as it is to send an email via one of those APIs. Is that kind of a good summary?
Yes, that’s exactly right. Essentially, the machine learning that’s going on inside the boxes is very complicated. Sometimes we mix different kinds of technologies in different ways. If we tried to explain how to do that, it would be very complex, and I think the deployment would be difficult. Even just managing the dependencies would be a bit of a nightmare. So we take on all that pain and provide APIs that tell different stories.
For example, you mentioned facial recognition. Facebox is a Docker container. You download it, you run it, and you then have HTTP access. The operations you can do are things like, “Here’s an image. Tell me all the faces in that image, and give me the coordinates of the faces.” Not only that - if it recognizes a face and knows who it belongs to, it tells you who that person is as well. And then there’s another API call to teach, and we support one-shot teaching, which is still pretty rare. It just means that with one image – so, Daniel, I could take an image of your face and teach Facebox with one example image, and then if we took a big photograph of a conference and you were in it, Facebox would be able to find you and identify you. So you get that facial recognition capability, and it’s only a couple of API endpoints you have to learn. It’s basically “Teach this face”, and “Here’s an image - who do you see in there?”
[12:02] And then, yeah, it’s all JSON, because we wanted it to feel really familiar and just fit into what people already had, and HTTP and JSON APIs still dominate. They’re the simplest to use. They’re nice because you can just use them in the browser, and when you run one of our boxes, we actually host a little private website inside the box, which you access through localhost:8080. That website contains all the API documentation, but it also lets you interact with the box without even writing any code… Because it’s very important to our mission to, first of all, communicate what’s possible in a very simple way, and then make that easy to play with, so that people can see the power of it. And then once they’re sold on that, it’s just a question of making the integration and operations easy. So we’re really focusing on that whole flow, end-to-end. In particular, we care about people without any kind of machine learning experience being able to use these powerful technologies.
It sounds like you’ve taken the machine learning part, abstracted it, and put it in a little black box for your end users. Who specifically are you targeting as your customer for this?
We already have paying customers. I say “already” because Daniel started playing with Machine Box way before we really launched anything, and one of the nice things about the way we approach our developer community is we give them the technology for free early, and let them just play with it. In that process what happens is, first of all, any bugs are immediately found and squashed. Luckily, that doesn’t happen very often. We do a lot of testing and test-driven development and other techniques which help us when it comes to code quality.
But beyond that, we get to validate the way we’ve told a story and also if the APIs really make sense for the particular way in which their system expects to use a technology like this.
We see customers of all kinds. It’s a developer tool, so this is for developers to integrate into their platform; by and large, all of our audience are developers. But the people that have really found it useful so far are people who understand machine learning in broad terms (some of them), but know that it’s a lot of effort to build these things yourself… And then there’s caring about the data not leaving your own network, whether that’s on-prem or your own cloud - because we’re just Docker containers, you can spin them up anywhere and scale them anywhere. You keep control of all that data.
[15:44] My favorite users - it’s a personal opinion; it doesn’t necessarily mean it’s right - my favorite users are people doing DevOps, basically… They love it because they usually don’t have the time, or aren’t willing, to learn any kind of data science. They want to solve specific problems, and they find Machine Box and our APIs really good and really productive for that. So we get a lot of love from DevOps. The best comments we hear are from people doing DevOps, like “I have this problem. I want to solve it quickly. I want to deploy it quickly”, and it’s just the perfect tool for that kind of person, pretty much.
Yeah, that’s great. Personally, I can attest to the quality of the models. I actually got into a little bit of trouble at a conference, because I was showing Facebox and kind of a one-shot updating of the model, and people didn’t believe me that it actually worked that well.
That’s happened to us as well. In demos, we’ve had people just think we’ve spoofed it. [laughs] I know, it’s surprising, because we’re told again and again, “For machine learning to be any good, you need massive amounts of training data”, so that’s why. And really, the solution – it’s a big secret of what we do, but it’s just a clever use of technology inside the box which allows us to provide that. But the thing is, we don’t want people to have to worry about how it works; we just want them to know that it works, integrate it, and get to MVP really quickly. That’s really another one of our goals.
A few weeks ago, I was in San Jose at NVIDIA’s annual GPU technology conference, and through my employer, I was in a small group meeting with the NVIDIA CEO, Jensen Huang. He noted something that I see you guys going towards - that we’re really at a junction where software developers, rather than just data scientists, are becoming the target audience for machine learning. It will continue to be both, but he noted that it was a big strategic initiative for them - to target the software development community, which is somewhat new to these technologies - and it seems that you guys have really centered your strategy around that approach.
Yes, that’s right. Really, what happened - and if I’m being completely honest - is we just built something that we needed to use. We wanted to use some of these technologies, and it’s hard, and we had constraints… Some of them around scale. The prices of some of the machine learning APIs at scale really become prohibitive. It’s still quite expensive - and it’s quite valuable, I guess, so that’s why - but we weren’t really too strategic about it in the beginning. We just thought, “Let’s build it how we think it should be built and how we would want to use it.” From there, we started to see traction and some great feedback on our developer experience.
Yeah, definitely. I want to follow up a little bit on that idea we mentioned around the conference talks - you use Machine Box to do something, and it’s doing something complicated under the hood and giving you great results, but even though you might know generally what’s happening in the box, it still is a black box. There’s a lot of back and forth in the industry right now, at least in the circles that I frequent, around whether treating machine learning and AI models as a black box is a good thing or a bad thing.
[20:02] I can download pre-trained models and that sort of thing that I don’t really understand, from the TensorFlow repo and other places, and often I don’t get the results that are either the published results or the quality that’s promised from these pre-trained models. Now, the models that you’re putting out are definitely really good quality, but I still don’t really know all of what’s going on inside. So in this case, we’re treating machine learning and AI models like a black box. Why do you think that, at least in certain cases, treating models like this - like a black box - can be a really good thing? Or what are some downsides, or cases in which you wouldn’t want to treat them like that?
Yeah, all the Machine Box models are kind of a black box. In that sense, we don’t have any explainability for any of the models. But also, most of the models are based on neural networks, so nobody has that answer yet in the research. There has been some research about it, but nobody knows what’s happening inside.
So you just mean in terms of the complexity of the models…
Yeah, but also for use cases. For example, if you are going to deny or accept a credit or an insurance application, it’s quite important to understand what the model is predicting - being able to say, “If my income is less than this amount, the model is going to say you’re not getting the insurance, or you’re not getting the credit.” But with facial recognition, for example, you care less about why the model is predicting that this is a match for one face rather than for these other identities. You’re more worried about the value you can extract from that match than about the value of explaining what the model is doing.
It’s quite a balance, and it really depends on the use case. In most of our use cases the explainability doesn’t really matter. In some of the boxes it might - we have, for example, a classification box that allows you to build any kind of classifier from text or images, so it may matter most for that kind of model. But in a general sense, we’re more focused on getting value from the models than on explaining what the models do.
Yeah, that’s a great point. And to your point, I think if you’re not able to put your model into production and get value out of it via a useful interface, then really what we’re talking about is just AI research that isn’t applicable in a business setting. You have to be able to get things into production, and I think that’s where this black box treatment is, in my opinion, a really good thing - it provides a unified interface for developers and DevOps and infrastructure people to interact with a model.
Yeah. But anyway, I believe the research is going to come through, and someday we’ll be able to explain how a neural network does its reasoning and why a prediction is what it is. We’ll probably try to keep up with the research, and if that comes through, there’s a possibility of adding it to the boxes.
[23:54] Yeah, but those sorts of things - a lot of the arguments against black-boxing - come from people who are deep in machine learning. They know about it, they want to invest time and resources into building expertise, and things like that. Lots of people aren’t in a position where they can do that. We give them a capability as a solution. There are models inside - sometimes multiple ones inside each box - but there are also other things going on in there, so it really is a solution. The only reason Machine Box isn’t just completely an open source project is that it’s just so complicated. It’s not a trivial little package that would be sensible to open source so everyone can get use out of it. Contributing to the Machine Box code base would, I think, be more difficult than for other projects. That’s one of the reservations I have about open sourcing.
But yeah, it’s really an audience question, I think. If people care deeply and know a lot about machine learning, then maybe they’re going to want to pick up TensorFlow and tackle it themselves. If you’re an app developer and you want to quickly make your software smarter, slotting Machine Box in is just the quickest way to do that.
Yeah, and I think it’s not inconsistent with other trends we’re seeing, like TensorFlow estimators and that sort of thing, which are intended to give people modules that will let them practically integrate things.
Yeah, exactly. It’s kind of overlapping. They are catching up with Machine Box.
That was a great transition, when you were talking about the tooling… Under the hood, I assume you’re talking about TensorFlow there… What other tools are you using? Where are you using Go, if at all? I’d love to know how you guys are putting the pieces together.
The basic stack is in Go. Probably more than 80 percent of the code is Go, because more than 80 percent of it is just APIs, network calls, and those kinds of things. For the machine learning models, the training is done in Python, and our favorite frameworks are Keras and TensorFlow; that’s mostly what we use for deep learning. We use others for more traditional machine learning - things like [unintelligible 00:26:43], which is a really old C library that I quite like… But basically that’s it. There’s not that much machine learning code. We serve all the models in Go, and train all the models in Python, even the scripts.
And just out of curiosity and maybe for the audience, why Go for 80 percent of the stack? What is it about Go? Because so many people in the AI space are doing Python, they’re doing C++… You don’t hear Go as often, so I’d love to know why that for your selection.
Go has a deliberately limited language feature set. I once was speaking to a group and I said, “You can’t do that many things with Go”, and it got a laugh because I realized how it sounds… But what I meant was the actual language itself doesn’t have that many features, which forces the code to therefore be simpler.
In some of the more modern languages, with OO you have big type inheritance hierarchies; you’ve got all of these language features that allow you to build really quite clever and complicated things.
[27:57] The Go philosophy is around simplicity, which mirrors exactly what we’re doing at Machine Box, so it fits brilliantly. Essentially, Go code all kind of looks the same, so it’s all familiar, and you get such benefits at development time, but actually more as you maintain the projects. So that’s why Go wins, I think, from our point of view. Plus we’re fanboys of Go; there’s no denying that. We met at a Go conference.
Pretty much, yeah. But also, some people are really surprised when they ask. They may have heard about Machine Box somewhere, or seen it at a conference… They contact us and say, “Oh yeah, I like your product. Just out of curiosity, how many people are you?” “Well, it’s just Mat and me developing. We have some business side with Aaron, but it’s pretty much a three-person company right now.” People get quite surprised, like “Oh, you did so much. You have so many boxes, so many products, with just two people developing and one doing business development.”
Yeah, and the answer isn’t that we’re awesome - although David is… The answer is that we’re very selective about what we do. We deliberately don’t do as much as we could. There are loads of possible things we could push into Facebox, for example, and some of them would tell you where the eyebrows are. I haven’t yet seen a good use case for needing to know where the eyebrows are in an image, but maybe there is one. Until then, we’re not going to invest all our time and effort in it, or add that kind of complexity to the API.
So yeah, it’s because we pick. We’re very selective about what we do. We pick the things we think are the gold in all this potential complexity, and we focus on telling that story and solving that problem. That’s how we’re able to do so much, it seems.
Go is the perfect tool for our philosophy. It fits really well into that mantra, into that mindset. So it’s the perfect tool for us.
I think both of you guys are awesome, just to set the record straight.
Thank you, I was fishing for that. That’s why I said it. I’m glad you picked up on that. [laughs]
I figured you were. Not only that, but you’ve given me my next blog post idea, which is around eyebrow-based analysis.
Very important stuff.
You can detect sarcasm with it. That’s the only use case, I think.
Or maybe anger.
Mat, with you, if you had that sarcasm detector, wouldn’t it be pegged most of the time?
Yeah, it would – well, you can basically just return true, as a shortcut.
Yeah, that could be 99.9% accuracy.
There was one time where I said something serious and I wasn’t being sarcastic, but I forget what it was now. [laughter]
So you’ve talked a lot about your technology stack, why you’ve chosen Go… One thing I’m curious about – I think everybody should use Machine Box in one way or another, but there’s a lot of people out there maybe that are working on data science teams or data engineering teams or whatever it is, and are maybe using TensorFlow to develop and train models that are getting deployed internally into their own sorts of services and products.
I’m curious… Because you consistently produce such high-value models that are integrated into your products, do you have any advice around that progression from training your model to getting it deployed within some type of service? You mentioned testing - testing might look different for machine learning or AI models than in other cases - but do you have any advice and insights around that process, from training your model to actually integrating it into a service, whether that’s integrating Machine Box into your service, or integrating your own model into your own internal service?
[32:17] I don’t really know. Most of the problems are just technology, and with technology you usually get things solved one way or another. There are a lot of tools coming up these days that solve that problem, including Machine Box, and deployment in TensorFlow is getting better too. But I think the most important thing is people - how this machine learning thing is transforming the way that people see software, especially when talking with customers. In machine learning we have a lot of false positives and false negatives… Once you have something in production, people come up with questions.
The question that most customers ask is something like, “So, we have this problem.” Well, that’s not actually a problem - it’s just a false positive, and there are ways to deal with false positives and false negatives. Changing the mindset to accept that a thing is not a bug but a false positive from a machine learning model changes the way you interact with people. It’s like, “You’re not going to have machine learning that’s 100 percent accurate, so you have to deal with these situations.” That’s where we mostly struggle - just trying to have the right conversations with people - and I think that’s going to come up across software development in the next couple of years.
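One common way to deal with the false positives being discussed here - a generic pattern, not something Machine Box is described as doing internally - is to put a confidence threshold in front of the model’s output and route low-confidence results to human review. A minimal sketch, with a hypothetical `Match` type:

```go
package main

import "fmt"

// Match is a hypothetical prediction with a confidence score in [0, 1].
type Match struct {
	Name       string
	Confidence float64
}

// filterConfident keeps only matches at or above a threshold, routing
// the rest to a manual-review pile. Raising the threshold trades
// false positives for false negatives; no setting eliminates both,
// which is exactly the mindset shift described above.
func filterConfident(matches []Match, threshold float64) (accepted, review []Match) {
	for _, m := range matches {
		if m.Confidence >= threshold {
			accepted = append(accepted, m)
		} else {
			review = append(review, m)
		}
	}
	return accepted, review
}

func main() {
	matches := []Match{
		{"alice", 0.97},
		{"bob", 0.62},
		{"carol", 0.91},
	}
	accepted, review := filterConfident(matches, 0.9)
	fmt.Println(len(accepted), len(review)) // 2 1
}
```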
Yes, definitely. One of our big challenges is communicating what’s actually going on. We thought we were just going to deliver face recognition APIs and that’s it - or image recognition/image classification, or personalization APIs… And we found, quite quickly, that we actually did have to get into the conversation a bit more: “Look, we don’t expect this to get everything right 100 percent of the time. We expect it to do a much better job automatically than you’re doing.” Hopefully, you can get it to the point where the exceptions you have to deal with in the workflow, if there are any, get smaller and smaller.
But yeah, that’s definitely been something we’ve had to focus on - communicating that this is unlike other software, where if you do something and get a result you don’t like, that’s a bug. We’ve had some bugs opened that say, “I put this image in and it didn’t find the face.” And of course, the face is turned to the side, or it’s got a weird shadow on it, or something is just weird about it. Then we get into that conversation: “Well, it isn’t really a bug. It’s part of the expected workflow.” The question is how we then tackle that going forward.
From a data scientist’s point of view, someone did actually ask if they could put their own models into our boxes, because they knew the building-the-models bit - they were good at that - but they had no idea about getting things into production and running them at scale. One of the very early rules we gave ourselves - and this is a little bit common sense now, I think, but it comes from David’s experience building at massive scale, for the Olympics in particular - was that we had to be able to horizontally scale the boxes just by adding more of them. Because it’s fine if you get this awesome technology and it works, nice and slow, on one machine, but to really get the value from it, in most cases you want to run this thing at scale, so that it can really get through the work it needs to get through.
[36:04] And so we’ve spent a lot of time - which you don’t really see, apart from the fact that it just works - making sure these boxes could horizontally scale in a Kubernetes-style environment, where it’s just elastic, up and down, as you need it. And of course, you have to think about what state is inside the box, how that works, and questions like: will just load balancing across the boxes be enough to get what you want, or is there more we need to do, and where does that happen? So yeah, it’s been a great experience building it, and it’s more fun when people start integrating it and paying for it. That’s when you really feel like you’ve created something valuable.
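The scaling rule described here - identical, stateless boxes behind a plain load balancer, scaled by simply adding more of them - can be sketched as a round-robin picker. This is an illustrative sketch, not Machine Box code, and the box URLs are made up:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// pool round-robins requests across identical box replicas. Because
// each box serves the same model and holds no per-request state,
// adding a URL to the slice is all "scaling out" requires.
type pool struct {
	boxes []string // base URLs of running box containers
	next  uint64
}

// pick returns the next box URL in round-robin order; the atomic
// counter makes it safe for concurrent use.
func (p *pool) pick() string {
	n := atomic.AddUint64(&p.next, 1)
	return p.boxes[(n-1)%uint64(len(p.boxes))]
}

func main() {
	p := &pool{boxes: []string{
		"http://box-0:8080",
		"http://box-1:8080",
		"http://box-2:8080",
	}}
	for i := 0; i < 4; i++ {
		fmt.Println(p.pick()) // cycles box-0, box-1, box-2, box-0
	}
}
```

In a Kubernetes environment this picker is effectively what a Service gives you for free; the sketch just makes the "scale by adding boxes" property concrete.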
Yeah, that’s great. Some of the things you said around exceptions in models definitely resonate with me. In my personal opinion, people too often imagine an end-to-end machine learning or AI model that does everything correctly all the time, and I think in a lot of cases that’s the wrong way to think about it, because when machine learning models fail, we have an opportunity to refactor them, which is in the end a good thing.
So, getting close to the end here, I was wondering - again, what you guys are doing is setting some standards for interacting with machine learning models, so I’d love to get more advice from you. For data engineers, or just developers who don’t really consider themselves data scientists or AI researchers - what skills would you recommend they look into, or what kinds of skills do they need to start integrating machine learning into their applications?
I don’t think you need that many. It depends how deep you want to go into it. The trajectory I would recommend to somebody who didn’t have any kind of background would be to start by consuming APIs. If that’s good enough - if that works for your case - then you don’t have to do anything more, and that’s what we’ve found so far. A lot of our customers have said, “We’re just going to try this because then we can build an MVP quickly, and later we might change it.” And then that “later” never happens, because the box is doing such a good job that they don’t need to change it. So definitely any kind of skill around consuming APIs… Most people have those already.
And then I think beyond that, it’s really just a question of understanding a little bit more about the high-level concepts, I would say. With the classification box you can create your own classifier from a training set. Now, with classification box, you do need a good number of examples for each class. When some people start using it, they have just a couple of images - a couple of examples - and you can’t really get a useful model from that.
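The training-set pitfall described here - a class with only a couple of examples - is easy to guard against before teaching any classifier. A hypothetical pre-flight check; the minimum-per-class threshold is illustrative, not a documented classification box requirement:

```go
package main

import "fmt"

// Example is one labelled training example for a classifier.
type Example struct {
	Class string
	Text  string
}

// underrepresented returns the classes that have fewer than min
// examples - the situation where a class with only a couple of
// examples can't yield a useful model.
func underrepresented(examples []Example, min int) []string {
	counts := map[string]int{}
	for _, e := range examples {
		counts[e.Class]++
	}
	var bad []string
	for class, n := range counts {
		if n < min {
			bad = append(bad, class)
		}
	}
	return bad
}

func main() {
	data := []Example{
		{"spam", "win a prize"}, {"spam", "free money"}, {"spam", "click here"},
		{"ham", "meeting at 3"},
	}
	// "ham" has only one example, well short of the (illustrative)
	// minimum of three, so it gets flagged before training.
	fmt.Println(underrepresented(data, 3)) // [ham]
}
```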
So learning those things - there are softer skills around machine learning, I guess: the kinds of data and the kinds of problems that machine learning is good at, first of all, and then what kind of training data you’re going to need… Because machine learning is only as good as its training data. I think those sorts of things would be useful for everyone to have. And then if you’re getting into more technical machine learning stuff - then I don’t know.
[39:54] In my opinion, you should focus on one type of problem. Machine learning is quite broad; if you want to get started, there are many different subfields. Just focus on a problem that you have or want to solve - sentiment analysis, or classifying text; something more or less straightforward, or at least more or less easy in machine learning terms - and learn by doing, instead of focusing on the maths and things like that. You can easily get lost that way. So try to learn by doing: solve a problem that you have and see how it goes. Once you have that working, you get that boost of energy - “Oh, I have something that’s more or less working. Maybe it’s not state of the art, it’s not very accurate, but it’s better than random.” The machine is actually learning, and that’s a good feeling. Probably just that is enough to get you started and spark more curiosity to learn more things.
That sounds great. So let me ask one last question for you as we wind up… So many of the listeners that we have are trying to figure out how to get into machine learning themselves, and they might be software developers, they might be business people who are intrigued by what’s possible here… So as two entrepreneurs who have gone down this road and you have created a business based on making AI technologies available, and recognizing there are so many people that may want to either supplement their own business that they have or create a new business… What advice do you have for other entrepreneurs that might be interested in taking the same adventure that you guys are now a couple of years down? What would you say to them?
I would always say solve a specific problem. Make sure you’re solving a real problem. This goes for any kind of software, actually, but especially machine learning, because it’s all cool and sexy - and hard. Machine learning is hard. Like David said, if you make some ground, you get really big rewards for doing that, even just the emotional rewards. So yeah, it’s difficult to make sure you’re building something that has true value, because if you’re just building cool tech, there’s no guarantee it’s ever going to be anything.
Often you end up building something that’s technically brilliant but doesn’t quite fit the problem, and then you basically have to move or change what you’re doing so that it does solve a real problem - and that can be quite a painful transition. It usually involves adding loads of complexity, because you weren’t really thinking about those things from the beginning. So of course you want to be able to evolve, learn, and move a project along, but I would say start with a real problem that you understand. The problem shouldn’t have anything to do with machine learning, but machine learning might be part of the solution.
Great, that’s wonderful advice. We’ll include links, of course, to Machine Box and the other things we’ve talked about - TensorFlow, Keras, Docker, and Kubernetes… If you’re not familiar with those technologies, we’ll include some good links for getting started with them and learning more.
I just want to thank David and Mat one more time for joining us. It’s been great to have you here, and really excited about what’s going on with Machine Box.
Thank you very much, and good luck with the podcast. I think it’s awesome. I can’t wait for future episodes. I’m sorry to everyone who had to listen to our voices for this episode, but future ones, I’m sure, will be even more interesting.
Thank you very much.
Thank you. I appreciate it very much.
Our transcripts are open source on GitHub. Improvements are welcome. 💚