Practical AI – Episode #73
AI-driven automation in manufacturing
with Costas Boulis, Chief Scientist at Bright Machines
One of the things people most associate with AI is automation, but how is AI actually shaping automation in manufacturing? Costas Boulis from Bright Machines joins us to talk about how they are using AI in various manufacturing processes and in their “microfactories.” He also discusses the unique challenges of developing AI models based on manufacturing data.
Featuring
Sponsors
Linode – Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2019. Start your server - head to linode.com/changelog.
The Brave Browser – Browse the web up to 8x faster than Chrome and Safari, block ads and trackers by default, and reward your favorite creators with the built-in Basic Attention Token. Download Brave for free and give tipping a try right here on changelog.com.
Notes & Links
- Bright Machines
- Bright Machines’ Microfactories
- Digital twins
- Other relevant Practical AI episodes:
Transcript
Play the audio to listen along while you enjoy the transcript. 🎧
Welcome to another episode of Practical AI. I am Daniel Whitenack, I am a data scientist with SIL International, and I’m joined, as always, by my co-host, Chris Benson, who is a principal AI strategist at Lockheed Martin. How are you doing, Chris?
I’m doing fine. How’s it going today, Daniel?
It’s going good. Well, today I think it’s gonna be a great show… We’ve got a chance to talk about something that we’ve mentioned here and there, we definitely talked about it a little bit, but haven’t had a whole show devoted to… And that’s some ideas around AI and automation in manufacturing. Specifically, we have Costas Boulis with us, who is the chief scientist at Bright Machines. Welcome, Costas.
Thank you for having me.
Maybe before we jump into Bright Machines and manufacturing and all of that stuff, if you could just give us a little bit of your background, and maybe we could learn a little bit about how you got into machine learning and AI, and ended up at Bright Machines.
Sure, yeah. I started machine learning when I was doing my Ph.D. here, and I really fell in love with working with data, and what this data means for everything, for so many applications of machine learning. One of the things that I have done and I have enjoyed doing in my career is working in different aspects of machine learning and artificial intelligence. I’m not the type of person that would stay with the same problem for 20 years, working on some specific aspects, for example natural language processing, and never seeing anything else.
I think that there are many commonalities across broad areas of machine learning models, and there’s tremendous value when someone tries to have the perspective that was gained from one area, to apply it to another area. That’s also what brought me to Bright Machines.
I’ve done academic work, I also worked at Microsoft and at Amazon, on different projects - detecting malware and phishing, computer vision, natural language processing as well… And the very interesting thing here with Bright Machines is that they’re trying to apply AI to a big, huge area - manufacturing - that has not really been touched yet by the revolution that is happening in so many other areas.
So you would say that manufacturing in particular maybe has been lagging a bit behind in terms of adoption of AI technologies?
[04:03] Yes, that is definitely the case. And I guess there are a number of reasons why this was happening. All these years, at least starting 20-30 years back, labor was always cheap, especially in places like China, and always available. So if you wanted to manufacture a product, any product, it was definitely an option to say “I’ll hire a number of workers for a few months. It will be pretty inexpensive to compensate them, and then I’ll just move to another product. I’ll be smart about it; I’ll just throw more people to the problem.” And this is less of an option right now, even for places like China. Labor is becoming more and more expensive, and demand for products is becoming bigger and bigger, especially for electronics products.
Clearly, if we want to keep doing what we’re doing, and if we want to enhance what we’re doing, we cannot rely on the old ways of doing it. We have to have smarter robotics that do the work for us.
I know just from some of the things that I was exposed to right when I came into the industry, it seems like manufacturing has had some software influence in terms of process control and control systems… But in terms of the human element and automation - are you talking about the sort of end-to-end automation we’re striving after has been lacking, or maybe just the sophistication of the methods that are used, and that sort of thing?
I would say both, but it’s basically the end-to-end. For example, if you look at electronics products, there’s a line of different machines that do different things. You have your printed circuit board, and there you start placing the smaller components - the capacitors and the resistors, and integrated circuits - and that part is actually very well automated. There are these surface-mounted components, and machines to place them, so that initial part is, I would say, automated pretty well. But this is not where things stop…
After you have your smaller components, you have some bigger components you have to put in. You have to have heat sinks, or all those Ethernet ports, and all these bigger things. Also, you’re gonna have your electronics board, and then you’re gonna have to put more boards on top of that; for example if it’s a motherboard, you have the RAM chips and other PCBs that you put on top of that… And you have to put everything in some kind of casing. Your TV remote control has a PCB that is encased in a plastic case… Anything right now - home alarm systems, anything you can imagine.
So the later parts are not really automated. If you want to, for example, get those RAM chips and insert them into a motherboard, that part is not very well automated, and people are usually doing this job. This is the part that is complex, because there are many different objects that you can encounter, and it’s not clear up front how you’re gonna pick this object, how you’re gonna grip it, how you’re gonna place it… Humans are very good at that - we don’t really have to explain much to a person how to do these tasks - and that’s why this has not been fully automated, and it still relies on people.
If you walk into a manufacturing line, the first thing you’re gonna notice is there are many people there. Although there’s automation, there’s still many, many people involved in the process. So what we’re trying to do at Bright Machines - we’re trying to automate automation; automate the end-to-end process, increase the sophistication of every aspect of what is being done.
[08:03] I’m looking at the Bright Machines website as we’re talking about this, and you kind of got to my question almost before I did here… I’d like to understand what you’re trying to accomplish. It’s very clear that Bright Machines believes that robotic systems are finally ready for primetime deployment… And obviously, the market is bearing that out in a giant way. You guys are number 13 on the Forbes AI 50 list of America’s most promising artificial intelligence companies… What’s just happened that’s enabled you guys to suddenly hit the sweet spot in the market that you are fulfilling at this point? What’s changed?
Well, it’s not an abrupt change. It’s a realization in the manufacturing world that the current state of manufacturing automation is not enough. If you’re trying, for example, to automate the latter parts of the manufacturing process using whatever tools and vision libraries or other means you have available, you may be able to arrive at a solution, but it will take you a very long time - months - to build it. And that solution will not be robust; if things change, it may break.
Imagine that a manufacturer wants to start a new product, and the first product is gonna roll out after eight months, and the solution they have may be breaking once or a few times per day, or per week… That’s definitely not acceptable. The manufacturer wants faster deployment times - they want to crank out projects as soon as they can - and they also want a solution that works for them. So everything we do at Bright Machines is targeted at these two things; these are our main tenets: reducing deployment time for manufacturing automation, and building more robust solutions, so everybody has confidence that they work, even if conditions change.
So as part of that making things robust and reducing the deployment time, I’m reading about some of your efforts in these sort of micro-factories… Is the idea there to have these modular tasks that can be spun up very quickly, or is that really getting more at the end-to-end automation piece? Or maybe it’s both.
What we’re trying to do - let me start with some of the difficulties that the current manufacturing process is having. One of the sources of difficulties is that the manufacturer - let’s say they’re trying to build a new product, so the manufacturer has to repurpose hardware. They decide to build a new home alarm system, and they were building something different before, so they have to get some hardware from the other lines, add a number of components, maybe add some lights there, or modify the conveyor belt, and modify the tray feeder, or a bunch of other things… And then they have to build a vision solution from scratch basically, and they have to test the whole thing.
There are two sources of errors, of things that can go wrong. The first is hardware - a typical manufacturing line is not standardized right now. The second is that modern computer vision and AI/machine learning solutions are not being used extensively to better understand what the robot is looking at and what to do.
I was just gonna ask there, since you’ve mentioned it - with computer vision, how does that really integrate into how you guys are approaching this problem? When you’re having micro-factories and you’re using the robotics, how does AI fit into that picture?
[12:09] AI and computer vision is one of the main efforts we have here in order to shorten deployment times and build more robust solutions. Right now, a lot of the vision that is happening on manufacturing lines seems to be stuck in the past. There is usually a camera in many manufacturing lines - a camera that takes pictures, or shoots video - and from these images people try to build vision solutions for how to locate the specific point they care about, or how to complete a task. But the things they’re using are very low-level. They’re using things like edge detection, or blob detection, or they’re doing some kind of histogram equalization, or some kind of image pre-processing, contrast enhancement… Very low-level stuff.
Imagine that you have to, let’s say, insert a DIMM into a DIMM slot. You have a DIMM slot on a motherboard, so you have to find where the DIMM slot is. What people usually do is say “I’ll define a region of interest” - basically, a specific area of the image they’re looking at. In that region of interest there’s gonna be some kind of marker, some kind of distinct pattern - a very rigid pattern, some lines printed there… And from that marker they’re gonna define another region of interest, and in that region of interest they try to find another marker, some very rigid structure they can count on, some kind of anchor point… And from there, they’re gonna move [unintelligible 00:13:51.06] and maybe they’re gonna have the center of the DIMM slot that they really care about.
That is how a blind person would navigate the world. They would touch something, find an edge, and from that edge they’d say “Oh, I know I have to go ten steps that way and I’m gonna find another edge, and maybe some door to walk in…” So there’s not a lot of understanding happening. The robot is looking at something, but it’s not understanding what it’s looking at. Everything is edges and lines and blobs. And you don’t want edges and lines for their own sake; you want edges and lines so you can synthesize them into something higher-level… So what you really wanna do is scene understanding. You want to know the objects. You want to have a model that says “Hey, I know what I’m looking at. I know how to find DIMM slots, and I know how to find heat sinks. I will always find them, detect them and understand them, and I’m not gonna rely on those low-level primitives to do these kinds of things.”
It’s like self-driving cars - they’re trying to understand what they’re looking at, to say “This is a traffic light, and this is a person, and this is a crosswalk…” Cars will never navigate themselves by edges and lines, because people would die. In our world, we want to move to this scene understanding, these higher-level object models that will let us build those solutions much more quickly and robustly… Because imagine, in the previous solution I mentioned, with the markers and the regions of interest, and moving in X and Y - you take all this time to craft a solution, and then you find a specific point… And then let’s say tomorrow another customer says “Hey, I hear you guys did a DIMM slot project. Can you build one for me?” “Sure. Three months from now we’ll get it to you, because we have to start from scratch.” We have to say “Alright, let’s take another picture of that new customer’s motherboard, and let’s find again where those specific [unintelligible 00:15:54.01]” Everything from scratch. It’s like Groundhog Day. You know you did this before, but you have to go through it again and again. There’s not a lot of reuse.
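To make the contrast concrete, here’s a minimal sketch of both styles in Python. The first block is the brittle marker-chaining approach Costas describes, using OpenCV template matching; every file name, region-of-interest boundary, and pixel offset is a hand-tuned placeholder - which is exactly the problem with this style.

```python
import cv2

# Hypothetical inputs: a grayscale board image and a cropped image of the
# rigid fiducial marker printed on the board.
board = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
marker = cv2.imread("fiducial_marker.png", cv2.IMREAD_GRAYSCALE)

# Step 1: search a hand-chosen region of interest for the marker pattern.
roi = board[100:400, 200:600]  # rows 100-400, cols 200-600, tuned by hand
result = cv2.matchTemplate(roi, marker, cv2.TM_CCOEFF_NORMED)
_, score, _, (mx, my) = cv2.minMaxLoc(result)
if score < 0.8:
    raise RuntimeError("Marker not found - lighting or alignment changed")

# Step 2: dead-reckon from the marker to the point we actually care about.
# These offsets were measured once, for one board revision; a new customer
# or a new revision means re-measuring everything from scratch.
marker_x, marker_y = 200 + mx, 100 + my
dimm_slot_center = (marker_x + 312, marker_y - 45)
```

The scene-understanding alternative swaps all of that for a learned detector. A hedged sketch, assuming a torchvision Faster R-CNN that has been fine-tuned on board components - the checkpoint name and the label mapping are invented for illustration, not anything from the episode:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Assumption: a detector fine-tuned on board components; label 1 = "dimm_slot".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=3)
model.load_state_dict(torch.load("board_detector.pt"))  # hypothetical checkpoint
model.eval()

image = to_tensor(Image.open("board.png").convert("RGB"))
with torch.no_grad():
    detections = model([image])[0]

for box, label, score in zip(detections["boxes"], detections["labels"],
                             detections["scores"]):
    if label.item() == 1 and score > 0.9:  # a confident DIMM-slot detection
        x1, y1, x2, y2 = box.tolist()
        print("DIMM slot center:", ((x1 + x2) / 2, (y1 + y2) / 2))
```

The second version carries no board-specific offsets, so a new customer’s motherboard means, at most, fine-tuning on new images rather than re-deriving a chain of markers by hand.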
[16:08] So fundamentally, what we’re trying to do is make the robots less blind, less [unintelligible 00:16:13.12] and also less numb. Because robots are numb; they don’t feel the world, they don’t get any feedback about what is happening… Vision can give this feedback, and we can also have other ways of getting feedback - we can have sensors, for example, that apply pressure, to get some kind of force feedback… So that’s the high-level picture.
So as you were talking about the ways in which you’re trying to reimagine these vision solutions for manufacturing, I was thinking of some of the more recent research, particularly the methods that OpenAI is developing around their robot hands… We were talking about that on a previous episode, where they were using randomization methods to make the solution a little bit more robust against perturbations… Are those the sorts of solutions that you’re talking about here, where you might encounter a slightly different motherboard, or a slightly different component, and you want to be able to generalize quickly to that other component that’s almost the same, but a little bit different?
Yes, yes. We definitely wanna have models that can handle all these variations. Even on the same line, things do change in manufacturing. The ambient light can change, the cameras may drift out of calibration over time - things move around, the camera may shift a bit, or maybe there are temperature differences… And the boards, the PCBs coming down the line, are not gonna be perfectly aligned to some reference point… So if things are not aligned, our solutions need to take this into account and keep working.
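One common way to build that tolerance in is to simulate the variation at training time, so the model sees lighting drift and misalignment long before the line produces them. A minimal sketch with torchvision transforms - the specific ranges are illustrative guesses, not Bright Machines’ actual pipeline:

```python
from torchvision import transforms

# Illustrative ranges only - in practice these would be tuned to the
# variation actually measured on the line.
train_augmentation = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.3),        # ambient light drift
    transforms.RandomAffine(degrees=3, translate=(0.02, 0.02)),  # board misalignment
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 1.5)),    # focus/calibration drift
    transforms.ToTensor(),
])
```

This is the same spirit as the domain randomization OpenAI used for its robot hand: widen the training distribution so the deployed model treats real-world drift as just another sample.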
[19:50] So all this variation can really be addressed very well through computer vision, in that software-first world… Because traditionally, what people in manufacturing have done in order to eliminate this variation and make automation work for them is put hardware first. Most of them were mechanical engineers, so that’s what they were trained to do.
If the camera is not seeing objects very well, for example, they would add a light source. The alignment problem - if something is not aligned, they’re gonna put it into some kind of 3D-printed cradle that aligns everything. These are mechanical solutions that can address some of these variations… But the thing with mechanical solutions is that they don’t scale. You have to do the same thing again and again, for different projects, and you haven’t really solved the problem; you’re mitigating it, but not totally solving it. We think we can solve it in a much more scalable way, a much better way, in a software-first world.
I’m kind of curious – that was a great explanation… How do micro-factories really fit into this, as you start applying this? And how do your AI efforts in terms of vision and maybe other problems that are related to this in terms of getting your robotics to where they need to be for your customers - how does that all fit in? How do you transition into micro-factories given all this?
Yeah, the thing micro-factories really give us is that they standardize a number of hardware components, so that we don’t have to grapple with things like “How do we control a camera?” or “What happens if there’s a different conveyor belt - how do we handle that?” So it definitely helps us build solutions in a much more scalable way, by having standardized hardware. That’s the main thing they [unintelligible 00:22:00.04]
And the fact that we have a full cell, we know we’ll have the full 3D model of the cell, and we know how things can change there - that also helps us model what to expect from a computer vision perspective.
And maybe just before we get too far into that, could you just describe what Bright Machines – what the micro-factories are themselves?
Yeah. The Bright Machines micro-factory is basically a full cell. It has an industrial arm - these are the arms that do pick-and-place operations; they can pick a wide variety of different components and then place them, for different tasks. There’s a conveyor belt that moves the different products - these are the things the components will be placed onto.
There are also different light sources in the cell, there are cameras, and there’s a place where the tray feeder goes… The tray feeder is where the components that we’re gonna be picking and placing are.
These micro-factories are intended to be the last step of the line of an electronics product. This is where, for example, a person would pick a heat sink with their own hands and put it into a board. This is where our micro-factories can help. They can also perform some of those tasks.
[23:34] So you mentioned standardization is one of the goals of the micro-factories, and you also mentioned trying to make AI models a little bit more robust… I was wondering if you could go into a little bit of the process – we like to be fairly practical on this podcast, given that it’s Practical AI, so I was wondering if you could share a little bit about the workflow you went through in terms of data gathering, and what you’ve done to create these new types of AI models.
Did you start out with your micro-factories and that sort of controlled environment and created some vision models there, and then tried to extend them to other places, or was it the other way - did you start with an existing customer video and imagery, and start there, and then figure out what you needed to standardize, and then standardize it down? How did that process work and where did this sort of data gathering, annotation and model-building fit in?
Yeah. For example, we want to build high-level computer vision models, and in order to build them, as you mentioned, we need data… And this is where the first challenge appears. While the deep learning revolution started a few years back, one of the main catalysts for that revolution was ImageNet, and a number of other datasets that were available, that people could just rely on to start developing their algorithms. ImageNet, for those folks who aren’t aware, is a dataset of about 14 million images, with close to 22,000 different categories/classes, and it’s geared towards classification, object detection, and other tasks as well.
ImageNet was possible because of Google image search and Bing image search and Flickr. People built crawlers that were able to find all those images and download them, and then annotate them. The big challenge in the manufacturing world is that there’s no Google image search for manufacturing data. We cannot easily build this, because many of the components are custom, and they’re also so different… There are literally hundreds of different heat sink types, for example. There’s so much variation, and it’s not the kind of thing people put in their Flickr account and share with the world. So that’s one of the bigger initial challenges - we cannot follow exactly the same path.
Now, one of the biggest assets we have at Bright Machines is the digital twin. The digital twin is basically a virtual version of the physical robot - a digital replica where we tell the twin to move to a particular position in this virtual world, or do a task, and it runs some code there, and we have confidence that if we take the same code and deploy it to the physical robot, it will do the same thing. So we make the digital twin as close as possible to a replica of the physical robot.
Now that we have a digital twin, we can do things there. We can use the digital twin to explore the world and build what-if scenarios, and we can simulate some of the variation that we cannot naturally obtain from other sources, like downloading from the web.
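Bright Machines’ digital twin is proprietary, so a concrete example has to stay generic: the sketch below invents a simulator API (`TwinScene` and all of its methods are hypothetical) just to show the pattern of randomizing what-if conditions and harvesting free ground-truth labels from a simulated cell.

```python
import random

from twin_sim import TwinScene  # invented module, for illustration only

dataset = []
scene = TwinScene("microfactory_cell.model")  # the full 3D model of the cell
for _ in range(10_000):
    # Randomize the what-if conditions we can't harvest from the web.
    scene.set_light_intensity(random.uniform(0.5, 1.5))
    scene.jitter_board_pose(max_translation_mm=2.0, max_rotation_deg=1.0)
    scene.set_camera_noise(sigma=random.uniform(0.0, 0.02))

    image = scene.render()
    # Ground truth comes for free: the simulator knows where everything is,
    # so no human annotator is needed.
    labels = scene.get_component_bounding_boxes(["dimm_slot", "heat_sink"])
    dataset.append((image, labels))
```

The economic point is in the last comment: in a simulated cell, perfect labels cost nothing, which is what compensates for the missing “Google image search for manufacturing data.”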
In the last few years people have been using GANs (generative adversarial networks) to simulate aspects of variability that are missing from their data. For example, let’s say you’re building a fraud detection system; it’s hard to get fraud data, because fraud is kind of rare, especially if you’re looking for sub-cases of fraud… Let’s say you’re looking for fraud from a particular country. That data is hard to acquire, and it takes a long time. You also have to have the right people to annotate it properly and say “Yes, this is really fraudulent…”
[27:55] So one thing that people are doing is that they are kind of simulating this. They say “Well, how can I simulate what this fraud from that country would look like?” and then use these to basically understand better the variability that they’re missing. We can do this even better with a digital twin, because with a digital twin we have the full knowledge of this digital world. We can simulate much better.
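For readers who want the shape of that idea in code, here is a bare-bones GAN training step - the generator and discriminator sizes are arbitrary, and it operates on abstract feature vectors rather than any real fraud or manufacturing data:

```python
import torch
import torch.nn as nn

LATENT, FEATURES = 32, 128  # arbitrary sizes, for illustration

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, FEATURES),
)
discriminator = nn.Sequential(
    nn.Linear(FEATURES, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # real-vs-fake logit
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One GAN step: fit the discriminator on real vs. fake samples,
    then push the generator to fool the updated discriminator."""
    n = real_batch.size(0)
    fake = generator(torch.randn(n, LATENT))

    d_loss = (bce(discriminator(real_batch), torch.ones(n, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(n, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    g_loss = bce(discriminator(fake), torch.ones(n, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Once trained on the rare class, `generator` can mint extra samples of the variability you’re missing - though, as just noted, the digital twin can do even better, because it knows the world’s actual geometry rather than just its statistics.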
At my company, Lockheed Martin, we’re also using digital twins for all sorts of stuff, and I’m fascinated that you guys are doing this in the space that you’re working in. I think it’s not something that we’ve really talked about on the podcast before, but digital twins give you an ability when you’re trying to build complex solutions to complex problems in the real world, to be able to figure things out ahead of time, and with the ability to generate data with GANs, as you described, to be able to fill in data that you may not even have, so that you can address a complex problem. So I love hearing that you’re addressing that.
I would like to ask you - in terms of robotics we’ve really only talked about computer vision-focused models so far, but I’m curious whether or not you guys are also using things like movement strategy models, and such as that… There’s so many different types of models that go into robotics, and I’d love it if you could take us through the variety of models that you guys use in your robotics solutions, for micro-factories and beyond.
Yeah, so we’re developing these high-level computer vision models… Besides those, we’re also experimenting with reinforcement learning approaches. Reinforcement learning is kind of made for the robotics world; it’s just a perfect fit. We have all these industrial controllers trying to complete complex tasks, and if we try to specify every instruction - “Go there, then do that, and if that happens, do something else; if the other thing happens, do something different…” - it would just not work.
For example, take this DIMM insertion task - the task is to get a DIMM card and put it in a DIMM slot. This is actually a complex task. DIMM slots have those latches; you have to unlatch them first if they’re latched, and you have to apply the right pressure… It takes quite a lot of pressure to seat them right, and if you’re not in the right place, things will break. So the first reaction to solving this problem would be to have engineers try to specify precise and full instructions for what to do: “Go there, do this. If that happens, do something else.” Well, you will spend a lot of time trying to specify everything that can go wrong, and you will still not have a full solution.
So what we’re dealing with is a complex task - and inserting a DIMM into a DIMM slot is something that is very well-suited for reinforcement learning. You can specify the basic things, you can have a reward function, and you can have some negative rewards, I guess… For example, you can have an end-of-arm gripper that applies some force and then gets some force feedback. That’s a critical component, the feedback part.
And then it can know when it has correctly placed a DIMM, so it will get [unintelligible 00:31:40.09] in that case. It will know when something bad happens - like applying pressure in the wrong spot and breaking things, or not completing the task in a specified time, or hitting the boundaries of the cell… And then it can go and explore the world and try to figure things out by itself.
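The ingredients Costas lists - a reward for success, penalties for breakage, timeouts, and workspace limits - map directly onto a gym-style environment. A sketch under stated assumptions: all reward values and thresholds are invented, and the physics stubs stand in for queries against the digital twin:

```python
import numpy as np

class DimmInsertionEnv:
    """Gym-style sketch of the DIMM insertion task. The reward values,
    force threshold, and step limit are placeholders; in practice the
    physics would be answered by the digital twin."""

    def reset(self):
        self.steps = 0
        return self._observe()

    def step(self, action):
        # action: a small end-effector displacement plus an applied force.
        self.steps += 1
        obs = self._observe()

        if self._dimm_seated():
            return obs, +10.0, True, {}            # task completed
        if obs["force_feedback"] > 50.0:           # too much pressure: breakage
            return obs, -10.0, True, {"broken": True}
        if self._out_of_bounds() or self.steps > 500:
            return obs, -5.0, True, {}             # hit cell boundary / timed out
        return obs, -0.01, False, {}               # small step cost: finish quickly

    def _observe(self):
        # Camera image plus the end-of-arm force sensor reading.
        return {"image": np.zeros((128, 128, 3)), "force_feedback": 0.0}

    def _dimm_seated(self):
        return False   # stub: would query the twin's physics engine

    def _out_of_bounds(self):
        return False   # stub: would check the arm's workspace limits
```

With the environment shaped this way, any off-the-shelf RL algorithm can explore it - and, as Costas explains next, exploring in the twin is what makes the required sample counts affordable.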
[32:02] The catalyst here, again, is the digital twin, because if you have a physical robot experimenting and exploring the world, it will take forever - moving things takes time, and trying different things takes time… In a digital twin, time is relative. Things can move really fast, and that’s why all the reinforcement learning approaches - like the OpenAI example you mentioned, where they’re manipulating a Rubik’s Cube, and even other approaches, like AlphaGo and the chess cases - had a virtual environment where they were able to go through countless games and learn in an expedited time frame. We humans are constrained to the physical world, so when we start learning how to do a task, it can take years… So we’re trying to expedite this learning in a virtual world.
I’m curious, as you’re developing these models – I mean, developing them and training them is one thing… I was wondering if you could talk a little bit about the challenges of deploying the models and doing inference. Specifically in manufacturing, do these models have to run at the edge, on some type of hardware that’s very close to the line, because of how fast they need to operate, and that sort of thing? What are the challenges around deployment and inference?
Yeah, these are actually great questions. Yes, one constraint we have is that these need to run locally, because the latency requirements are very strict. The goal of a manufacturing line is to produce as many products per hour as possible, so having a very short turnaround time when it comes to inferencing is very important. That means that in the majority of cases we’re gonna have to have things on the edge.
Now, we can be smart about it, meaning that we can share some resources across different cells, and we’re trying to maximize the use of that hardware, but the reality is that in many cases we do need local models, running on local hardware… And that’s something we are developing and growing right now.
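A common pattern for meeting that kind of latency budget on line-side hardware is to export the trained network to an optimized runtime and measure against the cycle time. A hedged sketch using ONNX Runtime - the stand-in model, file names, and input resolution are placeholders, and this is a generic recipe rather than Bright Machines’ actual stack:

```python
import time

import numpy as np
import onnxruntime as ort
import torch
import torchvision

# Offline, once: export a trained model to ONNX. A stock ResNet-18 stands in
# here for whatever network was actually trained.
model = torchvision.models.resnet18(num_classes=2)
model.eval()
dummy = torch.randn(1, 3, 512, 512)  # placeholder input resolution
torch.onnx.export(model, dummy, "detector.onnx", opset_version=17)

# On the edge box: load the optimized runtime and check the latency budget.
session = ort.InferenceSession("detector.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

frame = np.random.rand(1, 3, 512, 512).astype(np.float32)
start = time.perf_counter()
outputs = session.run(None, {input_name: frame})
print(f"Inference latency: {(time.perf_counter() - start) * 1000:.1f} ms")
```

If the measured latency eats too much of the cycle time, the usual next steps are quantization or a GPU/NPU execution provider on the edge device.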
And how much do these models need to be updated over time? Does the manufacturing process drift over time in some ways, that causes the model to be updated, or often is it kind of you deploy the model to the edge device, for a specific process, and it runs for quite a while, and then maybe when you want to switch to a different product, or a slightly different component, things need to be retrained, and that sort of thing? What’s the cycle around that?
[36:05] Yeah, how dynamic?
Yes, this is something where we need to do some retraining, because things do change. There are new defects coming in, there are conditions that are not properly identified and recognized… A key part of this is the retraining, and understanding when you need to retrain is also very important. Ideally, you want to retrain when you detect that things are different. If things haven’t really changed, there’s no need to retrain and take up valuable hardware resources… So things like detecting when conditions have changed, detecting drift, and retraining the different models we have are important parts of our process. This is kind of unique to the manufacturing world - the latency, and also the retraining part.
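“Detecting when things have changed” can start very simply: log the model’s output confidences while the line is known-good, then statistically compare recent confidences against that baseline. A sketch using a two-sample Kolmogorov-Smirnov test - the threshold is a placeholder to be tuned per line:

```python
from scipy.stats import ks_2samp

def needs_retraining(baseline_scores, recent_scores, p_threshold=0.01):
    """Flag drift by comparing detection-confidence distributions.

    baseline_scores: confidences logged when the model was known-good.
    recent_scores:   confidences from the last N inspected boards.
    A small p-value means the distributions differ - lighting, parts,
    or defects have likely drifted, so schedule retraining.
    """
    _, p_value = ks_2samp(baseline_scores, recent_scores)
    return p_value < p_threshold
```

Gating retraining this way buys exactly what Costas describes: edge hardware is scarce, so you only pay for retraining when the data says conditions actually moved.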
Another thing about those models - for example the computer vision models we were talking about - is that in the manufacturing world you really need precision; you need to be highly accurate. Look at the more standard object detection models that try to localize an object, where you put a bounding box around an object or entity you care about… Like, “Find the dog in the photo. Put a bounding box on that dog.” Those models are not really optimized or built for precision. They don’t care about precision, they care about “Did I find it or not?” If the bounding box is a little bit off, it still counts as correct. Well, in our case it really doesn’t count as correct.
You have to find exactly where things are. That means there need to be some changes to the models themselves. The first thing one would do is try to have a higher-resolution model. For example, when it comes to object detection, some of those models start with some kind of grid, and from that grid they make different decisions; they do some kind of further compensation on top of that.
We can make the grid more granular, but that would make the model much more expensive… So we have to find ways that are faster, but still have the precision that is required in this space. This is another critical difference between the manufacturing world and standard computer vision on natural images.
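One standard way to get finer localization without paying for a denser grid everywhere is coarse-to-fine refinement: let a cheap detector get close, then refine only inside a small window. OpenCV’s sub-pixel corner refinement is one real, inexpensive instance of that pattern - the image name and the coarse coordinate below are illustrative:

```python
import cv2
import numpy as np

gray = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)

# Coarse stage: a detector (or grid cell) says "the corner is near (410, 222)".
coarse = np.array([[410.0, 222.0]], dtype=np.float32).reshape(-1, 1, 2)

# Fine stage: iterate within an 11x11 window (winSize=(5, 5) means +/-5 pixels)
# until the estimate converges to sub-pixel accuracy.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
refined = cv2.cornerSubPix(gray, coarse, winSize=(5, 5), zeroZone=(-1, -1),
                           criteria=criteria)
print("Refined corner (sub-pixel):", refined.ravel())
```

The coarse model stays cheap enough for the edge, and the expensive precision work only runs on the tiny crop that matters.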
Gotcha. One of the things I also wanted to ask you about, on a non-technical note - with micro-factories and this amazing work that you guys are pushing forward, how are you seeing people fitting into this automation process? So many industries are now moving into this automation, and obviously we have conversations, as we should, with people concerned about what the future of employment looks like… How do you envision the automation that you are implementing fitting in with the human workers that are also there? And to add to that - what cannot be automated? Aside from the automation bits, where are humans best suited for the things that you cannot automate?
Yeah, that’s an excellent question, and a very big question. What will eventually happen is that there are gonna be job shifts. A lot of these manufacturing jobs – and by the way, many or most of those manufacturing jobs are menial, repetitive work. People don’t really last there, because of how repetitive and how boring the work is. They quit after a few months.
The main issue that manufacturers have is that they cannot find people, because people keep quitting. The turnover rate for a typical manufacturer is 300%. Every year they have to hire three times as many people as the entire workforce, because so many people don’t last the entire year.
So what’s gonna happen is that these jobs will shift. There are not gonna be many jobs where you have to place a specific screw in a specific position again and again and again. Those jobs will shift to higher-level tasks, like “How do you control the robots? How do you make sure the robot works correctly? How do you monitor the work of the robot, and how do you program the robot?” Hopefully, those jobs are gonna be less repetitive and more creative… But there’s no question there’s a transition that needs to be made, and that’s gonna happen over a longer period of a few years.
So when you’re talking to people - and I’m sure you do talk to a lot of people day-to-day, whether it’s at the coffee shop, or interacting with family, or whatever it is, and this subject comes up around automation… In terms of people who fear automation a little bit - is that the perspective you give them, that there might be a period of hardship in this transition, but in the end the jobs are more satisfying and creativity-filled than what they are now? Is that the perspective you try to impart? Or what are some good ways to enter into that subject with people?
Yeah, I mean, if we look what has happened in the past, similar transitions have been made. When computers first came in the ’70s and the ‘80s, people were saying “Oh my god, it’s gonna be the death of some jobs…”
Human calculators, or something like that.
Human calculators, yes. And the reality was different. The net job effect was positive; there were actually more jobs that were created, and we no longer have to be making calculations by hand, or having some of the bookkeeping jobs from before. A similar thing we believe is gonna happen here - there’s gonna be eventually a positive net job effect, and there’s gonna be a shift to jobs that are less repetitive and more creative… Just like we’re having right now - there’s a very big demand for software engineers, but there’s not a big demand for human calculators.
[42:51] Unless you’re in the world of Dune, where they had these Mentats… You know, just to throw a random thing into the conversation. Those were the human calculators. Sorry about that, I had to throw my bizarre tangent into it. So as we look forward at the future of robotics and artificial intelligence and how they intersect with manufacturing, what are you seeing as the most exciting things right now in the state of robotics, and within the computer vision and other strategy type models and research that are going on, that robotics requires?
Yeah, I think there’s gonna be a huge role for computer vision, for the modern-type deep learning computer vision, and understanding higher-level tasks… We’re just at the beginning as an industry here. That’s definitely one thing.
Reinforcement learning will play a bigger role… It’s already picking up as a research interest; reinforcement learning has traditionally not been top of mind for many researchers, but I think it’s picking up steam, and more of it will transition into actual applications.
Also, the fact that in a typical production line there are products being assembled - let’s say you pick up a heat sink, you put it on a motherboard, and then the next motherboard comes in and you do this again, and then the next one comes in and you do this again… There is definitely a big role for unsupervised learning to enhance those models. Unsupervised learning has had mixed results. Right now in computer vision we don’t really know if it works, or exactly how it works. There’s been some very recent research showing that in specific settings it’s beneficial and useful.
But the good thing, I guess, in our application is that things don’t change tremendously from one product to the next, from one motherboard to the next. There are definitely changes, but they’re not as big as the ones we find in natural environments… And that kind of works to our advantage in how we model things.
For example, if there’s a big change in a production line, that probably means something. If there’s a big change in a natural image, that may not mean anything interesting. So how we model all this information and encapsulate it into our models is gonna be key to making the best of those models.
Awesome. Well, that gets me super-excited about these sorts of things, and I really appreciate you digging into a lot of the details of what Bright Machines is doing, but also manufacturing and AI in general. I really appreciate you taking time to be on the podcast and share those things with us.
We’ll definitely link some of the Bright Machines work and also some of the topics that we’ve talked about in our show notes. Also, we have had a few episodes on reinforcement learning - the OpenAI work, and all of that - so we’ll make sure and link those in the show notes as well… Thank you so much, Costas, for talking with us. It’s been a real pleasure, and I’ve definitely learned a lot.
Thank you so much for having me.
Our transcripts are open source on GitHub. Improvements are welcome. 💚