Practical AI – Episode #145

NLP to help pregnant mothers in Kenya

with Jay and Sathy from Jacaranda Health


In Kenya, 33% of maternal deaths are caused by delays in seeking care, and 55% of maternal deaths are caused by delays in action or inadequate care by providers. Jacaranda Health is employing NLP and dialogue system techniques to help mothers experience childbirth safely and with respect and to help newborns get a safe start in life. Jay and Sathy from Jacaranda join us in this episode to discuss how they are using AI to prioritize incoming SMS messages from mothers and help them get the care they need.


Sponsors

RudderStack – The smart customer data pipeline made for developers. Connect your whole customer data stack. Warehouse-first, open source Segment alternative.

SignalWire – Build what’s next in communications with video, voice, and messaging APIs powered by elastic cloud infrastructure. Try it today at signalwire.com and use code AI for $25 in developer credit.

Changelog++ – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with no ads, extended episodes, outtakes, bonus content, a deep discount in our merch store (soon), and more to come. Let’s do this!

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.

Transcript

Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another Practical AI. This is Daniel Whitenack, I am a data scientist with SIL International, and I’m very excited today to be joined from Kenya by Jay Patel, who is a technology and analytics manager at Jacaranda Health, and Sathy Rajasekharan, who is executive director in Africa of Jacaranda Health. Welcome.

Thanks for having us, Daniel.

Yeah, it’s wonderful to talk to you both. We have already had a lot of great conversation, even before we started recording, so I’m really excited about this… Maybe Sathy, could you give us a little bit of an introduction to Jacaranda, and some of the things you’re doing, and how it came about?

Yeah, absolutely. Jacaranda is a non-profit organization that works primarily in Kenya, in Africa, and the challenge we’re trying to address is that mothers and babies die during childbirth in this part of the world probably six or seven times more frequently than in more developed countries in North America and Europe. And we’ve recognized that it’s not really a question of not having enough hospitals or providers or services, although there are certainly challenges there… But the quality of care - the kinds of care that are being provided - really needs to improve, and this has been shown in the literature and by many other groups, as well as ours.

[04:09] So Jacaranda works with governments, with the government hospitals in the country, to try and improve the quality of the care that’s being delivered in the hospitals, and we do that by using low-cost, scalable solutions that can be deployed within government hospitals to increase the number of moms who are seeking care at the right time and the right place, and to improve the care that they’re receiving from providers when they actually get to a hospital. And that’s where our digital health tools come in. They are one of those low-cost solutions that we’re delivering through government hospitals.

Yeah, that’s awesome. I really appreciate your work in this area. So that’s a wonderful story and setup and context, but Sathy, you mentioned digital tools - maybe, Jay, you could let us know where AI and NLP and these sorts of things fit in? So why are we talking about this on the Practical AI podcast?

Sure. So what happens when a mother enrolls in our service is that she will first go to one of these (so far) 700 public health facilities that we’re partnering with; she will enroll in the service and then we will start sending her messages about her health, about the health of her baby, about her pregnancy… And messages include everything from nutrition, all the way up to danger signs. She can then ask us any questions that she has, at no charge. This is primarily run all off of SMS, and again, it’s free to the mother.

Yeah, maybe you could – because I think a lot of times, at least in many people’s contexts, they might be thinking about chat as like a little window pop-up on their customer service side, or something… But it sounds like you’re focused primarily on SMS, is that right?

That’s correct. We checked with our users, and more than half of them are still using feature phones. So even though mobile phone penetration in Kenya is in the high ‘90s, a lot of users don’t have smartphones, and are still using the old Nokia feature phones.

Sure. So I guess you’re having these conversations… When did you start thinking that machine learning or NLP techniques could benefit you? How did that come about originally?

I was gonna tell the story from the perspective of how Jay came to be part of the team…

That’d be awesome, yeah.

We launched this service now four years ago, with 200 moms. We were super-excited that we’d be at 200 moms using the platform… It was an accident that we’d actually opened up two-way communication for free. We didn’t originally intend for that; we just thought we’ll send moms a bunch of messages and that’ll improve their knowledge about pregnancy… And then I distinctly remember one of our program team members coming in and saying “Hey, these moms are asking questions, I’ve been answering them, but the questions are increasing.” So it turned out we had learned of this latent demand for just a whole bunch of questions needing to be answered…

Back to what Jay was talking about - many of these moms have feature phones, many don’t use data on a regular basis, because it’s still relatively expensive here… So googling something isn’t really an option. The complexity of language when you google something - we all know it’s really hard. So the minute they realize someone was sending them messages about their pregnancy, they started sending in questions.

The question volume started increasing as we started to enroll more and more moms. We were getting to hundreds of moms per month, then a thousand moms per month. We’re almost at 100,000 per month now… And we pretty early on realized that we needed a way to triage the questions coming in. So some of the moms were asking about “What can I eat during pregnancy? Is it okay to eat avocados?”, for example, which is a surprisingly common question that we get asked.

[08:15] But out of every 30 questions we get asked, one or two of them will be really serious. A mom may say “I’m bleeding. What should I do?” And we recognized that if we did a first in/first out approach to answering questions, we’d miss that mom or be late to answering that mom who needed to know about bleeding… So our journey to think about machine learning actually came when thinking about “How do we effectively categorize these messages?” This is when we had a tiny team. I had been messing about with Dialogflow to see if we could put some of the conversations in… And quickly, in addition to realizing that we were onboarding more and more users, we recognized that we actually needed someone with a lot more development expertise to come onto the team… Which is how Jay comes into the picture, and kind of changes the way we do business with these incoming questions.

Happy to now hand over to Jay to talk about how he actually solved this challenge.

Sure, yeah. And just before we kick it over there, I guess – so you mentioned Dialogflow, Sathy, which is an offering from Google to help build chat conversations, and that sort of thing… When you started looking to solve this problem, was it clear that machine learning and AI even could provide a solution, or was that still something that was relatively unclear? You just knew that technology needed to be brought to the table.

Yeah, we knew tech needed to be brought to the table. In the ecosystem here in Nairobi there were a number of people working on chatbots, and so the original thought was, okay, maybe there’s also a chatbot opportunity here, where we don’t even need people at the other end answering questions, and we can create conversational histories, intent classifications etc. And I think it just took a few weeks to realize that with the complexity of language and information coming through we actually needed a different solution. So this automated chat idea was quickly discarded. Also, from a user perspective, moms didn’t seem to like getting these cookie-cutter responses back from whoever was on the other end of the texting service.

Yeah, that’s definitely something I’ve run into as well, in working in chat and dialogue… So yeah, that’s interesting. So Jay, how did then – when you started approaching this problem and getting involved, what was the process like in terms of figuring out what was the right tech solution?

We were initially connected to a gentleman by the name of Matt Capers; he works at Square, in Silicon Valley. He volunteered to help us figure this kind of stuff out. And we started out by testing various solutions. The dataset that Sathy had put together from the previous two years, we used that – this was a list of questions that moms had asked us, and so we took a few thousand of them and had our team label the questions according to what the mom was asking. So just a one-word label. And it could have been nutrition, it could have been something else.

We fed this labeled list of intents into the most popular off-the-shelf commercial NLP models available - four of them, in fact, at the time. Dialogflow was one, and then the usual suspects from Google, Amazon and Microsoft.

After some testing, it became apparent that Google’s NLP for this particular use case was most useful.

[12:08] So then we took a larger dataset - I think it was something like 13,000 questions - and the approach we figured out was to run each of these questions through a translator, and then the output of the translator, along with the intent that we had classified for each question, would be used to train the model.
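To make that step a bit more concrete, here is a minimal sketch of the translate-then-train data preparation Jay describes, assuming a simple CSV of labeled questions; the column names and the translate_to_english helper are illustrative placeholders, not Jacaranda’s actual code or translation service.

```python
import csv

def translate_to_english(text: str) -> str:
    # Stand-in for a call to a commercial translation API; the real pipeline
    # translates mixed Swahili/English/Sheng questions into English before
    # they are used to train the intent model.
    return text

def build_training_rows(labeled_csv: str, output_csv: str) -> None:
    """Read (question, intent) pairs, translate each question, and write
    (translated_question, intent) rows ready for an intent classifier."""
    with open(labeled_csv, newline="", encoding="utf-8") as src, \
         open(output_csv, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)   # assumes columns: question, intent
        writer = csv.writer(dst)
        writer.writerow(["translated_question", "intent"])
        for row in reader:
            writer.writerow([translate_to_english(row["question"]), row["intent"]])
```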

After that, it then took a little bit of extra figuring out, “How granular do we want the intents to be?” We could have four or five very broad intents, or we could have many dozens - 50-60 - very fine-grained intents. And it turned out that the best mix was kind of like the middle, the Goldilocks middle, so we ended up with an intent list that was about 33 in length.

That sounds like a pretty labor-intensive process, getting through that data labeling… What was that process like in terms of actually getting into this task of data labeling, and what challenges along the way did you face there?

Yeah, it was quite labor-intensive, it was very manual… So we’d just throw up all 13,000 questions on a spreadsheet, and then assign a few members of the team to go in and read each question and actually assign a label from a list that we had predefined… And then they’d have the option to add labels as they went along. That took several weeks and it took a lot of our team members’ time, when they could have been doing other things.

Yeah, everyone was super-nice to have volunteered to do it, for sure.

Yeah, you need grace from people when you’re asking for that sort of task… Hopefully now they see some of the value coming through from that.

Yeah, for sure. One other challenge - this is still something that we work on, is this sort of specificity of labeling something. There’s a lot of subjectivity involved. If a mom texts in and says “My baby hasn’t stopped crying”, someone may label it as “crying baby” and someone else may label it as “general baby concern”. So we learned the hard way that we needed to do a lot of training around what is it that we mean for each of these labels.

I think that was the tougher part, certainly for this initial dataset… But Jay will talk about this - we did a much bigger round of training a year later, and had to put in a lot more rigor to the process after that.

Break: [14:50]

So Sathy, you’ve just mentioned how there was kind of this initial round of labeling, and defining the different classes that you’re working with, and then as you went along in that process, you realized you needed more data, and to kind of switch up the labeling… Jay, what did that process look like in terms of training maybe a bigger model with more data? How did you go about that scaling and what was needed to facilitate that?

So when we had run the first round of labeling we hadn’t even collected that many questions. Across the following year, as the service grew and the questions continued to come in, we just basically stored them somewhere. By the end we had over 100,000 questions, though we decided to limit the labeling exercise to 100,000 of them.

Along with the questions, they helped us [unintelligible 00:16:56.02] much more experience on what kinds of questions they get. Instead of me or Sathy pulling together a list of intents, we asked them to help us figure out, now that we’re evolving to a second version, what’s the best list of intents, and do we expand this.

For example, if you have an intent for general baby concerns, how can we break that down and how do we make it more specific? What are the questions that come in that are being caught as “general baby” that we can identify better or help triage better? And this time we had to outsource, and we used a company that actually helped us go through and label each of these 100,000 questions. But as Sathy mentioned, we had to train the team, and the training had to be quite rigorous… But even after the training, we had to go through several rounds of cleaning, catching cases where what we would identify as one particular intent had been labeled by the team as another, because they’re not really exposed to the work we do on a daily basis.

Yeah, that brings up a really interesting question which is striking me - there’s probably a lot of… I imagine there’s a lot of health expertise within your team and those that you’re working with… But it sounds like you’re not primarily a machine learning startup or something like that. So what was it like culture-wise? You know, when you’re explaining what you’re trying to do to your team, to your board of directors, to those you’re serving - how did that sort of culture change happen and what strategies did you employ to help you bring along people with the solutions that you were trying to build? Maybe that’s a question for Sathy.

Yeah, I think one thing that Jay and the team do really well is try and frame for the team “How is this gonna make your life easier?” If we’re able to label questions, you get to prioritize them, then you can tag the high-priority ones first… But actually, what was even more exciting was what if we could automate responses to a class of questions that we’re 95% certain of in terms of intent classification. That reduces the volume for the help desk.
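As a rough illustration of that idea (not Jacaranda’s actual configuration), a confidence threshold on the classifier output could gate which replies go out automatically; the 95% figure comes from Sathy’s comment, while the intent names and reply text below are placeholders.

```python
AUTO_REPLY_THRESHOLD = 0.95  # only answer automatically when the model is very confident

# Illustrative canned replies for a couple of intents (placeholder text).
AUTO_REPLIES = {
    "nutrition": "<answer about safe foods during pregnancy>",
    "clinic_visit": "<answer about routine antenatal clinic visits>",
}

def route_question(intent: str, confidence: float):
    """Return ('auto_reply', text) when a question can be answered without
    the help desk, otherwise ('help_desk', None) so a human agent handles it."""
    if confidence >= AUTO_REPLY_THRESHOLD and intent in AUTO_REPLIES:
        return "auto_reply", AUTO_REPLIES[intent]
    return "help_desk", None
```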

So by sort of sharing “What does this mean for you at the front lines”, essentially, I think Jay and the rest of the team really brought everyone into this kind of “This is a shared mission mindset”, versus “Oh, it’s the tech team doing something AI-esque again.”

[19:49] So what’s really cool is everyone talks intents and classification now on the help desk team. They all know what they’re looking at. It’s really fun to watch that journey. It has been fun to watch it over the last couple of years to see people kind of ignore what me and Jay were doing in the background, to now being front and center and understanding how it works.

That’s really cool to hear. I think that many of us in different organizations listening to this are probably wishing and hoping we can bring along our teams in that same way, and build that excitement. I’m assuming part of that is, you know, you talk to them about those pain points, but then also they actually saw value out of what you were producing… How quickly did that bit happen? How soon was it in terms of the time when you first started showing them maybe prioritization of questions, or classification of questions, and when they were actually seeing value out of that? How was that roll-out period, how did that happen?

I have an opinion – I’d love to hear Jay’s opinion… I actually think we were more excited about the classification and the sort of precision and recall that we were getting after the first training round… But then after a while we realized the help desk team was like “Yeah, it doesn’t really work that well”, which is what really pushed us to try and improve the classification… And I say “us”, but Jay and the team really worked on capturing statements like when someone says “Yes”, or “Thank you” - how do you filter that out. The sort of real-life, annoying things that happen when you run something at a relative scale. We’re doing 2,500 questions per day now.

I think where we really started to see it making a difference is once those practical things were ironed out, then I think the teams started to see “Huh. This actually works.” And that’s what enables us to get their buy-in to do that bigger round of training later. Jay, what do you think? Does that track, or is that just my assumption from high above?

No, that makes sense. There were a lot of hiccups getting it to work, and there was a lot of patience from the team, especially the help desk - maybe they didn’t have full context on why it’s working, but they had enough trust in us to [unintelligible 00:22:04.23] about experimenting… And as you mentioned, once we figured out how to filter the stuff that the help desk doesn’t need to answer, that’s when they really bought into “Okay, it’s having a practical impact on my day.”

It makes sense. I’m always interested, if you listen to the podcast, in the very practicalities of how this all works, which you’ve brought up, Jay… And one curiosity I’m having is around the integration of all of this. I think one of the areas that machine learning and AI practitioners often get blocked in is, you know, they can train a model, but sort of integrating it in a workflow or in existing systems is often very difficult and error-prone… It sounds like you’re dealing with SMS here, and I know that there’s APIs like Twilio and other things that you can use to interact via SMS… But then you’ve got your model sitting somewhere - I don’t know where that’s stored and sitting - and then somehow you’ve gotta integrate those things together along with actual humans in the loop. So what does that integration piece look like for Jacaranda?

So on one end we have a messaging platform called RapidPro. That’s plugged into the telcos and handles the traffic for the SMS. On the other end is the ticketing platform, or the ticketing software, where the help desk is responding to these messages, and the triage or NLP model sits in the middle. What happens is that just by reading and writing to the APIs for these three platforms, we grab the messages as they come in, run them through the translator, then run them through the NLP model, and all the information that comes out of both of these gets posted to the ticketing platform. Any responses then go straight out the other way, bypassing the middle bit, and hit RapidPro just for delivery to the mother. Basically, all this was integrated by way of the APIs for the three platforms.
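For listeners picturing that glue, a minimal sketch might look like the webhook below; the endpoint, payload field names, ticketing URL, and helper functions are assumptions for the example, not RapidPro’s or the ticketing platform’s real APIs.

```python
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

TICKETING_URL = "https://ticketing.example.org/api/tickets"  # placeholder URL

def translate_to_english(text: str) -> str:
    return text  # stand-in for the translation API call

def classify_intent(text: str) -> dict:
    return {"intent": "general", "confidence": 0.0}  # stand-in for the NLP model

@app.route("/incoming-sms", methods=["POST"])
def incoming_sms():
    payload = request.get_json(force=True)
    phone = payload.get("from")          # assumed field name
    original_text = payload.get("text")  # assumed field name

    translated = translate_to_english(original_text)
    prediction = classify_intent(translated)

    # Post the enriched message to the ticketing platform so agents see the
    # original text, the translation, and the predicted intent side by side.
    requests.post(TICKETING_URL, json={
        "phone": phone,
        "original_text": original_text,
        "translated_text": translated,
        "intent": prediction["intent"],
        "confidence": prediction["confidence"],
    }, timeout=10)

    return jsonify({"status": "queued"})
```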

[24:10] Daniel, I was just thinking, as we’re reflecting, as we’re talking about this - we used to be really cagey about talking about “Oh, do we use Google’s platform, or IBM”, whatever… And the real experience has been that what Jay affectionately calls the glue that holds all this together is really where all the hard work and iteration and innovation came in. The development of a ticketing software that works with this workflow and can incorporate the AI in a useful way - that’s actually been the sort of innovation here, over and above the use of machine learning or NLP in what we do.

It’s just kind of fun to see our evolution from cageyness to now just telling everyone what we do… But we recognize that there’s so much stacked in here that is unique to what Jacaranda does that we don’t have to be super-cagey about it anymore.

Yeah, that’s a very interesting observation, Sathy, and I think it’s true that – you know, it’s one thing that you can go to GitHub or somewhere and you can just see all the implementations of all of these models that people are releasing, state of the art models… But the process of getting that to work for you, in your context, and integrated with your systems and your employees and that glue, like you mentioned - to me that’s almost always where projects get blocked or held up. So sharing sort of what you’re doing in the sense of the AI or NLP side actually is in some ways – it’s significant, it’s definitely a driving factor, but it’s only a small factor of a much larger system that needs to be put into place. So yeah, I definitely appreciate that comment and context for our listeners as well.

One of the things that was mentioned very briefly - I forget who mentioned it - was translation… I wanna talk about that a little bit more, but maybe before we talk about that, could one of you maybe just describe a little bit for those that aren’t familiar with Kenya or African languages, what does the sort of linguistic diversity look like where you’re at?

So in Kenya we have two major languages in the country. That’s English and Swahili. But of course, you have local languages in the various regions, the various counties that we have… And then you have dialects of those languages as well. Because of the strong public education system that the country has had, we’re actually quite lucky that we can send text-based information and receive text-based information primarily in English and Swahili, because everyone is comfortable texting in that.

The one challenge - and this is a pretty big challenge - is when you start to see a mixture of the languages and more informal Swahili, which is known as Sheng. So you get a real mishmash of languages, more hip words coming in, and that tends to break things a little bit, although I believe now we’ve gotten around it. Jay, do you wanna talk about some of those language-related issues?

Yeah, so most of the messages we get (60%-70%) are in Swahili, but not pure Swahili; it’s always a mixture of Swahili and English. And even the English messages have Swahili words in them. But the Sheng, or the slang, as Sathy mentioned - that in itself is a mixture of English and Swahili and other languages, and it evolves quite quickly; so it’s not something that is maybe as stable as an official language.

[27:56] When we’re running all of this through the translation, what comes out at the other end is quite often very garbled and often doesn’t resemble what the original question was. However, the NLP model seems to be able to parse it for context and apply the correct intent most of the time. And we have gotten the accuracy to about 87 odd percent for general questions, and for danger sign questions it’s in the mid to low 90s. We continue to try and improve that.

Break: [28:28]

So Jay, you were mentioning some of the results and the current state of what you’re doing… You mentioned there are these different categories of questions, and you’re tracking your metrics in those different areas, like danger sign questions and others… Could you just describe a little bit the make-up of your dataset in terms of how many of the questions coming in are danger sign questions that you need to triage with very high priority, and what sorts of questions those are? What does that percentage look like relative to the rest of the questions and general information questions?

So about 30% of the questions that come in could potentially be a danger sign. And danger signs include questions like bleeding, or “I have swelling in my feet/legs”, “I have a headache.” And of those 30%, I think overall, out of the questions, it might be 3% or 5% which are actual danger signs. And what we’re trying to do is throw a wider net, so that even if we capture a lot of questions which are not strictly a danger sign, we want to make sure that we do capture those which are. And the help desk can then filter for urgent and high-priority, figure out what needs to be answered now, and then what can wait an hour or two, and then questions about nutrition and whether it’s okay to eat avocados during pregnancy - that kind of thing can wait maybe half a day or a day.

Yeah. And maybe just to add - what happens is the agents who are texting back will escalate messages that they feel a nurse needs to review, and then the nurse may pick up the phone and call the moms. So that’s where that 3% to 5% confirmed danger sign - the mom needs to be referred to the hospital [unintelligible 00:31:22.06] But it’s definitely an area that we’re really actively looking at in terms of analytics, data science, and even a little bit of predictive modeling now, to be able to, okay, in the haystack of danger signs, find the needle of “Absolutely needs to be referred right now, without having to make the phone call first.”
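A toy version of that wide-net triage could look like the sketch below; the intent names, the confidence cut-off, and the response-time tiers are illustrative assumptions based on what Jay and Sathy describe, not their production rules.

```python
from enum import Enum

class Priority(Enum):
    URGENT = "answer now, nurse may call"
    HIGH = "answer within an hour or two"
    ROUTINE = "can wait half a day or a day"

# Intents that might indicate a danger sign (illustrative list).
POTENTIAL_DANGER_SIGNS = {"bleeding", "swelling", "headache"}

def triage(intent: str, confidence: float) -> Priority:
    """Cast a wide net: anything that might be a danger sign is escalated,
    even when the classifier is not fully confident."""
    if intent in POTENTIAL_DANGER_SIGNS:
        return Priority.URGENT if confidence >= 0.9 else Priority.HIGH
    return Priority.ROUTINE
```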

[31:45] Yeah. So you mentioned potentially in the future kind of providing some automated responses… How do you view that workflow coming in, and for what types of questions? Over the progression of this system, how do you see automation being used best, versus the way that it’s interacting with humans in the loop?

We are responding automatically to a subset of the questions we get already. And how it started is that when the AI detects that this is a danger sign question, we would send the mother a menu saying “Hey, it sounds like you’re asking about danger signs. Please select the specific danger sign you’re having from the menu below”, and that would list 3-5 danger signs.

What we noted is that we got a response rate of around 3% to that menu. And it should have been obvious in retrospect that moms and probably everyone else hates interacting with menus. I hate interacting with menus. And so we iterated to version 2, where if the AI detects it’s a headache question, we’ll send only the headache response. If it’s a swelling question, we’ll send only the swelling response. And then we follow up with the mom and ask (also automatically) “Did this information answer your question?” So we went from a response rate of 3 odd percent to about 45%.

Now, if the mom says “Yes, that’s great”, we close the ticket, so the help desk doesn’t have to worry about it. But if the mom says “No, it didn’t” or she doesn’t respond, then that particular question gets red-flagged, for the help desk to look at as a priority.
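Sketched very roughly, with placeholder message text and a made-up function name, the flow Jay describes might look like this:

```python
# Illustrative intent-specific replies (placeholders, not the real messages).
DANGER_SIGN_RESPONSES = {
    "headache": "<headache guidance, plus when to go to the facility>",
    "swelling": "<swelling guidance, plus when to go to the facility>",
}

def next_action(intent: str, mom_said_it_helped) -> str:
    """Decide what happens to a ticket after the automatic reply: 'close' if
    the mom confirms the answer helped, 'red_flag' if she says no or never
    responds, 'help_desk' if there is no canned answer for the intent."""
    if intent not in DANGER_SIGN_RESPONSES:
        return "help_desk"
    # In the live system, DANGER_SIGN_RESPONSES[intent] and the follow-up
    # "Did this information answer your question?" are sent by SMS here.
    if mom_said_it_helped is True:
        return "close"
    return "red_flag"  # covers both "No" and no response at all
```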

So you’ve got agents sort of in the loop, you’re always responding, which I think is really wonderful, taking that perspective on it… How does that feed back maybe from your agents or the new questions that come in - how does that feed back into your data set in terms of model updates, and when you update your data set, and how you update your data set? How are you handling that loop?

Great question. So the second round of labeling, the 100,000 - it was expensive, it was painful, and we didn’t want to have to go through that again. So in the help desk ticketing platform we’ve built an option for the agents to correct where they note that the AI had flagged the intent incorrectly. So every time they see that, they’ll just from the dropdown menu select the correct intent. And once we collect enough data, we can just feed that back into the model and update it without having to manually label hundreds and thousands of questions every now and then.
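Here is a bare-bones sketch of that correction loop, assuming corrections are collected from the ticketing UI and exported in the same (text, intent) format used for training; the class and field names are invented for the example.

```python
import csv
from dataclasses import dataclass

@dataclass
class Correction:
    question_text: str
    predicted_intent: str
    corrected_intent: str

corrections: list[Correction] = []

def record_correction(question_text: str, predicted: str, corrected: str) -> None:
    """Called when an agent overrides the model's intent in the ticketing dropdown."""
    corrections.append(Correction(question_text, predicted, corrected))

def export_retraining_data(path: str) -> None:
    """Write corrected examples in the same (text, intent) format as the
    original training set, so the model can be updated without another
    full manual labeling round."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["translated_question", "intent"])
        for c in corrections:
            writer.writerow([c.question_text, c.corrected_intent])
```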

Yeah, that’s wonderful. I know a lot of my questions are on the data side of things, but as both of you have emphasized, that’s a real key part of any of these types of solutions… And I know in the health space in particular, data is difficult in certain ways to deal with, because we’re dealing with people’s personal health information, information about maybe personally identifying information, very sensitive data… It sounds like one of the strategies you’re taking is definitely people opting into this service and making sure you have some information from them. How has it been on that side of things, in terms of keeping your data secure, making sure that things are kept confidential, while at the same time being able to combine this data in a meaningful way to create useful models?

The first step - and Sathy can add some context, but the first step is to collect as little as possible. We don’t know our users’ names, we don’t ask for things like age or other demographic information… Obviously, we need a phone number, and we’d like to know which health facility they signed up in, and how many months pregnant they are, so that we can tailor all the message campaigns according to the stage of pregnancy, or whether they have delivered.

[36:03] Then on the backend just making sure that everything is stored according to industry best practices on one of the major cloud providers, and using their security tools. I think that rather than me trying to manage a server here, just using the resources that are already available helped us be better able to secure the data. Sathy?

Yeah, I would just add that it’s such an evolving conversation, because I feel like knowledge and literacy around machine learning, what it requires from an infrastructure perspective, what that data is being used for - I mean, it’s hard enough for the general public who has a Nokia feature phone and is in the middle of a village… How do you consent appropriately and provide terms of service appropriately so that she’s fully aware of how this information is being used? We’re using it to improve the service that they get.

The other area where I think there’s a lot of work to be done is in helping our government partners understand how that data is used, what are these systems, what are these processes… A question we get asked frequently is where is the data stored - is it in Kenya or somewhere else? And I think that’s indicative of actually missing the more challenging question, which is “How is this data being used by a machine learning platform? How are you using it to respond to women? What’s your threshold for risk?” etc.

I don’t think we’re there yet in terms of conversations here, but there’s a lot of groups doing some great work around building capacity to improve those conversations. And I think that’s true not just for Kenya, that’s true around the world, with the conversations we’re having with data and privacy.

Yeah. To some degree this is definitely new for everyone… Also, like you’re talking about, helping people understand the ways in which their data might be used. I know a lot of companies have developed different principles and other things around that, so it’s really interesting to hear from your perspective how you’re approaching that.

In terms of looking maybe forward a bit, I’d be curious to hear maybe first on the technical side, but then also just on the user side, what do you feel like are the challenges and opportunities moving forward that you haven’t addressed yet, that you’d like to dive into? Maybe on the technical side first, Jay, what are the main challenges you’re facing in terms of scaling this, or improving your models, or extending this system?

One of the technical challenges is getting the accuracy up higher than what it is, much higher than what it is. The off-the-shelf, the commercially-available models that we can plug into without actually being machine learning engineers ourselves - they do it well enough; they don’t do it quite as well as we’d like, so one of the next steps is to partner with some machine learning experts and try and figure out how do you go from processing or just pattern matching on these words to maybe building in some sort of understanding and context into what the questions are about, and then respond appropriately.

In terms of scaling, I think we’ve gotten our costs down… It’s running pretty cost-effectively in those terms. And in terms of the number of questions that we can process, we seem to have gotten a handle on that; we just need to increase the accuracy. Maybe Sathy can also mention what’s next in terms of some of the predictive analytics stuff that we’re looking at.

[40:04] The goal is always to get that mom care as quickly as possible… So now we’re looking at what other data, what other information will help us increase our risk rating for a mother, will help us understand which facility she should be referred to… Something we didn’t even talk about is half of the organization works with providers on improving the skills of providers, and so we have a lot of data and insights from that kind of layer of the health system… So can we use these layers of data to really route those moms more effectively to care in a timely manner? That’s work that’s going on now, and just scratching the surface of it is to look at conversational history and look to see if there are triggers, trigger intents that could be predictive of a future danger sign. And it actually looks like we may have enough data to develop a model just from the initial work that’s been done.

And then - Daniel, you were asking a question about the users, and I think the ultimate goal is to support moms with what they need in terms of information, whether they wanna know “Where should I get my baby vaccinated? What kind of diet should the baby be on? How do you transition from foods?” And these are questions the moms are asking us. They want more and more information, because in the context that we live and work in, that information, that kind of support is hard to come by. So the lower cost, the more digital, the more close to the mom we can get, on her phone, or maybe phone plus, the better. So we’re looking at things like voice, we’re looking at the home record in terms of their own medical information that they wish to choose [unintelligible 00:41:50.29] free to access for them, so that it can help them provide a case history to a provider easier…

So there’s a lot of future thinking around this, but the principle is always “What does that mom need to get care quicker?” So lots of work going on in the background right now.

That’s wonderful. Well, I really appreciate what you all at Jacaranda are doing. I think it’s wonderful, and also a great illustration of how AI and NLP can be utilized by an organization in a very practical way, one that really benefits the users. So I appreciate you sharing this story and joining me on the podcast. It’s been wonderful to talk to you both, so thank you very much.

Thanks a lot. It’s been great chatting.

Thanks. Yeah, it’s been fun.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
