Ship It! – Episode #101

Let's go back to AOL chat rooms

with Mandi Walls


In this episode Justin and Autumn are joined by Mandi Walls to take you back to a time before the cloud. Before Kubernetes. When a/s/l was common and servers were made of metal. Back to the days of AOL to discuss how chat rooms worked.

Featuring

Mandi Walls – guest
Justin Garrison – host
Autumn Nash – host

Sponsors

FireHydrant: The alerting and on-call tool designed for humans, not systems. Signals puts teams at the center, giving you ultimate control over rules, policies, and schedules. No need to configure your services or do wonky workarounds. Signals filters out the noise, alerting you only on what matters. Manage coverage requests and on-call notifications effortlessly within Slack. But here’s the game-changer… Signals natively integrates with FireHydrant’s full incident management suite, so as soon as you’re alerted you can seamlessly kick off and manage your entire incident inside a single platform. Learn more or switch today at firehydrant.com/signals

Ladder Life Insurance: 100% digital — no doctors, no needles, no paperwork. Don’t put it off until the very last minute to get term coverage life insurance through Ladder. Find out if you’re instantly approved. They’re rated A and A plus. Life insurance costs more as you age; now’s the time to cross it off your list.

Fly.io: The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Notes & Links


Chapters

1. 00:00 This is Ship It! (00:52)
2. 00:52 Sponsor: FireHydrant (02:21)
3. 03:21 The opener (15:56)
4. 19:17 Welcome Mandi Walls! (01:01)
5. 20:18 Getting started at AOL (01:11)
6. 21:29 Tech stack 20 years ago (02:40)
7. 24:09 Mandi's role in migration (01:36)
8. 25:45 AOL's scale (03:03)
9. 28:48 Let's be friends (00:20)
10. 29:09 On-prem war stories (02:09)
11. 31:18 Worst outage (04:22)
12. 35:40 Team sizes (01:00)
13. 36:40 Pagers and NOCs (00:47)
14. 37:28 No AOL for teams (02:01)
15. 39:29 Casual uses and flexibility (02:26)
16. 41:55 Benefits of simple tools (02:52)
17. 44:47 AOLserver? (02:51)
18. 47:39 Tail end of AOL (03:08)
19. 50:47 Collecting user data (04:07)
20. 54:54 How do you scale on-prem? (02:05)
21. 56:59 Learning from the past (00:54)
22. 57:53 Forming good relations with other teams (01:42)
23. 59:35 Thanks for joining us! (00:59)
24. 1:00:34 Sponsor: Ladder Life Insurance (01:47)
25. 1:02:23 The closer (01:35)
26. 1:03:57 JDCO (07:43)
27. 1:11:40 Outro (00:52)

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Hello, and welcome to another episode of Ship It. I am your host, Justin Garrison, and with me as always is Autumn Nash. How’s it going, Autumn?

So excited to talk to Mandi. She is amazing.

Yeah, Mandi is a wonderful guest on the show today, and this is what I’m calling our retro series; whenever we’re talking about something that’s 15 years or older, it’s going to be amazing. And today we get to talk to Mandi about AOL chat rooms.

First of all, that was my first exposure to how much I love computers and the internet, and finding your people. AOL was it for me; that was something I used so much of as a kid, that got me hooked on computers and the community that you can get through your nerdy habits… So I’m really – not just that, but her personality is amazing. It is hard to find someone, for one, that’s a woman that worked in infrastructure, and resiliency, and all that good stuff. And then the fact that she’s just hilarious. Like, her personality is fire.

So we will get to that. And actually, the more I think about it too, AOL chatrooms are kind of the original social networks. It was the original place…

It is…! That is the original place to find nerdy community.

There were plenty of other BBS’es and things that people were finding, but that was the –

The first mainstream-ish…

Right. The barrier to entry in the ’90s was a lot higher. And so once that endless September sort of thing came around with IRC, and then people saw “I can just jump online, do this dial-up thing…”

I think it made it accessible, and I think that people didn’t know those communities existed, and then all of a sudden – this is before Reddit, this is before Tumblr, this is before all the other things became a thing and got popular. But this is the first exposure to, for one, meeting your besties online and not knowing them before.

Right. You didn’t know who the person was at all. No pictures.

Yeah. And then meeting people based off of interest, right? So just purely off of interest, people that you’ve met from all over the world. This was the first exposure to that, really.

Yeah. And I think in a lot of ways the online social networks are reverting into these smaller, tighter-knit groups…

It is. We went from the globalness of Twitter, being able to find people from all over just based off of what they’re saying… Back kind of into the little corners of community, because - I don’t know, the erosion of Twitter…

Well, I mean, it’s Twitter, but also just realizing that your network - you can’t keep all those connections. As much as you may have thousands of people on some social network, it’s like “Actually, I just want to go have a DM with someone for a while.”

I don’t know, I think it’s definitely easier to post pictures of your kid on Facebook, so everybody can see them, and then you don’t have to go and DM them to each person, and…

As a distribution method, absolutely.

But not just that. But keeping up every now and then – I think it’s really easy as an adult to forget this person exists, or you get so wrapped up into getting your kids around, and your job… It’s really hard to always kind of – social media means you can comment, and be like “Oh my God, your kid got so big”, and “Oh, congratulations on that promotion”, and you get those little bits of keeping each other in each other’s lives, until you get more time… And I think that we are risking losing that. There are good parts, but at this point, Facebook is so toxic. Twitter is getting – have you seen the number of women who have been harassed on Twitter in tech in the last week? The sad, angry dudes of tech are in force. This whole “Blame everything on DEI” has gotten out of hand. It is ridiculous. And Elon is just lighting the fire.

I have been thinking about joining a BBS. I’ve found out there’s BBS’es that still exist.

What is a BBS exactly?

It’s a bulletin board system. It was like the OG Reddit sort of forums. It’s just a forum basically, but old school. And I read this book called “Broad Band”, which is two words; Broad as in woman, and then Band like a group of women. And it was about the women who created –

I love play on words like that.

It was an amazing title and a great book about women who created early technology, and the internet, and these foundations. But I found out there was someone who runs a BBS in New York. She ran one of the very first BBS’es, and it’s still available today. If you want to sign up, it’s a monthly subscription, you pay, and they send you a packet in the mail with your sign on.

This is why I love that you’re my tech friend, because you’re so not like average tech bro.

I’m pretty mid according to my kids, so…

[00:07:46.03] You’re an awesome husband that makes cute stuff for your wife, and you’re always reading and informing yourself somewhere… Some people just like to virtue-signal, and we’re like “No, please don’t. Just stop.” But I think this segues really well into my article about why women in tech spaces are shutting down. Women Who Code is shutting down, one of Portland’s women’s groups is shutting down… She Geeks is shutting down… And Women Who Code - almost every female tech account on Twitter had something to say about how much Women Who Code has had an impact on their career, and getting them in. It is one of the biggest tech organizations. For one, I’m always constantly sharing their tech jokes on Instagram. They’re fire. They help so many women; they’ve done so many classes, they do so many summer camps for girls… They have so many speaking opportunities, so many scholarships… This is one of the biggest, and it is shutting down because of lack of funding. Because the first thing that happened when tech was zero interest rates and making money hand over fist – well, they’re still making money hand over fist. But when we got into the supposed tech recession, the first thing they cut was DEI, recruiters and any initiative to get people into tech that are not your average people getting into tech. They were the first teams to go. And then the funding, from all the tech companies that were going into these initiatives that were any kind of - what was the word? I guess donations… Those got cut, too. So it’s really sad to see such an impactful organization close down.

I saw that and I was like “What happened?” From the outside, not being a part of it, it seemed like it was successful and it was growing and things were happening… And then realizing how much does depend on corporate sponsorship. And in-person events are so important, where during COVID it was like “Oh, great.” I naively would think that “Oh, cool, virtual events are more accessible for people, and it’s easier for anyone to join, at any time, from anywhere, without travel.”

I do think they are though. I think that they’re extremely important.

Yeah, there is a lot of importance there. But also, there’s a lot of importance on that in-person connection. And when I think of every job that I’ve had in my past, it came from meeting someone in-person somewhere, at a meetup, or a conference, or…

There is a good medium, right? So I think smaller meetups are really good online, because people don’t always show up to the ones that are smaller… If you have a monthly meetup, even if I think you can do them in person or online… But I think for one, with women, we’re typically more caregivers, we’re typically… A lot of men will have wives that don’t work, so they can kind of pick up the extra… But very few women have husbands that don’t work, that do the caregiving. So even if you both are switching off, it’s still a very different situation.

So I think, any type of virtual, it’s better for moms, it’s better for disabled people, it’s better for military spouses, it’s better for accessibility for people… But I also think that in-person community – one thing I really noticed is React Miami, I’ve never seen so many women in pictures. I’ve never gone, but a lot of the frontend conferences seem to also have more women there… Which is sad, because the conferences that would be more for my work, you look and there are the same people. But you know what I mean? It is a very distinct difference. Scale was the most girls I’ve seen in a bathroom at a tech conference besides Grace Hopper… And it wasn’t that even –

Being at Scale and being part of the committee, sponsorship was down. And that was difficult. That made it hard to run a conference at that scale, at that size, to be able to [unintelligible 00:11:31.19]

And Scale is not even that big. You know what I mean?

No, it’s a pretty small-ish community-run conference, but it is still primarily in-person. There are live streams and we make some of that available. We don’t have the bandwidth or the ability to do the online networking with folks. Like “Oh, come hang out in this room.” And from my experience, a lot of that just never happened anyway. A lot of folks didn’t stick around. They’re like “Actually, I have something else to do.” And if you’re at your computer, you’re going to be distracted anyway… And so it’s like, what’s kind of the point there? But also from a speaker’s perspective, I have done virtual events where literally no one showed up.

[00:12:05.26] I’ve actually had really good turnouts for virtual events.

That’s great. I remember I worked on one for weeks, and it was just me and the person who would introduce me, was the only person that streamed it with me. I felt so broken…

I think we have to remember though that those recordings last forever. For instance, with Military Spouse Coders, because we’re all in different time zones, a lot of times our attendance will look like not a lot of people are showing up, but we get a ton of views after, because people are in different time zones… And sometimes if you’re chasing a kid, or if you have a doctor’s appointment, you can’t watch it then… But they’ll go back and watch it at night. We get a ton of views when everybody is settling down and they have a second to sit and kind of go over the material. So I think it also allows for consumption when people have the time. Even when I go to an in-person conference, sometimes I’ll go back and relisten to a talk that was really good, so I can take better notes…

I am one of the few people that will watch conference talks on YouTube, but I also know that what counts as a ton of views is not much. Most of the conferences will have a dozen or so views. It’s not hundreds, it’s not thousands. The really large corporate-sponsored events - those will get some stuff, because they put ad money behind it… But a lot of this stuff, it’s like “Oh, if I have the choice of a slimmed down YouTube-ified version of that talk that’s 10 minutes long, or the one with slides where someone’s talking to me for 40 minutes - I’m gonna pick the YouTube one, 10 minutes, every time.” And that’s where if you’re on those platforms, and you’re in those ecosystems, it just makes it easier to kind of consume the snackable content. I’ll scroll through a bunch of shorts before I even click on that one that was 10 minutes long. All of those things are trade-offs.

I think it depends, because some people never – well, I won’t say never, but they’re less likely to get that experience. If you don’t have the money to be able to go to those conferences, or to be able to kind of take off work… A lot of students may not come from the background where they can go to those conferences, or take time off work. They could be caregiving for their families, they could be working a job just to be in college… So I think definitely at a certain point yes, you get there… But some people don’t even have that opportunity, and that’s the only way they have access to it, so I just think it depends on perspective.

And I absolutely think that the information should be available, and we should make that freely available. And I’m even looking at pulling a lot of my own personal content off of platforms like TikTok and YouTube and making it available on my site without ads. And “Hey, this could just be on my website.” You don’t need to even spend the time commitment of “I have to watch an ad to do this thing.” If you want to get it for free without advertising, I’m going to try to make that stuff available for people. And that’s super-important too, but obviously, discovery at that point is hard, and people don’t know where to look.

I really think that was one of the best things about tech at one point - the fact that it’s so accessible, being able to get free content online that you can do in your own time… And that gives people the opportunity to be able to get into tech when we weren’t gatekeeping as heavily as we kind of are now.

My article is “Executing crons at scale.” It’s all about cron jobs at Slack, where Slack has these abilities to run reminders, and jobs and things that run in the background… And I loved how this article started off with just like “We had a cron server.” Like most every large company in the world, there is a cron server that sits there somewhere that has a crontab, and people modify the crontab to run their jobs. And at some level, that becomes not good enough, because the server doesn’t scale, too many conflicts, errors are difficult… All this stuff just gets in the way. And I remember my time at Disney Animation - we had two cron servers. That was how we scaled it up. We’re like “Oh, well, this cron server was for this group of people, and this cron server was for that group of people”, and we just scaled it up that way. You SSH in, you modify your crontab, maybe you check it into some sort of config management… But in general, it was just a server that was always running, to run jobs whenever.

And in this case, they decided at Slack that they needed something better, and something a little more scalable… And so of course, Kubernetes was the answer. And not just Kubernetes with a custom job scheduler; like, this has a full-on Kafka queue, Vitess, which is like a distributed MySQL database, and a custom scheduler… As well as their custom platform on top of Kubernetes.

[00:16:19.07] So I think it’s really interesting… Once you step beyond the machine, what components do you need to make this scalable or usable for a large company? And especially, I like these ad hoc jobs, because I can set up a Slack reminder for any time, so they can’t predict these things, and it will send me a message. And that literally gets queued on their job scheduling system, and then comes back to me as a message. And so I’ve found just the practicality of how that works to be really interesting.
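
For readers who want a concrete picture, this is not Slack’s code - just a toy sketch of the shape described above, where a scheduler pops due jobs off a time-ordered heap and hands them to a queue that workers drain. All names are invented, an in-memory queue stands in for Kafka, and threads stand in for worker fleets.

```python
# Toy sketch of the scheduler-plus-queue shape described above. Names are
# invented; an in-memory queue stands in for Kafka, threads stand in for workers.
import heapq
import queue
import threading
import time

due_heap = []                # (run_at, job_name) pairs, ordered by run time
work_queue = queue.Queue()   # stand-in for a durable queue between scheduler and workers

def schedule(job_name, delay_seconds):
    heapq.heappush(due_heap, (time.time() + delay_seconds, job_name))

def scheduler_loop():
    # Pop jobs whose time has arrived and enqueue them for a worker to pick up.
    while True:
        while due_heap and due_heap[0][0] <= time.time():
            _, job = heapq.heappop(due_heap)
            work_queue.put(job)
        time.sleep(0.1)

def worker_loop():
    while True:
        job = work_queue.get()
        print(f"running {job}")          # e.g. deliver a /remind message
        work_queue.task_done()

schedule("remind-justin", 1)             # an ad hoc, user-created reminder
schedule("nightly-cleanup", 2)           # a recurring-style job

threading.Thread(target=scheduler_loop, daemon=True).start()
threading.Thread(target=worker_loop, daemon=True).start()
time.sleep(3)                            # let the demo run before the process exits
```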

I find this interesting for one because I think Slack seems to be a lot of people’s favorite alternative to keeping in touch with people. I think Slack seems to me like it’s more loved than Teams and other alternatives. I love Slack, personally. And I think it’s cool that they’ve rolled out so many different services and ways that you can use Slack as more than just that way to keep in touch with work and with friends and yadda-yadda. But I also think it’s really interesting that it’s built in Golang, because - did you see that tweet that everybody was arguing about, like if people should go to Rust, and JavaScript, and I think another language, because they were like “All the other languages are not going to be used, and everything’s going to be built in TypeScript, Rust and one other thing.” And I was like “What?! Do you know all the infrastructure that’s built in Java and C++, and a million other things?” I actually think Rust is going to be very impactful, because so many things are going to be rewritten in it, like the Kernel… But how would you just completely forget that Go and other things existed? And there’s so much legacy software that is built in Java and C++ and C… I was just “How?!” PHP is gonna live on after we’re all dead and buried. So it’s cool.

There’s plenty of COBOL and Perl running out there, so…

And the people that write COBOL are getting paid right now… Because they’re the only ones left.

All [unintelligible 00:18:15.06]

Interesting [unintelligible 00:18:19.21] That’s pretty cool.

So Bedrock is there… It’s actually their abstraction of Kubernetes. I get the name confused with – there’s other tools [unintelligible 00:18:27.07]

Ooh, okay, okay.

They have a platform built on top of Kubernetes, which makes sense with Go, because all the Kubernetes tooling is Go anyway. So you have your Kube builders, and your scheduler stuff is all Go-based… And so it’s just like “Well, let’s just pick that.” And so yeah, they have a platform on top of Kubernetes called Bedrock, and then they built this messaging, queuing system with a scheduler.

Oh, I’ve never heard of [unintelligible 00:18:47.19] I’ll have to check that out. Have you ever used that?

Well, it’s used to manage [unintelligible 00:18:53.03] Okay, cool.

Right. Usually, you touch a file and say “If the file exists, then you don’t run.” And so yeah, it’s a way to do that better.

It’s pretty. I’m always looking for new Linux tools to try out.
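
The touch-a-file guard described there is simple enough to sketch. Here’s a minimal, hypothetical version in Python - the lock path and the job are made up, and tools like flock(1) handle this more robustly:

```python
#!/usr/bin/env python3
# Hypothetical sketch of the "touch a file; if it exists, don't run" guard:
# an O_EXCL create means two overlapping cron runs can't both grab the lock.
import os
import sys

LOCK = "/tmp/nightly-report.lock"   # hypothetical lock path

def run_job():
    print("doing the scheduled work...")

def main():
    try:
        # O_CREAT | O_EXCL fails if the file already exists (atomic check-and-create)
        fd = os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        print("previous run still active, skipping", file=sys.stderr)
        return 1
    try:
        os.write(fd, str(os.getpid()).encode())
        run_job()                    # whatever the cron entry is supposed to do
        return 0
    finally:
        os.close(fd)
        os.remove(LOCK)              # release the lock even if the job fails

if __name__ == "__main__":
    # A crontab entry invoking this might look like:
    #   */15 * * * * /usr/local/bin/nightly-report.py
    sys.exit(main())
```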

So let’s go ahead and jump into the interview with Mandi and talk all about AOL chat rooms.

Break: [00:19:11.23]

Welcome today Mandi Walls. She is a DevOps advocate at PagerDuty. But Mandi, first, I want to start off with a/s/l.

Yeah, right?!

If anyone knows what that acronym means, it’ll bring you back to a time that we want to talk to Mandi about… What were you doing – how long ago was this? 20 years ago or so? What was the infrastructure you were responsible for?

Almost exactly 20 years ago. I started at AOL in the summer of 2004, and I was there until 2011…

And in the space of that time I ran AOL’s channels. So News, Sports, Entertainment, Games, games.com, Moviefone, I ran aol.com for a while… And these things migrated across multiple platforms. So yes, [unintelligible 00:20:04.09] It’s now being torn down… But yeah.

That was amazing.

All gone. All gone.

How did you get started there? What brought you to AOL?

I was working at the National Institutes of Health. So we were down in the DC area, and I was at NIH, working for NHGRI, which is the Human Genome Research Institute at NIH. And we were doing a combination of Solaris and Linux stuff… And I’m a Linux person, and Solaris is a – it is what it is.

It was a silence there, that’s what it was.

Well, I was like “Is this a sweary podcast…?” And then AOL advertised “Hey, we’re looking for Linux administrators”, because it turned out they were moving off of commercial Unix, onto Linux… And I was like “Oh, that sounds more interesting than running a bunch of scientific software.” Which - human genome is super-interesting, but it’s its own pocket. It doesn’t move quite that fast; it wasn’t at that time. Yeah, and [unintelligible 00:21:09.12] joined at AOL. It was also much closer to home, because I was living in Reston, and NIH is in Bethesda. So that meant taking the horrible DC Beltway to work every day.

That sounds not fun. Not even a little bit.

So bringing this back 20 years, what did the tech stack look like for an AOL chat room?

Yeah, so AOL had a mixed bag… So it’s interesting, because at the time no one really talked about what they were running. Everything was very secretive at that time. And even to the point where if you wanted to know what was going on at AOL, you probably had to read Kara Swisher’s columns, to find out what was going on… Because things were just kind of secretive there. But the platforms behind all of AOL’s products were all significantly different, which was super-weird, and really just bizarre. There was no consolidation at that time. Everything had kind of gone its own direction for the things that it needed… And they had bought other stuff; like, Prodigy was in there, and it ran on 36-bit machines, and you’re like “Where did these even come from? Why is this still here?”

[laughs] 36-bit?

Right…? It was super-weird… And there was just other weird stuff in the mail system… And chat has its own infrastructure, and then we worked on the website, so I was in web operations… And our stuff had been a combination of Solaris and IRIX.

Because if you’ve got money to burn, you might as well buy IRIX. And they were moving everything onto commodity hardware, into at the time RHEL. So early versions of RHEL, so 2.1 or whatever that first release was at the time. And that’s what we were hired on for. There were a couple of us that were all brought in around the same time, 2004, to help with the Linux side of the house, and we just stuck around. But yeah, it was a big mix; a lot of different stuff behind the scenes over there. Because everything was built at different times, and as they added new features, they just built whatever worked best for that platform, and off it went.

Whatever they knew, right? …from some other experience of like “Oh, I played with this last week, so I’m going to deploy it to production now.”

Yeah. And off it went.

I feel like that still happens a lot, though. Companies, especially big enterprises, don’t let their teams talk to each other, and then they just end up building – there’s six different databases…

You’re like “But why? You could share knowledge about the one”, or especially if it works for –

“Or you could have your little empire over there, and I can have my little empire over here, and we can battle it out.”

Never will they war. This is not –

Yeah, right?

But I mean, you’ve just described microservices to some extent. It’s just like “Oh, this is just like madness over here, and now we consolidated, and now we went back to madness.”

It’s just wild they’re not allowed to talk about it… I’m like “You could have got advice, or something.” I don’t know.

Not the way things ran at the time. Super-crazy.

[00:24:09.05] So you had this mishmash of infrastructure and tooling, and you’re moving it onto RHEL… And what were you responsible for in that migration? Were you doing provisioning, and Linux servers? I’m assuming this is hardware stacks, and you have data centers, places, and…

At the time we were buying a combination of – well, they’d put everything up for bid, because you’re gonna buy half a million dollars of hardware at a time… So you’d get like a six-month bid-out on whatever they’re going to put in the data center. So sometimes we’d get Dell machines, sometimes HP would win the bid. So you’d be flipping back and forth; we’d have a mix of hardware, and mix of ages, things would go back on lease return… A lot of the gear at that time - they’re in owned data centers, but the gear is leased, so they’d go back. And so you’re just constantly refreshing the farms, and all the fleets were constantly in motion for things coming in and out… And if you needed to scale anything up - that’s a requisition; it’s not a “Slide your credit card in the cloud and get more gear”, it was “Oh, it’s a ticket, and four teams are involved, and there’s all this budgeting…” And if you happened to get extra hardware, you’d hide it in a different project for a while…

You don’t tell anybody… [laughs]

Right? So you didn’t have to return it… And it would just kind of sit there; nobody’s gonna notice… “There’s four or five machines over there, just in case we need one.” So there’s a lot of begging, borrowing and stealing of systems around the system, because there’s just – we could not get capacity onto the floor fast enough for the way things were being built out. It was just absolutely nuts.

So this is 2004. CDs started disappearing from supermarkets in the late ‘90s or so… So this wasn’t dial-up days. This is like AOL, post–

Well, dial-up was still printing money at that time… But yeah.

But you’re moving into – like, you have these services… And what kind of capacity are we talking about? Do we have hundreds of machines, do we have thousands of machines? Do we have dozens of data centers? What sort of scale?

All of that!

Okay. AOL was big. It was THE thing.

And now I read white papers that are like “Oh, we have 5 million hosts over here.” I’m like “What?! That’s a different number.”

There probably weren’t 5 million hosts on the internet in 2004, right?

Yeah, exactly.

The capacity constraint was so different. But yeah, 2004 was sort of the beginning of web 2, so the beginning of what we call the portal era. So Yahoo, and AOL, and that stuff… And Google was just kind of rising at that point. So part of the insanity was we had our own web server. So the AOL server was written in C…

Oh, like you had your own-own.

Yeah, exactly.

It’s like “There’s no Apache here.”

No, not at all. Not until 2008 I think is when we started going to Apache. So yeah, it was AOL server, which is C core with TCL as the user language. So TCL, Tool Command Language… If you’re not as gray as me, you probably have never seen it. Its other claim to fame is that it ran TiVo. The TiVos were programmed in TCL. So it was AOL server and TiVo, done in TCL. And yeah, so we’re porting all that stuff over from the Solaris boxes onto Linux boxes, and spreading it out, because the capacity at the time was kind of stranded. So AOL had regional data centers, and they were large, and they were owned. That was the big deal at the time - you owned your data center. And we were getting into – as things were growing in capacity.

So at the time, aol.com was like the sixth-largest site on the internet. It was big, so we were spreading things out, trying to collocate things closer to users… And this is at the same time the rise of global DNS sharing… So Akamai was the commercial provider of the time for that stuff, where you’d go to www.aol.com and it would point you to the closest place. And that was Akamai handling all of that stuff.

[00:28:08.18] So we had hundreds of servers in a dozen locations to serve the US… And there were these little pods that ran with all the stuff you needed on the backend as well… Because when you log in as an AOL user, it knows all this stuff about you. So it knows who you are, and what you like, and what you wanna see on the homepage.

And if you have mail.

And how much mail you have. All this stuff. So we had to bring all that stuff with us when we’d load up these localized data centers. So there’s – yeah, there was a whole lot of stuff all over the place at that time. And all these – the owned data centers, the big ones, and then these colos. So it was just crazy. There was just a lot of stuff everywhere.

Mandi, I love you. I don’t know you, but we’re gonna be besties. Do you know how hard it is to get people who work in infrastructure with a personality? Like… [laughter]

Oh, we could talk, yeah…

Can we just talk – you said pour one out for the… Like, we are right here. I just – I love you, and where have you been my whole life? We’re gonna be besties. Obviously, cloud and on-prem have their place, right? But because you were in the trenches with on-prem and with building infrastructure 20 years ago, is there ever a time when people get – you know how sometimes we think back at the past and we’re like “Oh, it was great”, but it wasn’t great? You just made me feel so grateful for the fact that I started tech in the cloud… Because like “Yo…” That’s a lot. So is there anything when people say stuff about running infrastructure on-prem and they make it sound easy - do you ever side-eye them and you’re like…? [laughs]

Oh, absolutely. Like, if you haven’t been running a crash cart down the cold aisle, trying to plug in and fix something, you haven’t lived. But also, I feel bad for you. It wasn’t fun.

I love you so much…!

It wasn’t fun, man. It wasn’t fun.

It was a great experience, but “fun” was not a word for it.

Not at all. We learned a lot of lessons. That was the learning period. We know what not to do. There’s a reason people love the cloud, is because this other stuff is mayhem. And it’s just crazy.

That’s what I’m saying. They both have their place, and there is a point where on prem just makes more sense. That’s just how it is. But sometimes I feel like we romanticize things a little bit, when we get too far, and –

Yeah, infrastructure people like the control of being on prem, and being able to artisanally curate their switch ports, and all this stuff…

It’s like when people start making coffee, and they do pour over, and they’re like “Because I need it to take 35 minutes”, and you’re like “Bro, you could have just made an espresso.”

Life is too short for this.

We’re a culture that sometimes likes control so much it turns into misery… When people are like “I want to control my own servers for social media”, and I’m like “Dude, I have to do that at work. I don’t want to –”

[unintelligible 00:30:56.18]

I think we get to a point where people really romanticize too many options, and then I’m like “You know what, I’ve got a whole life, and maybe I don’t need all of those options.”

Right? Totally. It’s definitely like that. Yeah.

I love you. We’re going to be besties. You’re so funny. Okay, what is the craziest thing that happened to you when you worked at AOL? Did you ever have like a big outage, or…?

Oh, absolutely. What was the worst? Tell me the best horror stories. Because I just want you to know that you’ve made 12-year-old me so freaking happy. I was sneaking on the internet at 10 o’clock at night when my parents went to bed… The sound of AOL starting up is the sound of my childhood. You made my whole little teenage nerdy finding friends on the internet life.

That’s cool.

You powered my teenage years.

[00:31:50.28] The irony is I was never an AOL user before I joined AOL. I had no experience with the service at all, because it was a long distance phone call from my parents’ house to the local POP. So that was not going to happen from my parents’ house. But yeah, the biggest outage we probably had, the one that still gives me nightmares - so one of the deep configurations in AOLserver is that you can actually get into it and see what each individual thread is doing. It’s super-cool. It can tell you exactly what request every thread is serving. But you can also then see “Hey, all my threads are full. What is going on?” And then you have to get into the configuration and tweak how many threads there are.

So when we were doing a deploy to aol.com - and I think we were in five or six data centers at that time - and you drain one, load the software, and pull it back up, and then it rebalances on the global DNS. Well, it would load up, and then the threads would fail. You’re like “What is going on? Why are the threads full?” Something in the new software is just a little bit too slow, and it turns out we only had 10 threads available on every server… Which is not enough. At the time it seemed like “Oh, 10 threads in there… It’s pretty quick”, but it slows down enough that it would block the entire thing.

So what you’d get is a stampede. So one datacenter would fall over, and then all the traffic would swing to another data center, and then that data center would get flooded and fall over, and all the requests would failover to another data center. So you could watch all the traffic sort of spike all over the place, until we got a fix pushed out to get more threads into all the systems.

So it was fascinating, but also kind of a nightmare, because we were dealing with push-based SSH in a loop really to get all these configs out to all these systems, and then finding – we weren’t using any version control for any of this… Like, come on, it’s 2006 or 2007. That really wasn’t gonna happen. And so there’s certainly no configuration management, we weren’t doing any of that cool stuff… So yeah, we were just sitting there, waiting in a loop for all this to fix itself. So the whole farm would quiesce, and all the services would come back up. So that was a bit of a nightmare. It took about half an hour to get the whole thing straightened out.

And the question I had was “How did you do those deploys?”, and it was basically - because there was no version control, there was no config management, no such thing as containers… So it was just like “I have a file, it works on my system… SCP it to every machine”, right?

One hundred percent. And they’re all bare metal. It’s all bare metal at that time, too. There’s no VMs, no containers… Everything’s bare metal, everything’s individually IP-ed, off it goes, and you have to push to each one.

So you had your CSV file of all your inventory.

Yeah, we had a machines.dat. That was its name. And it was a text file, it was space-delimited, so whitespace…

Yup. So you’re [unintelligible 00:34:41.09] those fields, and you’re just like “Go!”

Pulling that out, piping it right into the loop, and off it will go. Yeah, it was crazy.
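
For anyone who never lived it, the loop being described might have looked roughly like this - a sketch only, with the machines.dat layout (whitespace-delimited, hostname first) and the paths assumed from the conversation:

```python
# Rough sketch of the push-based deploy loop described above: read hosts out of
# a whitespace-delimited machines.dat and scp/ssh to each one in turn.
# File layout, field positions, and paths are assumptions for illustration.
import subprocess

ARTIFACT = "site.tar.gz"                 # hypothetical tarball to ship

with open("machines.dat") as f:
    hosts = [line.split()[0]             # assume the hostname is the first field
             for line in f
             if line.strip() and not line.startswith("#")]

for host in hosts:
    print(f"pushing to {host}")
    # No retries, no parallelism, no record of which hosts succeeded --
    # exactly the fragility described above if you Ctrl+C halfway through.
    subprocess.run(["scp", ARTIFACT, f"{host}:/opt/site/"], check=True)
    subprocess.run(["ssh", host, "tar xzf /opt/site/site.tar.gz -C /opt/site"],
                   check=True)
```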

All the bad words I say to Git, you’re making me really grateful for it.

Yes! Yes! Be thankful for Git. Be so thankful for Git.

I’ve said some really mean things to it, and now I feel like I need to go back and apologize.

I know. It’s karma, right? It comes back to bite you. And that one experience was a big part of why I went to Chef after I left AOL, because I was like “There has to be a better way to do this.”

But I feel like you were – like, I didn’t know you when you worked there or what you did, but I felt like your voice and having your voice in those rooms were probably like fire, because you were in the trenches…

I’d be on mute. I’d be on mute a whole lot. Yes. I mean, we had definitely different outages where you’d be on a headset, like a battery-powered headset or whatever, and you’d be on it so long that the headset would die… There were some dark hours.

How big is the team that’s running all of these services, and web services for that?

It would vary. Four to six, eight at the max… Like, these are little teams. The engineering teams are huge. We ran the channels, which was called Big Bowl. So if you’ve ever been to Chicago, there is a place called Big Bowl; it’s a restaurant. There’s also one in Reston. And that’s where they came up with the concept for this. That’s what they called the product, it was a Big Bowl. We ran 200 DNS names or so, 70 channels across it…

[00:36:08.29] I felt eight people was not enough for that.

Right? Hundreds of developers dealing with this thing…

You’re giving me anxiety.

Itty-bitty teeny-tiny operations team to deal with it. But that was the thing with the monoliths; you could observe all of this stuff out of one big spaghetti mass of code, and then hope that the handful of people on the other end could figure it out when you screwed it up. So… Yeah. It was little teams; very small teams for all that stuff.

Yeah, no developers on call… What even was – on call, you’d have like a USB [unintelligible 00:36:44.28]

I’d have a pager. Like a legitimate, actual old-school Motorola pager that we were all assigned. And the NOC would call us. So AOL had a NOC, which - you have to at that scale, really. And those folks were on all the time. They were based in Columbus at that time.

What is a NOC?

Network Operations Center.

Interesting. Okay.

So they’re 24 by 7, 365, and just rotating teams, watching the blinking lights. If anything goes down, they’re on the [unintelligible 00:37:16.07] They’re calling up on the phone, “Something’s down. Can you log in?” You’re like “Well, yeah, I guess so. I was eating dinner, but whatever… Yeah…”

Did you use AOL chat rooms for coordination on your teams, or anything? Or was that too –

No, there were some weird shortcomings with the chat rooms, in that you couldn’t really put them together for teams. That made it super-hard for us to use our own products to actually talk to people. So think about Slack and stuff today - it’s super-easy. You can add as many people as you want to a channel. You couldn’t really do that with the AIM stuff. So we’d use it for person to person, but we had our IRC channels that we ran internally to talk on teams.

Yeah, I was gonna say [unintelligible 00:37:57.10]

That’s crazy. I didn’t think about that. But I don’t think I’ve ever talked to – unless you were in an actual chat room… I don’t think I ever talked to people in groups on AIM, so that’s crazy. Slack definitely spoils us. I can’t even go to Teams. Slack has ruined me forever. I have friends Slack channels. Like, whole Slacks just for friends.

Of course. Yes.

Every old company that I went to, I have like an old coworkers Slack.

Yes. I’ve got like a nonprofit Slack, and then we’ve got like a friends chat… Slack has ruined us all. They know they’ve got us.

Absolutely. So easy.

It is. Oh, man, that’s crazy, just thinking about the fact that you couldn’t even use AOL to do your stuff internally. It’s just…

Yeah. And there was other stuff that was – if you think about it, the product AOL Mail was very much consumer-focused. But we’re a tech team at that time, and we want procmail rules to move mail around, and all that stuff, that you couldn’t do with AOL Mail at that time. So even then, operations had our own mail server on a different subdomain, and that’s where we kept all our mail. It was just so divided from the customer experience; probably not the best way to do that if you’re really down with preserving things from the customer… But the consumer products weren’t suitable for the users on the tech side.

It’s interesting, you’re describing this wave that we see over and over again, in any technology, where it’s like “Oh, this consumer thing is great, and it’s mass-adopted, but it’s not flexible enough for the power users, for people that really want to dive deep into it”, and so we switch back to this “You have to run that yourself.” And a lot of people ran their own mail servers for a very long time, because they needed that power, they needed bigger scale, whatever… And then it consolidated, and we’re like “Oh, now guess what? The consumer products get some of those features”, and bring some of that power into “Oh, Gmail can just add filters for me”, and I can do that routing, all that stuff.

[00:40:01.11] And then I’m wondering what the next shift is going to be; what the next gap in consumer features is, that we’re like “Hey, guess what?” Maybe at some point the cloud makes things boring and easy, and then you’re like “Oh, but I can’t do the thing that I need to, so I have to go buy a datacenter, or buy some hardware”, that sort of stuff.

Yeah. It’d be super-interesting.

I think we’re already at that point. Look at how much people like running their own servers for Mastodon, and stuff.

Those people are weird…

I remember being behind a startup, and in line for an observability booth, and people were talking about running servers in their grandma’s garage… And I’m just like “Are we back to this?” I’m like “I feel like we’ve already done this and bought that T-shirt. Is this cool again?”

I mean, I never stopped, so I don’t know what you’re all talking about… [laughs]

No, I don’t run anything at home anymore. I used to have a whole bunch of stuff, but I’ve moved a whole bunch of times, I’ve moved abroad for a while, and I came back, and I’m like “I think I’ll just put all this stuff in the cloud. I don’t need to have it at home anymore.”

See, all my stuff’s in the cloud, but my kid wants to run stuff on a Raspberry Pi, and I blame Justin osmosisly…

I mean, I’ve been running a home theater PC of some sort and a NAS at my house since 2005.

Oh sure, yeah.

In college we had them, too. We had them probably in 2003. And ever since then, I just kind of got hooked, and I’ve just run them ever since. And it varies in what I’m running, and what hardware and whatnot, but there’s always something that runs locally, and I have backups and storage and stuff like that… And I do own some of that. And I don’t need all the power. I want the consumer version of it most of the time… I’m just like “I just want something that works.” But I paid for it once. My Plex server and my Synology - it was like six, seven years ago that I paid for it upfront, and I’m just like “Yeah, it just works.” And we’re fine.

You described outages, you’ve described a little bit – all of your updates, basically, for chat rooms were just like SSH for loops? So just like “Here’s the files, here’s the new thing”?

Yeah, everything came through as a tarball. And if you were lucky, it would be production-ready. And if you weren’t, you had to open the tarball, fix a config for prod, reroll the tarball and push it back out.
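
That unwrap-fix-reroll dance is easy to picture as a script. Here’s a hypothetical sketch using Python’s tarfile module - the archive name, the config path, and the dev-to-prod substitution are all invented for illustration:

```python
# Hypothetical sketch of "open the tarball, fix a config for prod, reroll it"
# using Python's tarfile module. Archive name and config path are invented.
import tarfile
import tempfile
import pathlib

SRC = "release.tar.gz"
DST = "release-prod.tar.gz"

with tempfile.TemporaryDirectory() as tmp:
    with tarfile.open(SRC, "r:gz") as tar:
        tar.extractall(tmp)

    # Patch whatever was left pointed at dev/integration instead of prod.
    cfg = pathlib.Path(tmp) / "conf" / "app.conf"        # assumed location
    cfg.write_text(cfg.read_text().replace("env=int", "env=prod"))

    with tarfile.open(DST, "w:gz") as tar:
        tar.add(tmp, arcname=".")

print(f"rerolled {SRC} -> {DST}, ready to push back out")
```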

So you are one of the few people in the world that know all of the tar flags.

Yeah. I know the old ones, and I use a hyphen, and [unintelligible 00:42:24.21] like “You don’t need a hyphen anymore. [unintelligible 00:42:26.27]

[unintelligible 00:42:27.12] You don’t need that hyphen in there. That’s just a wasted character.

“I have so much muscle memory on this… What are you talking about?!”

You did not say a wasted character…

It is a wasted character! You do not need the hyphen [unintelligible 00:42:38.09]

I’m done with you… It’s fine.

Those are the things that learning through that period of just like “I have to get this script right the first time, because it’s going to be deployed, and I don’t want to run this script again, because then I’m going to Ctrl+C it in the middle of my for loop, and I don’t know which servers are good. So I have to do it all again.” So it’s like, you’re gonna learn the tar commands… I learned regex early on from that, and it just has stuck with me… And it’s one of the best things that I learned, because I’m just like “Guess what - this applies in a lot of situations.” And now that [unintelligible 00:43:08.00] command is not scary. I’m fine. I’ll get it on maybe the second try now, but it’s just like “Oh, these things are pieces that I learned through doing it and struggling over and over again, on call.”

And that was one of the great parts about working on a Unix platform, just at the foundational level. The individual tools are so neat, and you could plug them together so well… So yeah, we’re able to read through machines.dat, pull things out, sed and awk, and send it off to the for loop super-easily… But we had scripts that would do whatever, and once we got to Java – so we migrated from AOL server to Tomcat in 2006, I think…

War files now. You don’t get a tar, you get a war.

[00:43:50.01] You get a war file, with an XML config wrapped in it, which is a nightmare… And no good practice around making sure things are good and ready for prod. So we’d be unwrapping everything, and then rolling it back up and pushing it out, just to make sure. Because one of the interesting things about AOL at that time - it was like the only place that I’ve really encountered that spent a lot of money on the non-prod environments. So there were full deploys across for dev and integration testing… Because if you were going to integrate with the dial-up stuff, there’s this service called Unified Preferences that held all the backend information about all the users… And if you were going to integrate with that and pull it in, you had to load it up in the integration environment, make sure all this stuff worked. So we had all these environments and all this stuff… And we’re always getting stuff to go into prod that was configured for dev, and integration, and not for where it was supposed to go, for any reason…

What sort of data behind the scenes were – is this MySQL then?

It was. It was MySQL… And one of the unfortunate things about that era was that there wasn’t then a lot of open source out of any of those. AOL server was open source, but there were so many other cool bits and pieces that AOL – and Yahoo too, actually, at the time… That just never made it out into the world. So we had these MySQL servers, and they had this proxy software in front of them called Atomics… I don’t remember what it stood for. But it basically allowed you to put HTTP calls into your database, so you could put it behind the [unintelligible 00:45:21.00] and do round robin across a set of databases that were all replicas of each other. And it made it super-easy to deal with the databases. It would have been so neat if that thing had made it out into the world for other people to use, but it never did. But that was the backend of those systems, was MySQL servers at the time… So yeah.
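
Since Atomics never made it out the door, here’s only the general pattern being described - one entry point spreading reads round-robin across a set of replicas - with the hostnames and the query call invented for illustration:

```python
# The general round-robin-across-replicas pattern described above; not AOL's
# Atomics, just an illustration with invented hostnames and a stub query call.
import itertools

REPLICAS = ["db1.example.internal", "db2.example.internal", "db3.example.internal"]
_next_replica = itertools.cycle(REPLICAS)

def query(sql):
    host = next(_next_replica)   # spread read load evenly across the replicas
    # A real proxy would open a MySQL connection here (or, like Atomics,
    # accept the query over HTTP); this sketch just reports the routing.
    print(f"sending to {host}: {sql}")

for _ in range(4):
    query("SELECT * FROM channels LIMIT 10")
```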

There probably wasn’t a ton of choices for databases either, right?

No, because it was all commercial. So some of the older stuff ran on Oracle, but then the web stuff to get the kind of scale out of it, you don’t want to pay for Oracle…

Oh my gosh, I did not know that this was open source. I’ve just found the GitHub. github.com/aolserver/aolserver. Last commit. Oh, there’s one that was two years ago, but everything else is like 20 years. 19 years ago, 21 years ago… This is amazing.

Yeah, it’s classic. So the other dirty secret of AOL server was that the guys at Bitly were AOL employees, and they took AOL server with them to Bitly. So there was some AOL server behind Bitly for a long time. I think they’ve migrated off of it now, but… It was over there, too.

There’s your nsconfig TCL file right there.

There you go. Yeah.

This is way back. This is amazing. I love that.

Yeah. It’s all there. If you want to run it, go for it, man. Yeah.

It’s also crazy, because back in the ‘90s and early 2000s AOL and Yahoo were so big. It’s hard to imagine how it is now, where Yahoo is barely existing, and AOL is gone. It’s crazy.

Which is funny, because - I mean, sheer scale… Yahoo is probably still bigger now, in infrastructure and development, than it was then…

It’s so huge.

There’s just so many more people, and there’s so many other things to do. These are still big things, they just aren’t in the mindshare, and they aren’t the common thing you really think about anymore.

Absolutely.

They were like the biggest email providers at the time. Yahoo and MSN… It was crazy.

And then Google came along like “One gig of free email” and everyone was like “Ah, screw that.” I was deleting every old – like 10 megs? “I don’t know what to do with this.”

Well, not just that, but Google has an integration to use your mail for everything. So I’m just lazy and don’t want to make six different logins, and I’m just like “Sweet…”

It changed the whole landscape of that stuff.

So you left in 2011, right? And this is right around – like, DevOps was a thing. It was becoming – all of those lessons learned that you’re talking about were definitely coming into view publicly for people, and they started talking about “Hey, how do we not throw things over the wall? How do we do this config management stuff?”, all that stuff. So what was it like at the tail end of “Oh, hey, we’re going this direction”, or “We’re going to change everything to make it better, hopefully, for the ops team or the developers”?

[00:48:04.23] Yeah, AOL wasn’t headed in that direction when I left. So the Velocity Conference kind of kicked all this off. The first one of those was in 2008, and then things kind of got rolling after that, with web operations being something that was like – you had to do it at scale, you had to think about things a little bit differently than people had been thinking about systems administration in the past… And also sort of taking into account “Yeah, you can’t do this at massive scale with these tiny little teams, when you’re just on the receiving end of a waterfall of garbage from the application teams.” Because they’re being slammed in the head for deadlines, and all this stuff…

Yeah. Features, and – yup.

A number of places where everybody had just crazy expectations that no one was going to meet. So at the time that I left, AOL wasn’t really headed in that direction yet. It was a very tumultuous time at AOL. They were ingesting the Huffington Post at that time, so that was a whole other platform they were dealing with…

I didn’t even know they bought that. That’s crazy.

Yeah, we were at 770 Broadway when HuffPo got bought, and they all came in and took two conference rooms to be Arianna’s office, and she had these nice couches… It was very nice, and we’re like “What’s going on?” “Oh, that’s Arianna’s office.” “Okay… Can’t go in there anymore.”

But I think they figured it out eventually. There’s still folks over there that are running all these things. Like you said, stuff is still there. Yahoo and AOL are now owned by – I think it’s one entity. It was under Verizon for a while, and then I think they’ve been spun out or whatever… But they’re all still doing their thing, and… I think Kara Swisher had their CEO on her show, on her “On with Kara Swisher” podcast last week or a week before, talking about what they’re doing over there… Because AOL was the thing. If you were like a Midwestern housewife, home in the middle of the day, like - I know what you were doing, because you were on our systems. [laughter]

It was kind of cool. You could see things… It was so much in the zeitgeist that you could see real life mirrored in the metrics. For things like the Super Bowl, right? And you’ve got two quarters of play, halftime, two more quarters of play, and then you’re done. So if you were watching the sports channel and the rest of the channels at AOL at the time, you’d see the traffic bottom out while the game is running, pop back up as everybody’s checking their news and email during the halftime show, bottom out during the play, and then come back up to normal after the show. And you could do that for any big stuff: the Emmys, the Oscars… It was crazy what you could see as a reflection of what people were actually doing in real life, because we had so much of a view on it… Because it didn’t matter, it was all in the monolith, whether you were looking at sports, or news, or weather, or entertainment, or anything; it was all there, and we could see all of it.

Did you ever start recording any of that data, or learning from the traffic patterns? Because I had a friend who worked at MSN at the time, but I think after they were – were they bought by Microsoft? Something like that.

Yeah… Hotmail was the original product there.

And they were already kind of starting to record clicks, and data, and what people were doing… And it’s interesting, because I feel like people are under the impression that collecting data and learning through data is something new, but we’ve been doing that for forever, you know? So it’s like… Did AIM do that at any point, or…?

Oh, I’m sure AIM did. And on the website, we did, too. There were tracking pixels, and other cookies, and all kinds of stuff built into all the pages, to know what folks would do, where they would click in different features, so that you’d know “Hey, they really engaged with this particular thing, but they didn’t engage with this other thing… So we’ll put more development on this particular module”, whether it was like a horoscope, or weather, or whatever it was.

Oh my God, I used to always check my horoscope.

You had to check the horoscopes, right? I’m glad you did, because they were paying a bunch of load… So [unintelligible 00:51:49.02] and that was important. But yeah, there was all kinds of commercial products at the time that were helping us out on all that stuff… They’d spend a lot of money on that, because you want to push your resources to the things that people are going to engage with, because ultimately you’re selling ads. And when you have something as big as aol.com, you make all of your money for the entire year out of ads before the end of February.

You’re making a lot of money to run that thing, and it then gives you the capital to run everything else afterwards. So yeah, a lot of cash there.

I love just like the use of data to learn more about customers. I feel like people are almost outraged about social media in different places taking your data to learn about you, and I’m like “We’ve been doing that for forever.”

And the data then - there were no metrics, or even like open telemetry… You’re just looking at AOL logs, hit logs. Like “Hey, this is how many people are coming through.” I can scrape it and pull the IP address to get some basic information, but that was the data, was an access log.
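
That kind of access-log scraping is about as simple as data collection gets. A minimal example - the sample lines follow the common log format, and the real AOL hit logs may have looked different:

```python
# Minimal example of what you can pull out of a plain access log: grab the
# client IP from each line and count hits per address. Sample lines are made up.
from collections import Counter

sample_lines = [
    '203.0.113.7 - - [01/Feb/2004:18:30:02 -0500] "GET /sports HTTP/1.0" 200 5120',
    '198.51.100.9 - - [01/Feb/2004:18:30:03 -0500] "GET /news HTTP/1.0" 200 2048',
    '203.0.113.7 - - [01/Feb/2004:18:30:05 -0500] "GET /mail HTTP/1.0" 200 1024',
]

hits = Counter(line.split()[0] for line in sample_lines)   # first field is the IP
for ip, count in hits.most_common():
    print(ip, count)
```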

Yeah. But it’s gotten progressively – like, just look at that Target thing, where people… Target was sending people baby coupons before they knew they were pregnant. And that was like 10-15 years ago. So just progressively, the more data that people started collecting, they started using, and it’s gotten more and more, I guess, accurate in some ways… But it’s just interesting, there’s so many different ways to use data to either sell, or learn more, or to improve your product, and it’s crazy that we’ve been doing it for so long and it just keeps progressing.

And you wonder why people want to run their own servers… [laughs]

I love that though. I remember I got to meet one of the ladies who did Alexa Shopping or something at Grace Hopper, and I was like “Yo, you keep reminding me to buy more popcorn. I love you.” I mean, to a certain extent, right? You don’t want people to have sensitive data… But we use that every day. Facebook’s like “Do you want this new pair of Converse?” I’m like “Actually, I do… They’re really cute.”

And it’s been an interesting evolution, because like you say, we really only had access to whatever tracking pixels they put on the page, and that went to the product managers; and then on the operations side, all you really got is the access logs. So you can see regionally who’s coming in, where they’re from… Do we need to then break things out, so we’re closer to those folks, so you can make operational decisions? But then - yeah, you can see “Hey, for whatever reason, today no one’s engaging with this particular channel. What’s going on over there?” and they can look at that sort of impact of an editorial decision, or what kind of features they’ve published today that people aren’t engaging with. And it’s interesting in that it’s real time without being participatory. The users aren’t giving you more than what they’re looking at. Like, there’s no data coming in from the user, like there is with social media. So they’re not telling us “Oh, hey, I went to the park today with my friend.” They’re just clicking on whatever information.

You see the actions, [unintelligible 00:54:49.29]

Yeah, just reactions.

How did that apply to scaling? Because you mentioned during Super Bowls, and those things, the chat rooms would get really busy… How did you handle that on the backend, especially being on prem, and having hardware that you have to buy? Like, “I need to scale this thing up”, and you have to make some operational decision. That’s like a six-month process.

You overbuilt. Everything was overbuilt. Absolutely overbuilt. And when we had something like dot com, or the channels, or whatever, that had to be in multiple locations for DR, or whatever, you made sure that every location could handle all of the traffic at your anticipated peak. And we can look back – one of the other things that AOL never got to open-source was their monitoring system. And it had a – I forget its name; it was weird. There were like tuna boats or something involved. It was strange…

Wait, tuna boats?

Yeah, that was part of the transports. I don’t remember all the details there… But it was aggressive. It was really, really good, and gave us a lot of information about when you were hitting peak. And we could put custom data into it, and a bunch of other really interesting things that were unique at the time… And again, it would have been interesting to see if it had gotten open source, what people would do with it. But it would give you enough to know “Hey, you need to bulk this thing up”, because like you said, there was no dynamic provisioning. It was all solid-built bare metal at that time. Everything has to be fully deployed.

[00:56:17.23] You get the page and then you dust off those extra machines you had in the back and you’re like “Hey, these are gonna be web tier now.”

Right? If you need to redeploy, then you need to pull in the extra machines you squirreled away in some other project, and reprovision them, and you might have to reload their operating system because they’re on the last version… And then put on the runtimes, and the current code, and hook them all in, put them in machines.dat and off they go… So they get all the hookups… Because in the backend there was a custom CDN, and small object brokers, and repeaters for commands, and all kinds of other weird stuff they had to talk to for that particular platform. So yeah, there was no quick scale-up, so we were overbuilt all the time, on all those platforms.

Again, I’m so grateful for the cloud…

Absolutely.

Also, I just feel like that is amazing. You lived in a really exciting time, even though I feel like it must have been very hard at the moment… But, I mean, you’ve got street cred, Mandi.

I mean, we learned a lot of stuff. We learned that push sucks for deploying stuff, so you want pull-based deployment as much as you can… Because you don’t know what’s down out there. You’ve got 1000 machines; at any given time one or two were probably offline, taking a nap, doing something… All that stuff. And deployment being ready for prod - that was a huge thing to try and teach engineers to think about, because they don’t know anything about prod. They don’t know what prod looks like.
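To make the push-versus-pull distinction concrete, here’s a hedged sketch of a pull-based deploy agent - a hypothetical illustration, not AOL’s actual system: each host polls for the desired version on its own schedule and converges itself, so a machine that was napping during a rollout simply catches up the next time it checks in.

```python
# Hypothetical pull-based deploy agent (illustration only, not AOL's system).
# Each host polls a published "desired version" and converges itself, so
# machines that were down during a rollout catch up on their next poll.
import json
import subprocess
import time
from pathlib import Path

# Hypothetical paths: where the desired state is published and where this
# host records what it is currently running.
DESIRED = Path("/srv/deploy/desired.json")      # e.g. {"version": "2024.05.01"}
CURRENT = Path("/var/lib/myapp/current_version")
POLL_SECONDS = 60

def deploy(version: str) -> None:
    # Placeholder for the real work: fetch the artifact, install it, restart the service.
    subprocess.run(["echo", f"deploying {version}"], check=True)
    CURRENT.parent.mkdir(parents=True, exist_ok=True)
    CURRENT.write_text(version)

def main() -> None:
    while True:
        try:
            desired = json.loads(DESIRED.read_text())["version"]
            current = CURRENT.read_text().strip() if CURRENT.exists() else None
            if desired != current:
                deploy(desired)
        except (OSError, KeyError, json.JSONDecodeError):
            pass  # desired state missing or malformed; try again on the next poll
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```

The design point is that the hosts own the loop: a push has to reach every box right now, while a pull only has to be there whenever each box wakes up and asks.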

Well, they were just learning how to code, right?

Exactly.

This is the early 2000s. It’s just like “I don’t know… Just learn the language and bang out some characters, and ship it.”

They’re busy trying to figure out Tcl, man. They’ve got no idea.

Is there anything you miss about those times, being an engineer in those times, compared to now?

No… We eventually had a pretty good relationship with the engineering team. And I feel like if you’re in certain kinds of DevOps or SRE type deployments, you might not have as good a relationship across lots of engineering teams that we had… But that took work. That was hard to try and persuade people to come to the table with us and talk about “Hey, we want your stuff to succeed. We’re not here to turn your stuff back and make you go back to the drawing board. We want to be able to deploy your cool stuff into prod, but you need to work with us on this.”

So we eventually had pretty good relationships with most of the engineering teams on the content side… That I hope other folks have. You hope that you have a nice, mutually beneficial relationship with all the people that you’re working with. But the other stuff, like putting tickets in, and requisitioning storage, and dealing with all that nonsense - absolutely not. Overbuilding and wasting so much power and energy for some of that stuff, to have that running?

That’s crazy… So much money…

Yeah, all the cash that went into it… It was of its time, and I like the cloud much better.

That’s wild to me, that people don’t want to have a good relationship with the engineering team and the SRE team. You need them. That’s like when people talk crap about QA, and I’m like “You’d better be nice to those people.”

It’s all symbiotic. You all rely on each other.

Not just that, but we’re all in the same struggle.

Yeah. Everybody gets paid out of the same success, right? You all got to do it. So…

Like, your life’s gonna suck if their life sucks, so why don’t you just work together…? That’s crazy.

Mandi, this has been great. Thank you so much for coming on and talking to us about –

It was super-fun. Absolutely.

Where can people find you online? If they want to reach out and say “Hey, by the way, my AOL is still down…”

Oh, yeah, I can’t help you there… [laughter] Most of the time these days I’m on Bluesky. So I’m lnxchk on Bluesky. You can also find me on LinkedIn, just as /in/mandiwalls. And I’m in the HangOps chat, if folks out there on HangOps are hanging out in there, on HangOps…

I forgot about HangOps. I used to do HangOps all the time. That was great. Yeah.

Yeah, HangOps is still a busy Slack.

Is that a Discord? Where’s HangOps?

That’s a Slack.

Oh, there’s a Slack…?!

Yeah, come join us on HangOps.

Yup. The HangOps Slack.

Okay, I have to go join that now.

Thank you so much, Mandi.

Alright, thanks so much.

Break: [01:00:22.02]

Thank you so much, Mandi, for coming on the show. We would love to talk to you again in the future about a lot of other things… Hopefully everyone enjoyed that. Also, if anyone out there listening used to run infrastructure, especially in the ‘90s or early 2000s, we would love to talk to you for more of these retro episodes. We’ve got at least one more lined up… And I love talking about this stuff, just because it was so different, and people forget what it was like.

It’s talking about your childhood.

Yeah. I mean, there’s some nostalgia to it, and then there’s some of just like “I don’t want to ever do that again.” So email us, shipit [at] changelog.

It’s also really cool to see how far things have come. The industry has really kind of gone through an evolution. It’s amazing.

Things have changed a lot in the last 20 years, and I wonder what the next 20 will look like.

Well, it’s interesting, with all the use of AI, and all the things that people are – you know, the different infrastructure that people use. I feel like when I first got into tech, CI/CD and blue/green pipelines were the new cool thing, and now they’re like the old thing. All of a sudden I’m like –

If you’re not doing that… Yeah.

Yeah. I’m like “Whoa… How did we get here?”

Yeah. So I like the looking back and just seeing how things were… So feel free to reach out if anyone else wants to talk about it.

Also, can we have any excuse to talk to Mandi again? We should just make stuff up to talk to her again.

She also has a podcast, so people should – I’m gonna drop that in the show notes too, because people should go check that out. It’s part of the PagerDuty podcast; they have a lot of different hosts, but…

I want to listen to it. Me and Mandi have to be besties after this.

So for today’s outro I have a fun game that we’re going to play again…

I’m slightly scared.

Yeah, you might want to be. This one, I don’t have a – there’s no good name for it… So it’s just like an acronym –

I’m really sad that you don’t have an acronym for this.

Well, it is an acronym, but I couldn’t make it spell something. So the letters are JDCO, but that didn’t spell anything, so I was like DOJC? I don’t know. So we’re just gonna go with whatever we want.

I’m disappointed that this isn’t some weird name, but okay.

Yeah. So it stands for Java, Data, Cloud or Other. And those are your multiple-choice questions here for projects we’re going to talk about. And all of these projects are part of the Apache Foundation. And so I was scrolling through the Apache Foundation, and I was just like “They have a lot of projects.” Almost all of them have to do with either Java, data, cloud, or other.

All the things I love.

So I was like “This might be a good one”, Autumn. So there’s some that might be kind of obvious, and you’re gonna pick one of Java, Data, Cloud or Other. So Apache Cassandra… Which category does that fall under?

It’s a database, but it’s also built in Java, and it’s got an Apache license.

Right. And all of these would be like Apache, Name, something, and they all have Apache licenses. The vast majority of them are written in Java… And so this one would include databases and data processing of some sort. So Hadoop is another one… Hadoop is data processing. So –

Also a lot of streaming in different ways, stuff like that.

Yeah, there’s a lot of that in here, and this is why I wanted to talk about it and see what we think they are.

I’m terrible at remembering names though, so…

Most of these I did not even remember what they did. I knew the names of them, and I’m just like “Where would that fit, if I was guessing this?” So this is kind of for the audience to learn a little bit about just what projects exist, and kind of where they fall. So CouchDB… That’s another one that you probably know.

That’s a database.

And I didn’t even know that was an Apache project. I honestly did not know.

Yeah. It was weird, at Google Next I saw a whole car wrapped in CouchDB stickers. It was very interesting.

That’s one way to use those conference stickers.

They also gave me popcorn and a donut, so I’m totally their friend now, because there was a donut involved.

How about Apache Ant?

[01:06:11.05] That is, I’m pretty sure, a Java… Nah, is that a framework?

It’s a build tool - a Java build tool. So yeah, it’s a Java thing… Let’s see, how about CloudStack?

I’m gonna go with cloud.

Yes, it’s definitely a cloud – it’s like the Apache version of OpenStack, in many ways.

It does a lot of that self-hosted –

I didn’t know they had an Apache version of that. That’s interesting.

Let’s go with Flink.

It is data - a data processing engine. And it’s kind of like a – I think it does stream processing, if I remember correctly… Guacamole?

Guacamole… Java.

This one would fall under cloud.

Oh, interesting.

It’s an HTML5 remote desktop gateway.

That’s a cool name. Is there a Salsa that goes with it? Are there Chips that go with it? Can you imagine if they had different parts, and one was like Salsa [unintelligible 01:07:05.02]

You’re just building out a menu here.

This is like “Okay, we’re gonna go to the Mexican restaurant…”

Now I’m hungry, darn it…

I used to manage a Virtual Desktop Environment. Guacamole wasn’t part of that, but I knew of Guacamole a long time ago, and I’m like “Oh, this thing is cool. It does Remote Desktop through a browser”, because [unintelligible 01:07:22.11]

Oh, that is cool.

Yeah. And it’s all HTML5.

I might go check that out.

How about log4j?

Oh… Java.

This falls in the CVE category… [laughter] Okay, Apache Brooklyn.

Ooh. Brooklyn. Interesting. Um… Cloud?

This one I’m gonna put under Other… It is a framework for modeling, monitoring…

You didn’t tell me Other was an option.

That’s the O.

Oh. I feel like you just said Java, Database and Cloud.

And there was an Other. Because I have a couple in here that are Other. It is a framework for modeling, monitoring and managing applications through autonomic blueprints. So it’s making blueprints and then stamping out these applications. And I don’t know where it’s used, I don’t know who uses that… If anyone knows, let me know.

I’ve never heard of that.

Yeah, I hadn’t heard of this one. This was kind of a fun one, like “What? What does that do?” Flume.

I think Flume is data, isn’t it?

It is data. It’s a log aggregator.

VCL. Java? I’ve never heard of this before. I’m guessing.

VCL is cloud. It’s another VDI/cloud connection environment. This is specifically for, I think, managing the infrastructure side of it.

Interesting.

But yeah, I think Guacamole is a component in there, but they have this larger platform. More like a XenDesktop.

That’s a missed opportunity. They should have named it Salsa, obviously.

Probably, yeah. This one made me mad…

Oh, no…!

This one’s called YuniKorn.

I don’t know, but it better be fabulous, because they named it YuniKorn. It better not suck. I don’t know. How old is it?

It’s pretty new. It’s new-er…

Cloud?

I would put this under the cloud category. It’s a scheduler for Kubernetes. The description was “Standalone Resource scheduler responsible for scheduling –”

Kubernetes gets all the cute stuff. Kubernetes and Salesforce get all the cute stuff. There’s never cute stuff for Java. It’s so annoying. That’s it, I’m gonna start – and then what’s that new Kubernetes adorable thingy, and it’s like all cute? I’m gonna start reading Kubernetes, dang it.

Phippy? Do you mean Phippy, the characters?

No. The new one, that they just released. It’s like [unintelligible 01:09:38.14]

Oh, [unintelligible 01:09:39.05] I can’t say it. But cute, yeah.

They always get all the cute stuff. You have all the funnest developer advocates… JavaScript and Kubernetes get all the good stuff, and it’s not fair.

You can just join the club. It’s fun.

Dang it. Now I’ve gotta go learn how to run Kubernetes…

But YuniKorn is mainly – they say it’s scheduling batch jobs, long-running services and large-scale distributed systems. And I’m like “That’s pretty much all of the things.” So I don’t know what the difference is… But on their website it also said it focuses mostly on ML stuff. So I think it’s ML/batch…

Interesting.

Yeah. Pig. Apache Pig.

I don’t know… Data?

Yup. Good job. A platform for analyzing large datasets on Hadoop.

Interesting. I don’t know – like, Hadoop is elephants? Or Pig… I don’t know. Roller.

Cloud?

This is Other… It’s a blog platform, all written in Java, and it ties into Maven. I’ve never heard of it before in my life, and I was like “Alright, fine…” And the very last one, let’s go with – this one’s called NuttX.

What?! [laughter]

Someone lost the naming battle on that one. NuttX.

Why do you set me up for these things?! Data…? I don’t know…

It’s a real-time operating system for embedded systems, with POSIX/Linux-like APIs. So it’s in Other. But yeah, I’ve never heard of it before in my life.

You can tell a dude named this. Like, why?

Like “NuttX.” Okay, there we go. And now it’s an Apache project.

So thank you everyone for listening to this episode. Thanks again, Autumn and Mandi, for coming on and joining and talking about AOL chat rooms… And we will talk to you all next week.

See you, guys.

Changelog

Our transcripts are open source on GitHub. Improvements are welcome. 💚
