The Changelog – Episode #228

Servo and Rust

with Jack Moffitt


Jack Moffitt joined the show to talk about Servo, an experimental web browser layout engine. We talked about what the Servo project aims to achieve, six areas of performance, and what makes Rust a good fit for this effort.

Sponsors

Code School – Learn for free this weekend (November 18-20). All Code School courses and screencasts are FREE for everyone this weekend ONLY!

Hacker Paradise – Do you want to spend a month in South America, expenses paid, working on open source? We teamed up with Hacker Paradise to offer two Open Source Fellowships for a month on one of their upcoming trips to either Argentina or Peru.

GoCD – GoCD is an on-premise open source continuous delivery server created by ThoughtWorks that lets you automate and streamline your build-test-release cycle for reliable, continuous delivery of your product.

Transcript

Our transcripts are open source on GitHub. Improvements are welcome. πŸ’š

Adam Stacoviak

Welcome back everyone, this is the Changelog and I'm your host, Adam Stacoviak. This is episode 228, and today Jerod went solo talking to Jack Moffitt about Servo, an experimental web browser layout engine. We talked about what the project aims to achieve, areas of performance, and what makes Rust a good fit for this effort.

We have three sponsors - Code School, Hacker Paradise and GoCD.

Break

[00:00:39.12]

Alright, welcome back everyone. We have a big show today, a show that our listeners and our members specifically have been asking about. They've been saying, "Give us more Rust, give us some Mozilla" and what came of that is a show about the ambitious browser engine project from Mozilla called Servo. Servo is a huge project - 597 contributors. Lots of people involved, so we thought "Who do you even talk to about this?" We asked Steve Klabnik, a friend of the show, "Who would be a great person to have on?" and he said "You've gotta talk to Jack Moffitt." So that's what we're doing, we have Jack Moffitt here today. Jack, thanks so much for joining us on the Changelog.

Hi everyone, happy to be here.

We have a lot to talk about, Jack, but what we like to do to kick off the show is to get to know our guests just a little bit better. We find hacker origin stories are inspiring and sometimes interesting and insightful. You have quite a history. I'm looking at your Wikipedia and Servo is not your first rodeo. You've been involved in XMPP, Erlang - maybe not working on the language of Erlang, but using Erlang. Icecast, which I think we might even be using for our live stream still today, and lots of other projects. Can you give us a little bit of your origin story?

[00:03:51.14] Sure. It sort of starts with Icecast. Icecast was the first open source project I worked on. I was going to school at SMU. SMU had lost their FCC license several times, so the student radio station only played in one building. And I thought, you know, all of the dorms have Ethernet jacks in them, and we should be able to get this radio station to everyone, but no one wanted to pay for the RealNetworks products at the time - they were pretty expensive - so I started working on one, a streaming media server, along with a couple other people. It sort of grew from there.

That project started collecting contributors and got more complicated. As part of that, I joined a startup that was doing internet radio. That startup ran into issues around MP3 royalties at the time; the patent owners wanted to charge for the actual streaming of MP3 audio, not just the encoders and the decoders. So I started looking at how we were gonna solve this problem. We needed a royalty-free codec, so at that point I met Christopher Montgomery of Xiph.org, who was working on Ogg Vorbis at the time. We started paying him to finish that off full-time, and then I helped found the Xiph.org Foundation. After that work was ready to ship, it sort of went from there.

I've been quite involved in patent-free audio and video codecs. Even today, there's a project at Mozilla called Daala which is doing the same thing for video with many of the same people, including Monty on board. People also ended up at Mozilla, independently of me.

From there, I did a bunch of startups and various things, always keeping sort of an open source bent about it. I did some frontend Javascript work with an online games company that we started to do chess online. We pivoted that into a real-time search engine, which is a story for a whole other podcast...

[laughs]

That got me into Erlang - that's why I did the Erlang stuff for a while, doing a lot of backend infrastructure for massively multiplayer games - a similar game to PokΓ©mon Go called Shadow Cities, and then I ended up at Mozilla working on Servo. So I've been around the block in terms of the kinds of projects I've worked on.

In terms of languages that you've been involved in, it sounds like Javascript, Erlang, Rust, perhaps C and C++... Has that rounded out?

Yeah, mostly C and not so much C++, but otherwise yes.

Okay. Any favorites?

I really like Erlang. Erlang is great. I also have a really soft spot for Clojure. I did use Clojure at a couple of places as well, and both of those languages I feel hit a really nice sweet spot for certain kinds of tasks. I did fall in love with Javascript and fall out of love with Javascript probably several times over the course of my career.

Where do you currently stand? Are you in or out of love at this point?

I'm sort of ambivalent, I guess. I love it as a deployment language. It's supported everywhere, and my ideal goal in life is to make it as fast as possible for web platform developers to make responsive apps and really good apps that equal the quality of native apps.

I'm guessing that perspective probably gives you a very special kind of love/hate relationship with it.

Yeah, it probably is frustrating when you want to make some performance optimization and you can't, because either the semantics of the language or the semantics of the web prevents you, but also it's a fun challenge to figure out what areas can we make performance improvements on and how can we achieve that? There's a lot of competition in this space, particularly with this project, so it's fun to be the underdog and try to win on performance.

Right. And how long have you been with Mozilla?

I've been here for about three and a half years.

Okay. Now, when we talk about a lot of where the people we have on the show are coming from in terms of their background or their experience or what brought them into software, there's a lot of people that have a video games interest, there's others who have language interests, or mechanical or hardware interests... We all kind of end up in this software space. I read something about you that I thought "Maybe this has something to do with your interest in programming", but maybe it came afterwards. Tell me about Lousy Robot - what's this?

[00:08:13.28] Lousy Robot is a band that I joined right after I graduated college. I dropped out of college and did a startup in San Francisco - the traditional hacker thing to do, I guess. Then later on I went back and finished. When I finished, I remember thinking "You know, I've always wanted to be in a band. There's no reason I shouldn't." So I went online on Craigslist and found some people looking for band members and found these guys called Lousy Robot that had an indie pop band here in Albuquerque. I really liked their music, and thankfully they really liked me, so I started hanging out with them, going to practices, and played my first show on the stage...

Awesome.

I did that for several years, actually. We did a couple small tours in the Southwest area...

So you played keyboard for them, it says. Did you have a lot of keyboard experience prior to this, or you just decided "I'm gonna learn it and I'm gonna do it."

I played the piano when I was a kid, but I was always more interested in sound design type stuff. I got a MIDI keyboard when I was in high-school I think, and I started programming things for the Gravis Ultrasound, if anybody remembers those awesome soundcards... Writing my own mod tracker, and sound effects and stuff like that. So I always had this sort of music hobby going in the background. I've never been able to do as much with the programming side of that as I've always wanted to, but yeah, it's definitely been a fascination of mine for a long time.

I love that you just decided, "I'm gonna find a band", and you find one called Lousy Robot, which by the way, is a spectacular band name. You thought, "I'm gonna go be part of this band" and you just kind of got that done. That seems ambitious.

Yeah, I mean... You can't sit around waiting for things to happen. You've gotta go after the things that you enjoy doing. Most of my career I've worked remotely for the companies either I've started, or when I've worked for others. So it also fulfilled sort of a social need that I had, being trapped in my house all day... Well, it's not really trapped, but being in the house all day and not having much in-person interaction with the outside world means that hobbies like that are really helpful. I can sort of get the social needs I have satisfied; even if I can't get them satisfied at work, I can get them satisfied through hobbies.

Actually, that's one of the reasons I began podcasting and got involved with the Changelog. It was the same reason - I work remotely, I'm kind of a hired-gun contract developer. I used to be in my basement, coding all day, and now I'm in an office above the garage coding all day. But I'm just very isolated, and I'm living out here kind of in the suburbs of a small town [unintelligible 00:10:55.09] Nebraska, and I just wanted some social interaction with people that had similar interests and people that were smarter than me, so podcasting was a natural fit. It sounds like I also could have sought out some electronic bands and tried that route, but that sounds probably harder than just hopping on the microphone and talking to people.

It could be. I mean, it can be. Getting up on stage and performing for a bunch of people is definitely an interesting experience. I recommend everyone try it.

Is that something that you missed?

Yeah, I sort of miss it... Not so much that I wanna be up in front of a bunch of people necessarily, but it gets the adrenaline going very specifically, and that's a pretty good feeling. It always felt good after a show, especially if you had a decent size audience and they were really into it. There was just a lot of nice energy in the room, it always left you feeling good.

Well, if you ever wanna consider podcasting, I have a great name for a podcast all about the Rust programming language. I won't say it here on the air because someone will steal it, but I have a great name for you. We can talk offline about that.

[00:12:01.02] Let's talk about Servo. This is an ambitious project, like I said in the intro, from Mozilla. It also has a Samsung angle, which I didn't realize before doing a little bit of background on this. Samsung is involved... But let's take it from your angle, Jack. Tell me about the beginning of Servo and Jack Moffitt - how did it start and how did you start being involved with it? Give us that from your perspective.

So I'd have to say it started with the Rust programming language. I've been very interested in different programming languages for a long time, and over my career there are several that I've managed to use professionally. I went to the Emerging Languages Workshop at Strange Loop back in 2012. Dave Herman gave a talk there - he's also at Mozilla Research - on the Rust programming language. There was a whole bunch of people presenting their own programming languages, and Dave Herman and Niko were both there, talking about Rust. I had heard about Rust - it was sort of in this pool of languages that were sort of systems-y, they were sort of emerging.

I hadn't thought that much of it at the time, and when I heard Dave describe the different kinds of memory usage in Rust - back then we used to have these [unintelligible 00:13:18.12] for shared pointers, owned pointers and things like that, and it was a lot more complicated syntactically, but all those concepts really meshed well with the Erlang knowledge that I had at the time. Erlang uses message passing as sort of its main concurrency primitive, and one of the downsides of using message passing is that you're copying data all over the place. Whenever you send a message in Erlang, it's gonna copy it and send it to the other Erlang process, and that can manipulate it from there. And Rust has this really nice thing that falls out of ownership, which is since you know that you're the only owner of a certain pointer, when you pass it in a message to another Rust thread, it can just effectively give you access to the pointer now and pass the ownership along with it, so no data is actually copied. So you get all of the beautiful semantics of Erlang message passing, but you get it in a wonderfully fast implementation, and it involves no data copying.
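The ownership hand-off Jack describes is easy to see in modern Rust (the sigil-era syntax he mentions is long gone). In this sketch, sending an owned buffer through a channel moves it to a worker thread: only the Vec's small header crosses the channel, not the bytes behind it, and the sender loses access at compile time. The function name is illustrative, not anything from Servo:

```rust
use std::sync::mpsc;
use std::thread;

// Send an owned buffer to a worker thread and get its length back.
// Ownership of `data` moves through the channel; only the Vec's small
// (pointer, length, capacity) header is transferred, never the bytes.
fn send_to_worker(data: Vec<u8>) -> usize {
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || {
        let received: Vec<u8> = rx.recv().unwrap();
        received.len()
    });
    tx.send(data).unwrap();
    // After `send`, this thread can no longer touch `data`;
    // using `data.len()` here would be a compile-time error, not a data race.
    worker.join().unwrap()
}

fn main() {
    let buffer = vec![0u8; 1_000_000];
    assert_eq!(send_to_worker(buffer), 1_000_000);
}
```

This is the "Erlang message-passing semantics without the copy" point: the receiver gets exclusive ownership, so no runtime copying or locking is needed.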

That really intrigued me, so then I started looking more into it and got pretty interested. Then I noticed they had a job opening for basically what I claimed at the time was the first professional Rust programmer, leading the Servo project.

I like that.

I hopped right on that...

This just sounds like Lousy Robot all over again. You're like, "You know what? I like Rust... With Lousy Robot, I wanted to be part of a band. I like electronic music, I can play the keyboard a little bit, I'm gonna get involved with these guys." With Servo or with Mozilla, it was "Rust is interesting. Here's an opportunity to be a Rust developer, the first professional Rust developer. I'm gonna go get that job." Is that the gist of it, or is that an unfair characterization?

No, I think that's more or less the gist of it. People talk about opportunity knocking, but I think that you can't do much when opportunity knocks if you're not prepared, and also if you don't build a bunch of doors for it to knock on.

Right.

So I've always spent my career trying to keep my eye on what's coming, what's happening, what are the opportunities around, so that when something was interesting, everything is already lined up to sort of make it happen.

Interesting. Let's bookmark that maybe for the end of this show. I'd like you to perhaps try to cast forward and see where's opportunity gonna knock for young developers the next few years. But I don't wanna take us too far upstream from the main topic, which is Servo.

I've said it's ambitious, but I haven't said exactly what it is... Sometimes we make this mistake of diving in too deep on our show, and one time we got to the very end and realized, "I don't think we ever clearly stated what the project is in layman's terms, so that we could all be on the same page." Give us Servo in a nutshell. What is it and what are its goals?

[00:16:08.10] Servo is a project to create a new browser engine that makes a generational leap in both performance and robustness. There's two sides of this. One is browsers as they exist today are largely built on this architecture developed decades ago, where CPUs only had one core, the memory was perhaps more constrained, we didn't have GPUs... So the kinds of computers that web browsers ran on back then were really different. At the same time, the kinds of web pages that existed back then were also extremely different. They were not dynamic, they had very simple styling... You basically had all the semantics of the styling in the tag names, and there were some differences between browsers; then we got CSS, we got Javascript, we got dynamic HTTP requests and things like that. These days, lots of web pages are basically on par with native applications in terms of the complexity and the stuff that they're doing, but the browser architecture is still written for these documents. There've been tons of changes inside the JS engine, but overall the architecture has been slow to move.

Right.

On the other side, on the robustness side - basically browsers have become so important and so ubiquitous that they've become huge targets for security exploits. There's lots of private data going through them... Pretty much everything I do online goes through my browser, so you could find a huge amount of data about me if you can get access to that. They're also on every computer, so if you can get root access to the machine somehow through the web browser, you can effectively control armies of machines... So they've become very important in a security context, but they also have a very poor track record here.

C++, which all of the engines besides Servo are really written in, just lets you do anything you want with memory, at any time. People think they're really smart and really careful, and yet we still find new vulnerabilities in pretty much every piece of C and C++ code every day. They're getting better, but there's only so much you can do.

The idea was "How can we attack these two problems?" We knew that in order to take advantage of modern hardware we were gonna need to do parallelism, and we wanted to somehow solve the safety issues with C++ for parallelism. Because one of the reasons you don't see more parallel code written in Firefox or Chrome is how incredibly difficult it is to write parallel code when you have free access to memory.

So Rust and the Servo project were tightly intertwined, at least at their origin and trying to solve this problem.

Right. So when I said "ambitious"... This thing began late 2012, early 2013 at Mozilla, and today - let's just call it the end of 2016 - we are in a pre-alpha developer preview. You've all been working on this for a long time. You've come a long way, but it seems like there's still a long way to go. Is this just a huge undertaking in scope?

It is. The web platform is very large. There are lots of complex features that will interact with each other, especially in web page layout, but also just the sheer number of Javascript APIs is staggering, and more are being added all the time. In fact, there's not even enough people on my team to really keep track of all the new changes, just to specifications and stuff, as opposed to working on all the things that have been specified and developed over the last couple of decades. So it is enormous in scope, and a large part of the challenge is how do we attack this problem in such a way that it can be obvious that we're making progress to the people with the money, and also to the outside world, so that they can keep interested.

[00:20:14.10] Yeah, because you definitely have the interest of the developer community; the question is how long you can maintain that interest until people start calling things vaporware or other such things. So real quick - we're hitting up against our first break, but let's lay out just the understanding of the team. I keep saying "almost 600 contributors." Surely, those aren't all core team members. Give us the layout of the project in terms of who's working on what; at least the size, so we can see the scope of you and your team and the effort both at Mozilla and perhaps at Samsung if you have insight on that too.

We do have a small core team; there's four of us on there right now: Lars Bergstrom, myself, Josh Matthews and Patrick Walton. Then there's a number of people who have reviewer privileges; those are the wider team. These are the people who can approve code to be checked into the repository. That sort of access is relatively easy to get for anyone who's making regular contributions. Then we just have a ton of people showing up either with a Javascript API that their application uses and they want supported, or maybe they're just interested in Rust or web browsers and wanna know how they work; we just get a ton of people coming and showing up and wanting to know how to contribute, so we've developed a lot of strategies to help them.

There's this big community of hundreds of people who are hacking on Servo. In terms of its relation to Mozilla, there's about a dozen people employed full-time to hack on Servo. The project itself is meant to be a community project, not owned by Mozilla. We have plenty of reviewers who are unaffiliated with any company, we have reviewers who are affiliated with other companies, and that probably brings us to Samsung.

Samsung was sort of very interested in this work early on and had some engineers working on it for a while back in 2013-2014. I think at the height they had over a dozen engineers hacking on it. The idea for them was basically modern mobile hardware like phones and stuff have a very similar architecture to modern CPU hardware - they have GPUs, they have multiple cores, they maybe have different kinds of cores in different configurations, and they were making a big bet (and they still are making a big bet) on Tizen and having application developers develop for smart TVs and mobile phones and things like that using the platform.

They've been doing this for a while. Tizen is a thing that already exists and it uses Blink as its engine, and WebKit before that. They're running into all kinds of performance problems that the Gecko (Mozilla Firefox) developers are also running into, so they were very interested in what could be done about this problem, and how can we take advantage of modern hardware, how can we make this code safer? I think for them a large part of the argument is that the access to the Javascript development community is huge. Not having to support some arbitrary toolkit - not necessarily proprietary, but just not one of the standard native application toolkits - and being able to just use the web platform gives you access to a huge number of developers that you don't get pretty much any other way. I think that was a lot of their motivation.

They have since sort of shifted their focus, so there's not very much active involvement from Samsung at the moment, although that could change anytime.

It sounds like maybe time that somebody goes and updates that Wikipedia article.

[00:23:53.14] Could be... [laughter] I think I'm firstly not allowed to touch the Wikipedia articles about the projects or myself, but...

Right, right... Well, we could use this as a secondary source, or something. No. I love Samsung. I'm actually surprised... I was at OSCON London recently and I met some people from Samsung doing cool open source work; something that I was unaware of is how much they are invested in the open source community, which is awesome. We love companies that put their money where their source is, so that's very cool. Shout out to Samsung for that.

Let's take our first break. When we get back, Jack - you mentioned these two big goals: performance and robustness, and how Rust playing in nicely to that. I wanna dig down deeper on those two things. I know you have six areas of performance that we're gonna talk about, so let's pause here and we'll get into performance and robustness on the other side of this break.

Break

[00:24:45.07]

Alright, we are back with Jack Moffitt talking about Servo and Rust, performance and robustness. I just had a thought about something you mentioned a few minutes back, Jack - Rust and Servo kind of growing up together as technologies. That sounds really great, especially if you have people on both teams that are working together, or perhaps the same person on both teams. But it also seems like it makes Servo an even more difficult project, because your underpinnings are such a moving target. Has that been a struggle for you guys, as you move along and Rust changes underneath your feet?

It certainly was a struggle back when I started. My first day on the job at Mozilla, Servo did not compile, and there was no easy way to get it to compile. They were using sort of a pinned version of Rust, but there was no documentation or infrastructure or automation around which Rust version Servo was pinned to; it just sort of happened to be the one that was on somebody's machine, and whenever they happened to upgrade Rust to another version they would also make changes to Servo and then commit those. So I started in this sort of chaos land of Servo doesn't compile, and on top of that - maybe a lot of developers haven't experienced this, but when you can't trust your compiler, that is an interesting situation.

You try to compile it, and the compiler segfaults - what do you do there? [laughter] So I spent probably the first week and a half just updating Servo to the current version of Rust, which was kind of an ordeal because they had a deprecation policy back then where if they were working on a feature and it didn't pan out, they would sort of deprecate it in the next release, and then in the release after that it would get deleted. So a lot of the work on Servo happened in Rust 0.4, and then I started right when 0.6 came out, so tons of these features that Servo had been using just didn't exist anymore.

[00:27:59.17] Coming on my first day on the job, it was like "Okay, so what does this feature do? Oh, what did it do, so I know how to replace it?" and the answer was "I don't even remember." [laughs]

So that was sort of a special situation, but it sort of repeated that way until Rust 1.0 came out; there were major breaking language changes all the time. We built infrastructure to pin a specific version of the Rust compiler, and then we would update it at specific times. We would try to keep on top of it, but usually it would be like once a month, or if there was a particularly bad run maybe it would take a couple of months for us to get an update.

Part of the reason for that churn was that when you would update the version of Rust and you would make all the changes in Servo, you would often find that some bug got fixed in the borrow checker, for example, making some code that you wrote before now invalid, and maybe that code didn't have a trivial workaround, like just changing the syntax of some API call. You had to restructure the function, or maybe it turned out that what you were doing was completely illegal and memory unsafe, but the compiler just hadn't caught it before, and now you need to go and rethink some stuff.

You would make these changes, and then you would find new bugs in the Rust compiler. The compiler would segfault, or it would run into some kind of assertion thing that was not in your application, but sort of in the Rust compiler itself. So then you'd say, "Okay, now we'll file the bug against the Rust compiler", the Rust team is super quick and responsive, so they would fix the bug maybe the next day. In the meantime, maybe ten other changes have landed, each with their own bugs, and maybe those also have new breaking syntax changes or something, so in order to get the fix that you wanted, now you've got ten other things that are also going in there. Sometimes this would turn into a vicious cycle where you'd be spending two weeks just trying to upgrade Rust, then doing this... So it was kind of a mess for a while.

When Rust 1.0 came out, this settled down a lot. Now we basically pin the nightly version, or we change it whenever some Rust feature comes along that we need access to. It's generally a part of a day's worth of work for somebody, and not really a big deal.

On the other side of the coin, being the "first professional Rust developer" and being Rust's flagship application at the time, while it had its churn issues, you probably were like the first-class citizens when it came time to influencing the language design or the needs of the language, even bug fixes and stuff like that, because if Servo is halted... I'm sure the Rust team was very interested in keeping you guys moving. Was that the case as well?

Yeah, they gave us a lot of attention. If we found bugs, they would fix them right away. This has gradually tapered off. On the run-up to 1.0 they stopped giving us such preferential treatment. Probably the biggest example of this was the removal of green threads in favor of native threading. Green threads were something that Servo was designed around at the time, and there was no fallback really for it. They just sort of pulled the rug out from under us.

These days, Servo is still the flagship application more or less, but we're not driving Rust development anymore the way that we were back in the early days of Servo. These days it definitely has a life of its own. They definitely take our concerns into account, but largely our concerns are the same concerns that everyone who's using Rust has. For instance, number one on the list is compile performance.

Right.

We get along really well; there are core team members on the Rust team that are also core team members on the Servo team, and it's very nice to have such a good relationship with the compiler team. I think this has resulted in probably more performance than we would otherwise get. If there's some problem that turns out to be a code generation issue in the compiler, we know the guys who can fix that. It turns out to be a pretty nice relationship, even if, I would say selfishly, not all of our needs are at the top of the priority list anymore.

[00:32:11.27] Let's talk about the two aims that you laid out at the beginning for Servo as a rendering engine. Is that the fair thing to call it, a rendering engine? A browser engine? A layout engine?

We've been calling it a web engine these days.

Okay, a web engine. I just wanna use your nomenclature. So performance and robustness - and you touched on why Rust is such a good fit for that in terms of the ownership model and the memory safety guarantees and things like that, especially with regard to robustness, and also you said with the performance of not having to pass around that memory, and getting some things for cheap or free. But you had these six different areas... Like we said, it's ambitious - there's subsystems upon subsystems, and you have six areas of performance optimization or ways that you're going about it. Can you give us some insight into those?

Sure. Well, let me touch on those first two things first. I'll start with robustness, because that comes mostly from Rust, and it probably was well covered when you talked to Steve last time.

The inspiration for this can be sort of summed up with this one example: there's a Javascript API called Web Audio, which allows you to manipulate sound from Javascript applications. When that was implemented in Firefox, it had 34 security-critical bugs filed against it. One of the things we did was sort of look back and see what kinds of problems Rust could have helped solve, instead of just saying, "We think Rust will solve this problem." We can go back and inspect the data and see what it could have solved if that had been written in Rust. So in the case of Web Audio, there were 34 security-critical bugs; all of them were array out of bounds or use-after-free errors, and all of them would have been prevented by the Rust compiler, had that component been written in Rust.

So that's sort of like the quick summary. A hundred percent of the errors in that API would have been caught by the compiler before they shipped. And Web Audio is not a special API; it has no security properties of its own, it's not doing anything really crazy... It's just sort of your run-of-the-mill Javascript API, and that points out just how dangerous C++ is as an implementation language; even this thing that didn't touch anything secure had 34 vulnerabilities where somebody could [unintelligible 00:34:28.20] your machine.
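As a minimal illustration of those two bug classes (this is a sketch of the guarantees, not the Web Audio code itself): Rust bounds-checks all indexing, and the borrow checker rejects freeing a buffer while a reference into it is still alive.

```rust
fn main() {
    let samples = vec![0.25f32, 0.5, 0.75];

    // Array out of bounds: `get` returns None instead of reading past
    // the end of the buffer (and `samples[10]` would panic
    // deterministically rather than silently corrupt memory).
    assert_eq!(samples.get(10), None);

    // Use-after-free: dropping the buffer while a reference to it is
    // still live simply does not compile.
    let first = &samples[0];
    // drop(samples);       // error[E0505]: cannot move out of `samples`
    //                      // because it is borrowed
    assert_eq!(*first, 0.25);
}
```

Both failure modes that accounted for all 34 Web Audio vulnerabilities become either a compile error or a well-defined runtime check.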

Yeah, dramatic change.

Yeah. On the performance side, the intuition is basically if you look at modern web pages... Pinterest is a great example. A Pinterest page has all these cards that are laid out in a staggered grid, and you can imagine that each of those cards could sort of be operated on independently of the others. So that's where you can kind of see where doing layout in parallel might help, because if you look at web pages, they're highly structured. News sites are another good example. They often have lists of articles with a blurb and a picture, and you can just see the same structure repeated over and over and over, and it makes sense that each of those little sub-pieces could be handled independently at the same time as the others.

So those were the two input motivations. I'll talk about some of these... There's basically six of these branches of development that we've been pursuing. The first one I'll talk about is CSS. Servo does parallel CSS styling. It does this in, I would say, a not novel way. The algorithms that existing engines use for CSS styling are largely untouched. The only thing we bring to the table really is using the features of the Rust language to make parallel implementations of those algorithms very easy.

[00:35:51.18] For example, the Servo CSS engine has pretty much all the same optimizations that modern engines have - we copied those optimizations from the Gecko and Blink engineers - but being able to use all of the cores on the machine is a huge win. It turns out that CSS restyling is sort of the best-case parallel algorithm: it scales linearly with the number of cores, so our initial estimates after we wrote the system showed that it was basically four times faster on a four-core machine than stuff running in Gecko or Blink.

That's restyling. The next stage after restyling - so once you compute all the CSS properties and figure out how they cascade and all that kind of thing, then you use those properties plus objects in the DOM, elements from the web page, and you compute where those objects are gonna be and how tall they are and how wide they are.

For this, we actually had to come up with a completely new algorithm based on work that came out of Leo Meyerovich's parallel layout work. He has a couple papers for that, that I think are in the Servo Wiki if anyone's interested. Basically, the problem with the existing engines is that the way they work is you can imagine just like there's a document object model in Javascript, there is a parallel one on the C++ side. So there's an object that's the root of the document, and there's an object for [unintelligible 00:37:15.03] under that, and so on and so forth. So when they call layout, they basically call a function called layout on the root of the tree, and that's it. That function does a bunch of work, and then it calls layout on all of its children, and so on and so forth.

It works its way down, yeah.

And the problem here is that in each of those functions, when it's calculating the layout information, it can look anywhere it wants in the tree. For instance, if I wanna find out what the size of my neighbor is, I can just go read that data directly. If I wanna know how tall my parent or any of my children are, I can just go read that data right out of the tree, and it doesn't necessarily have to be things that are right next to me. I can look way far off in the tree... For instance, if you're in a table, there are things that might be affected by the layout of the table, or some interior thing might be far away in the tree... This is really bad for parallelism, because when you design a parallel algorithm, you have to be very careful about what data is being updated while other things are reading it. If you don't know the pattern of data access in an algorithm, it's very hard to turn that into a parallel algorithm. So your best bet is basically to put locks on everything and then try to make lock contention not a problem, or to get rid of as many locks as you can.

This didn't seem like a promising way to start, so instead we start from a thing that we know can be parallelized, which is tree traversals. It's very easy to do parallel tree traversals. For instance, you have the very first thread start with the root object and then create a job for each of the children it has, and they go off on different threads. Then each of those children creates jobs for their children, and those get scheduled on whichever threads. It's pretty easy to describe, it's easy to reason about, and going from the bottom up is pretty similar - all of the children of a particular node get finished, and once the last child is processed, you can start processing its parent, and so on all the way up the tree.
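The traversal Jack describes can be sketched with scoped threads from Rust's standard library. Note that this `Node` type is made up for illustration, and a real engine would use a work-stealing thread pool rather than spawning a thread per child:

```rust
use std::thread;

// Hypothetical tree node - not Servo's actual flow tree.
struct Node {
    depth: u32,
    children: Vec<Node>,
}

// Top-down parallel traversal: process this node, then hand each child to
// its own thread. `thread::scope` waits for all children before returning,
// which is exactly the join point a bottom-up pass would hang off of.
fn traverse(node: &mut Node, depth: u32) {
    node.depth = depth; // "process" the node, touching only its own data
    thread::scope(|s| {
        for child in node.children.iter_mut() {
            s.spawn(move || traverse(child, depth + 1));
        }
    });
}

fn main() {
    let mut root = Node {
        depth: 99,
        children: vec![
            Node { depth: 99, children: vec![Node { depth: 99, children: vec![] }] },
            Node { depth: 99, children: vec![] },
        ],
    };
    traverse(&mut root, 0);
    assert_eq!(root.children[0].children[0].depth, 2);
}
```

Because each spawned thread gets exclusive `&mut` access to one disjoint subtree, the compiler itself enforces the "don't read your siblings" rule described next.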

If you use that as sort of the constraint that your algorithm has to operate in - and when I say constraint here, I mean the data access pattern you need to make this work is, if I'm going top down, I'm allowed to look at any of my ancestors, but I'm not allowed to look at my siblings or my children, because they might be getting processed on a different thread. My parent already got processed, or I wouldn't be being processed, but all this other stuff could be happening at the same time.

But they may have information that you need, right?

They might, and we'll talk about that in a minute. In the base case, you basically restrict yourself to only being able to read information from things you know can't be written to, so this means basically your ancestors and yourself, and no siblings or children.

It's like a data straitjacket.

[00:40:06.01] Yeah. So you're not able to express all of the layout calculation in just a single tree traversal, so we use several passes of them. A good way to think about it is you go from the bottom of the tree up, and you pass along how big you are - we call it the intrinsic width. Basically, if there's an image with a certain size - of course, that's its intrinsic width - it gets passed up. Then you get to the top of the tree and now you know how wide everything is sort of requested to be, and now you can go through and assign the actual widths to everything. Now that you know what the width of the parent is, which is, say, set by the window size, now you can say "Okay, the thing below it must be this wide, because there's only this much space", and you can go propagating this information all the way to the bottom of the tree.

Then, once you know how wide everything is gonna be, now you can go up the tree and figure out how tall everything is, because if you know the height of yourself, then you're done. If you have the heights of all your children, then you can figure out how tall you are. This is where things like line-breaking text will happen. Then, when you get all the way up to the top of the tree, you're done - now you know how wide everything is and how tall everything is.
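The three passes can be sketched on a toy block-stacking model (a hypothetical `LayoutBox` type, written sequentially here for clarity; the parallel scheduling machinery is omitted):

```rust
// Illustrative only: real layout handles inline text, floats, margins, etc.
struct LayoutBox {
    intrinsic_width: f32, // pass 1 result (bottom-up)
    width: f32,           // pass 2 result (top-down)
    height: f32,          // pass 3 result (bottom-up)
    children: Vec<LayoutBox>,
}

// Pass 1, bottom-up: each box reports how wide it wants to be.
fn intrinsic_widths(b: &mut LayoutBox) -> f32 {
    let widest_child = b.children.iter_mut().map(intrinsic_widths).fold(0.0, f32::max);
    b.intrinsic_width = b.intrinsic_width.max(widest_child);
    b.intrinsic_width
}

// Pass 2, top-down: the parent hands down the space actually available.
fn assign_widths(b: &mut LayoutBox, available: f32) {
    b.width = b.intrinsic_width.min(available);
    for child in b.children.iter_mut() {
        assign_widths(child, b.width);
    }
}

// Pass 3, bottom-up: a box is as tall as its stacked children
// (this is where line breaking of text would happen).
fn assign_heights(b: &mut LayoutBox) -> f32 {
    let stacked: f32 = b.children.iter_mut().map(assign_heights).sum();
    b.height = b.height.max(stacked);
    b.height
}

fn main() {
    let leaf = |w: f32, h: f32| LayoutBox {
        intrinsic_width: w, width: 0.0, height: h, children: vec![],
    };
    let mut root = LayoutBox {
        intrinsic_width: 0.0, width: 0.0, height: 0.0,
        children: vec![leaf(1000.0, 20.0), leaf(300.0, 20.0)],
    };
    intrinsic_widths(&mut root);     // the content wants 1000px
    assign_widths(&mut root, 800.0); // but the window only offers 800px
    assign_heights(&mut root);
    assert_eq!(root.width, 800.0);
    assert_eq!(root.height, 40.0);
}
```

Each pass only reads data already produced by the previous pass or by its own recursion, which is what makes them safe to parallelize.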

This is pretty simple to reason about. You have to divide up the layout work into these three passes. That's not so much of a problem. But then we run into this problem that you mentioned. What if you need to know what your neighbor's doing? This happens with CSS floats. If you float some content in a web page, that means that the layout of the thing next to you is affected by your own layout. For example, when you try to figure out how wide a paragraph of text is gonna be, you need to look at what all of the floats are that your neighbors have, to figure out how wide they are, so you know how wide your text can flow.

This sort of breaks parallelism, because the only way to do this in that sort of constrained problem space is to defer the calculation to higher up in the tree. Basically, if you need to read data from your neighbor, then you just say "Okay, I know I need to do this. I'll delay the calculation until my parent is getting done" and then when the parent is getting done, it can go and read in a bottom-up traversal, it can go and read any of the children's data at once. So you basically have to defer the calculation to one step later, or whatever subtree the constraint [unintelligible 00:42:38.21].

That works fine, but it breaks the parallelism. For that little subtree, now you can't do the things all independently on different threads, you have to do them all in one thread at the same time. So it's not linearly scalable, like restyling is, but you can still get a lot of performance there. Most things turn out to be easily expressible in those constraints; CSS floats is an example of one that is not, although a very popular one.

Well, can we just agree that CSS floats are the worst?

They are complicated.

Think of every web developer on Earth, and then add up all the time that we've collectively spent dinking with floats inside of Web Inspector, and then think about how much wasted time we have there. And then how much time it's causing you guys headaches in terms of parallelizing the layout calculations. Ugh... The worst.

Yeah, it's kind of interesting. I wonder if Servo is as successful as we hope it will be, then you'll have this sort of negative feedback loop for using floats. Because if you use a float in your page, it will lay out slower, because it won't be able to use all of the potential resources of the machine in every case.

[00:43:55.05] A good example here is Wikipedia. Wikipedia has this floated sidebar that basically covers the whole page. So a Wikipedia layout in Servo is like a worst case example. But Wikipedia mobile does not have this. It does the navigation in a different way that doesn't use floats, so the layout performance of Wikipedia mobile is vastly improved compared to the normal desktop Wikipedia case. So it could be that if you use a lot of floats, then you'll just get negative performance feedback, and you'll be like "Why isn't my site as fast as these other sites?" Hopefully it will be well known that floats is one of these problems, and you can sort of fix that in the code and we can all make every page faster.

That'd be awesome, especially if the work you guys are putting into Servo also gets over to Blink and the other engines - just the cross-pollination of that effort. Because then there's even more of a chance that not just Servo-driven browsers but lots of different browsers have this exact same performance penalty for floats, or for whatever happens to be a performance-negative tool we've been given. That would be very influential and awesome. Cool.

Anything else on layouts? It sounds like y'all put a lot of work into that, even describing it to me is a little bit tough.

It's one of the most complicated bits. It's one of the bits we did first, because we knew how hard it was going to be, so we got that out of the way. Of course, we're still adding new layout stuff; it doesn't support every layout feature that the other browsers do yet, but it supports many of them now.

One thing I should add is that after we did those two pieces, that's when we started doing some initial rough benchmarking to see how fast it was, and when we discovered that CSS styling scales linearly. Parallel layout is also a lot faster. It's not linear, but you can expect double the performance, especially on pages that don't have parallelism hazards like floats.

But one of the other ideas we had is "What about power usage?" It's not just performance of wall clock time, it's like "How are we treating the battery? Can we do better there?" So we did some experiments for that. We had an intern over a summer record a bunch of data and do some experiments in this area. The intuition here was, "Well, if we can get done faster than a traditional browser, even if we use all of the cores instead of just one - you can make a case that maybe that uses less power, to only use one of the cores... But if we get done faster, then all the CPUs can go back to idle and therefore can be idle longer than they otherwise would be."

We wanted to see if that intuition was correct, or what other kinds of things might affect battery performance. So what we did is we took a normal MacBook Pro and we turned off the Turbo Boost feature. Turning off Turbo Boost basically reduces your performance by about 30%, but it affects battery life by more than that - you save about 40% of the battery and only lose 30% of your CPU performance. Servo is fast enough that it can make up all of that performance in its parallel algorithms. So the Servo performance is basically unchanged; it's still as fast or faster than a traditional engine, but it uses 40% less power to get there. That was a cool finding. I don't know if this will scale forever or how much there is to gain here, but it definitely seems like the initial experiments prove that there's a lot we can do about power as well. So it's not just about using all of the resources in the world; it turns out that using the architecture the way it's meant to be used can save you a bunch of power.

[00:47:51.19] If you go back to the Samsung example, if they can meet the same performance goals that they have for some product but do it on a generation older CPU, because it is multiple cores, you might be able to save some serious bucks there. So that's about it on the two - parallel style and layout.

Let's tee up a couple more. I may have you pick, since there's lots of these... We wanna talk about the current state and the future; we're hitting our next break, so Jack, pick one more - Webrender, Magic DOM, the constellation... What's the most interesting of all of these performance areas that you can share? Then we'll take a break.

Probably Webrender is the one that people will be most interested in. The idea here is basically if you look at CPU architecture diagrams from two decades ago, there's like one core, some cache, and stuff like that, and now they have multiple cores on them. We sort of laid that out as one of the motivations for Servo itself, but if you look even harder, it turns out now there's GPUs on the chips as well, and those GPUs are getting larger and larger every generation. Now it turns out that Servo isn't even using half the CPU or half of the chip, because while we use all of the cores, more than half the [unintelligible 00:49:03.02] is just graphics processing.

We wanna be able to use the whole chip, but how do we get stuff on the graphics processor? Of course, since it's called "the graphics processor", it makes sense to start with graphics. Current browsers do compositing on the GPU, which basically means they take a lot of the rendered layers - basically pixel buffers of the different layers and just squash them all together, and they can control where they appear relative to each other, which is how you can do stuff like scrolling and some movement animation really fast in modern browsers. In Servo, we wanted all of the painting to move over to the GPU, as well as all of the compositing.

Basically, we launched this project called Webrender, which tried to explore how this could be done. The idea here was that immediate mode APIs are really bad for GPUs. Immediate mode APIs are like "set the pen color to black", "set my border size to 5 and set the fill color to red", "draw a line from this coordinate to this coordinate"... If you do this, the GPU never has enough information to be able to figure out how to order all of the operations such that they're done most efficiently.

For example, if you draw a line with that state, and then you change something, and then the next thing you draw you use the same sort of parameters as the first thing you drew -- well, if you'd done that in a different order where you draw the first and the third thing together, and then drew the second thing, it would be much faster. So really you want to use what we call retained mode graphics on GPUs. This is what modern video games do. The GPU knows the full scene that it's gonna draw, all of the parameters, and it can figure out how best to use its compute resources to do those things.

We realized that web pages themselves are basically their own scene graphs. Once you do the layout, you get what's called a display list, which is sort of all of the things that you need to draw. The idea of Webrender is like if we can come up with a set of display list items that are expressible as GPU operations, then we can just pass the display list off to this shader, and everything happens really fast. The side benefit of doing this is that anything that you move to the GPU is like free performance on the CPU. Now, all of a sudden, if [unintelligible 00:51:16.15] over to the GPU, now we have even more clock cycles on the CPU to do other work, like for instance running Javascript.
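As a rough sketch of the retained-mode idea - the item and batching types here are invented for illustration, and Webrender's real display items and shaders are far more involved - once the whole scene is known up front, primitives can be regrouped so that each kind becomes one batched GPU draw, something an immediate-mode API can never do:

```rust
// Hypothetical display-list primitives produced by layout.
#[derive(Clone, Copy)]
enum DisplayItem {
    Rect { x: f32, y: f32, w: f32, h: f32, color: u32 },
    Text { x: f32, y: f32, glyph_count: u32 },
}

// Retained-mode batching: because the full list is known before any
// drawing starts, items of the same kind can be grouped into one draw
// call each, instead of switching GPU state per item.
fn batch(items: &[DisplayItem]) -> Vec<Vec<DisplayItem>> {
    let (mut rects, mut texts) = (Vec::new(), Vec::new());
    for &item in items {
        match item {
            DisplayItem::Rect { .. } => rects.push(item),
            DisplayItem::Text { .. } => texts.push(item),
        }
    }
    [rects, texts].into_iter().filter(|b| !b.is_empty()).collect()
}

fn main() {
    // Rect, text, rect: an immediate-mode renderer would issue three
    // state changes; the batcher reduces that to two draw calls.
    let display_list = [
        DisplayItem::Rect { x: 0.0, y: 0.0, w: 100.0, h: 50.0, color: 0xFF0000 },
        DisplayItem::Text { x: 10.0, y: 10.0, glyph_count: 12 },
        DisplayItem::Rect { x: 0.0, y: 60.0, w: 100.0, h: 50.0, color: 0x00FF00 },
    ];
    assert_eq!(batch(&display_list).len(), 2);
}
```

On a real page with thousands of items the reordering win compounds, which is where the 60fps-versus-5fps numbers mentioned below come from.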

While Webrender doesn't make the Javascript engine faster, it's not like a new [unintelligible 00:51:32.20], it has the effect of having more CPU cycles for the Javascript engine, so you will see speed ups in other areas as a second order effect.

[00:51:45.22] We prototyped Webrender late last year, we landed it in Servo early this year, and we redesigned it to fix a couple of performance problems that we found right around June of this year. Now it's basically landed in Servo, it's the only renderer that's available in Servo, and it's screaming fast.

Some of the benchmarks that we've shown show things like... We'll run a benchmark in WebKit and in Firefox and in Blink, and you'll see something between two and five frames per second, and in Webrender it's screaming along at 60. That's because of [unintelligible 00:52:23.07] It's able to do it at 250-300 frames per second sometimes, but there's no point. So it does seem to be quite fast.

Now we're just adding more and more features... It's got enough stuff that it supports anything Servo can draw. It doesn't have quite enough stuff to support everything that, say, Firefox can draw, but that will be there in due time, probably pretty shortly.

Nice. Well, let's take this next break. Up next - Servo. The state of the project, the future and how you can get involved. Stay tuned for that, and we'll be right back.

Break

[00:53:00.20]

Alright, we are back. Before the break, Jack, we were talking about all these different ways that your team is squeezing all the performance you possibly can out of Servo - the parallel layout, the parallel styling, Webrender, using the GPU for things, and there's other stuff that we didn't have time to talk about. All of these efforts, and it sounds like you guys have made huge strides especially around the parallel layout and the work done there. This begs the question, how fast is it? You gave us the idea with Webrender where it was rendering on the GPU at 60 frames/second, but what about the big picture? Swap out Gecko and swap in Servo, assuming there's feature parity at some point. What's the win?

I'll talk a little bit about the qualitative win and not so much the quantitative at first. The qualitative win is pages should get more responsive. By getting all of the stuff done in parallel, we can return to running Javascript more quickly, which means your app - the time between you clicking a button or triggering an animation or something like that, and you running the next line of code or the next event in your event queue - is much faster. You see this already with Servo in things like animations, where animations in Servo will be silky smooth, where they might struggle in other browsers.

The way that you'll see this is you'll get dropped frames, so that the animation will sort of stutter, or scrolling performance won't feel magical. Another example is when you do touch scrolling on a mobile device, the time between you start the up swipe and the display actually moving in some browsers can be pretty slow, whereas on iOS devices they're always showing this beautiful scrolling, where it feels like the thing is moving under your finger. That's what we're trying to get to - the really fast and responsive user interactivity stuff.

[00:55:53.06] The other thing there - and this is a little more nebulous to describe - is that with every major performance improvement, web developers have been super creative in finding ways to make the most of it. The same way that when new GPUs come out, of course all of the existing games run faster, but it takes a little while before people figure out how to fully exploit that hardware and do even more unique or crazy things with it. So I'm hoping that Servo will sort of enable a bunch of things that we don't quite know what they'll be yet, in this new world where apps are much faster.

On the quantitative side, this is an extremely complicated thing to measure. I can give you benchmarks for individual pieces - those are pretty easy to benchmark in isolation; it's less easy to compare them with existing browsers, although we've done some of that as well. But in terms of holistic system performance, what can you expect? I will say that we do - this is sort of a qualitative way to address it, but we do want the user to feel like there is a major difference just from using the browser and how fast it is, in sort of a similar way to when Chrome first launched, how people were impressed with how different it felt and how responsive it felt. We're hoping to have another one of those moments, but maybe even a bigger one of those than people have seen before.

There is a way that we can try to answer this question. There is a new proposal by some people at Google called progressive web metrics. The idea here is to develop metrics that measure things that users perceive. A couple of these are time to interactivity - this measure is like "How long did it take from when I hit enter in the URL bar to me being able to meaningfully interact with the app?" There's sort of a crazy technical definition of what this actually means - that I'll spare you - but this is a metric that if you improve this, it will meaningfully improve the lives of users. There's a couple other of these, and that's how I suspect we will measure these performance improvements in Servo compared to other engines, and also how other engines will sort of try to measure their progress in a similar direction.

One nice thing about this idea - these progressive web metrics - is Google wants to make them available to web authors. I think the way that [unintelligible 00:58:23.29] is they fire as events. You know how there's "document onload" and "document ready", or "DOM ready" - these will be new events that would fire. Time to interactivity would fire when the page becomes interactive, so you as a web developer would be able to track these metrics for your own applications, and use them to make your applications more interactive and better. Also, browser developers can use them to improve their side as well.

I think that is where we want to get to. We want to get to a sort of a meaningful set of user relevant metrics that all of the browsers measure and publish and can be compared by web developers. I don't have any results... We don't have progressive web metrics in Servo currently, but we're expecting to add them soon. I don't have the numbers yet for the holistic system performance, but that is how I think we will get them, and we do expect to make improvements there.

The quantitative metrics that we do have are things like existing, known benchmarks like Dromaeo - we've run Dromaeo for DOM performance; we can run things like SunSpider and all of these Javascript benchmarks, although they aren't very interesting for Servo, because we're using the same Javascript engine as Gecko there. For any individual benchmark we can run, whether the performance work we've done in Servo affects that benchmark enough to make a difference - you don't know until you try it. The reason there are some discrepancies is that we tried to tackle things like parallel layout - really hard problems where we knew we were gonna have to invent new technologies or algorithms or something in order to solve them - but we haven't spent that much time on things that have known solutions and are just missing pieces. We know exactly how we're gonna attack those, and it's gonna be exactly like it is in Blink or Gecko. For instance, the network cache.

[01:00:17.00] There's not really anything Rust is gonna add to how you design a network cache; other than the safety side of it, there's not really any performance wins to really be had there that are gonna be really user noticeable. Servo doesn't really have one of these, and of course, that makes everything feel really slow when it's fetching stuff from the network every time.

How sensitive benchmarks are is sort of a function of the individual benchmark. Sometimes they run across these things in Servo that aren't really optimized yet - because we sort of know how to do it, it's not high priority - versus things that measure stuff that we've made direct improvements on.

Let's talk about timing - the age-old question of when things are going to ship. Every software engineer's favorite question is "When is it gonna be available?" Y'all have a pretty good public roadmap - we'll link that up in the show notes to this episode; it's on the GitHub Wiki for Servo. So you have plans, you have a roadmap laid out, and you're making huge progress in many areas.

This has been a three, four-year project - undoubtedly, at least you and your team, you guys are probably super ready to get this into the hands of users and not just developer previews. What's the roadmap look like, and the timing? How are you guys gonna roll this out over the next year or so?

This has been a constant struggle. We've basically started with a project that not only is it a rewrite, but in order to rewrite that, we rewrote C++ in addition.

[laughs] No big deal.

If all rewrites are [unintelligible 01:01:47.02], then surely the rabbit hole of rewrites is gonna be [unintelligible 01:01:51.17] failure. We wanna make sure that these projects aren't failures; I think Rust has been over that hump for quite a while. Servo, I'm hoping, is over that hump, but it depends on what people think. In order to do this, we need to string together a series of enhancements that people can notice, see for themselves, and things like that. We don't wanna just sit in a room for ten years, saying we're working on making the web two times as fast, and then you don't get to find out whether we succeeded until ten years from now, and the whole while you have to keep investing mindshare - or in Mozilla's case, money - until you get the result. We wanna get the result as incrementally as we can, for all those reasons.

We've sort of struggled with this in Servo, because the web is so big. Since we've started the project, there's probably like a year's worth of work that's been added to the platform that we haven't even gotten to. However many man years of work we had when we started, there's probably n+1 every year added to that.

One of the ways that we thought about doing this is by making parts of the engine compelling enough that certain types of applications might benefit from them, even if they don't have access to the full platform. One way to imagine this is if you're a web content author and you're making a mobile app and you're using web technologies, since you control the content of the site, you can avoid using features that Servo doesn't support yet, but you can still take advantage of the performance features that we do have to offer.

We've been sort of looking around for partners who have the ability to do this and want to move forward. We haven't had a whole lot of takers yet, although that's the style that our collaboration with Samsung was in as well. That's one way.

[01:03:54.06] The other way we can get this to new users is just make a browser people can use and iterate on it from there, although the amount of stuff you need to get to that point is quite large. We did release a Servo Nightly at the end of June, which has a bunch of the functionality that you expect from a browser, like a URL bar and multiple tabs, and the ability to navigate in history, switch between tabs and things like that. So we're starting to get to a point where end users - or probably web developers will be the most likely target - can download a Servo, give it a spin, see how it works, play with some of their content in it; hopefully, they'll find some missing piece and want to contribute to the project and help make the world better, or give us feedback about things that are broken and that are important to them, or just keep an eye on how it's going and give us feedback if our performance wins are actually something that they experience meaningfully.

Then the final long-term goal is "How do we get this shipping as a real browser to hundreds of millions of users?" That's always been the long-term goal of the Servo project, but it's unclear how to get there. Tomorrow - it will already have happened for your listeners - Mozilla is announcing their new Quantum project, which is basically getting huge performance wins out of a next generation browser engine. As you can imagine, a key part of this new project is taking pieces of Servo and putting them into this project. They're gonna take the Gecko engine and basically rip out style and rendering, and put in Servo's parallel styling code and the Webrender code... There's some other stuff they're doing on the DOM side that isn't related to the Servo project as well in there, but a huge piece of this is taking technology that we've developed in Servo and getting it into a production web engine. Even though the whole Servo isn't ready, we can at least take these individual pieces and start giving people some incremental improvements in the existing web engines.

Well, that's exciting.

Yeah, it's gonna be pretty good. Like I said, on the styling side it scales linearly, so the number of cores is directly correlated to how much benefit you get. With telemetry from our existing user population in Firefox we can see that at least 50% of the population has two cores, which means that style performance will basically double for all those people. I think 25% of people - I don't have the number right in front of me - have four cores, so they can expect four times performance improvement in that subsystem.

So you might ask - back to your holistic performance question - "Is anyone gonna notice if styling performance is faster?" I think the answer will be yes, for a couple of reasons. One is that there are a bunch of pages on the web that do take a long time to style. For example, one that might be relevant to your audience is the HTML5 specification - the single-page edition takes multiple seconds to render in Firefox; it takes about 1.2 seconds just to do the style calculation. In Servo, that is down now to 300 milliseconds. So you're going from something that takes multiple seconds to something that takes 300 milliseconds, and of course the style calculation is something like a third of the total page load time, so we're talking about taking almost a full second off of the page load time. That page is probably an outlier in terms of page size, but it's a real performance improvement people will probably notice.

The second way I think people will notice this is in interactive pages, where you're interacting with an application and the Javascript code is making lots of changes to the DOM, and then layout is running again. Each time that cycle happens, you have to do restyling. Making that faster will mean the engine spends less time in that stage, and it gets back to running your application code. I think people will notice a responsiveness increase for especially interactive-heavy applications. If you couple this with Webrender, which makes animations and all that stuff faster, then you get even more benefit.

[01:08:08.11] One of the reasons we tried to parallelize everything in Servo is because of Amdahl's Law, which says that the limit on your performance gain through parallelization is capped by the longest serial piece. So if you have a piece of code that's not parallelized, that's just making the performance of the whole system worse, so you have to parallelize everything to get everything faster. Those two pieces go really well together, and are gonna ship in Quantum. The idea is that those will roll out to users sometime next year. They'll probably be available on nightlies and people can play around with them before that. Of course, if you want to, you can play around with them in Servo right now.
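Amdahl's Law can be written down directly: with a fraction `p` of the work parallelizable and `n` cores, the best possible speedup is `1 / ((1 - p) + p / n)`, and the serial remainder `(1 - p)` caps it no matter how many cores you add. A quick sketch:

```rust
// Speedup predicted by Amdahl's Law for a parallel fraction `p` on `n` cores.
fn amdahl_speedup(p: f64, n: f64) -> f64 {
    1.0 / ((1.0 - p) + p / n)
}

fn main() {
    // Even with 95% of the work parallelized, 4 cores give only ~3.5x...
    assert!((amdahl_speedup(0.95, 4.0) - 3.48).abs() < 0.01);
    // ...and no number of cores can ever beat 1 / (1 - p) = 20x, which is
    // why every serial piece of the engine has to shrink.
    assert!(amdahl_speedup(0.95, 1_000_000.0) < 20.0);
}
```

This is the arithmetic behind parallelizing everything: any subsystem left serial becomes the `(1 - p)` term that bounds the whole browser.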

Let's talk about that... Getting started - you try to make it very easy. Projects like these, of the size and scope, especially in a systems-level language - a new one - that many people don't know very well, they're intimidating. Help us here on the show; talk to our listeners about how they can get involved, help out, try it out, give it a test drive and help push the web forward with you guys.

It's really easy to get involved, and we have stuff to do for people of pretty much all skill sets and all language backgrounds. Most of the code in Servo is written in Rust, but we do have a fair amount of Javascript, and also some Python. There's always tooling, automation and things like that for people who are system administrators.

One of the ways we help people get on board is a page called Servo Starters, which is basically a list of bugs we have flagged as easy for new contributors to tackle. The philosophy here is that we pick bugs that are so easy that the only hurdle people are jumping through is getting the code checked out, making the changes, the mechanics of getting it onto GitHub and getting a review, interacting with the CI infrastructure, and that kind of stuff. It's pretty easy to get started, and there's so much stuff missing in Servo... I know this sounds like I'm talking against my own project, but the web is really huge, so don't count that against me. There's so much to do that there's probably something you have personally used that is not implemented, that is actually fairly straightforward, and you could go and try your hand at it.

We have these Servo Starters, and we also have bugs labeled E-Easy, although that can sometimes be a trap, because we don't always know how much work is actually there, and it turns out it should have been E-Extremely-Difficult-Run-Screaming. [laughter] But for people who wanna start contributing, those are a good way in; we have a bunch of people on the team who love mentoring new contributors - we do this all the time. We also support things like Outreachy and Google Summer of Code, and a couple of similar programs run by different universities for students in various classes. We do a ton of work to onboard new contributors and make sure there's work for them... We're actually sort of victims of our own success here - Rust is popular enough that we have a bunch of people waiting in the wings, and we do a pretty good job of identifying these easy bugs, so they're usually gone within hours of us filing them.

One of our team members calls these the "E-Easy piranhas", because if you dangle some E-Easy bugs out, a thousand fish jump out of the water trying to snap at them.

Yeah, I'm hanging out on your issues page as you talk, just to get some context on that. GitHub.com/servo/servo - there's 1,775 open issues. Of those, 28 have the E-Easy label, and of those, only four aren't assigned. So you have 28 easy things, and 24 of those have already been taken by the E-Easy piranhas. They've already been snatched up.

[01:12:09.28] Yeah, so we are constantly struggling to keep up with demand, I guess, but it's a job that we absolutely love.

[laughs] Awesome problem.

Yeah, it is an awesome problem and I'm very fortunate to be the owner of this problem. We're constantly adding new stuff there, so if people wanna contribute and they find out there are no E-Easy bugs left, you can reach out to us in IRC, on the mailing list, on GitHub, and someone will create an E-Easy issue custom for you, based on the kinds of stuff that you're interested in working on. We have to do this all the time, because usually we don't find out they're all gone until somebody shows up going "They're all gone! I'm so sad!", and then we'll make a new batch.

Can I ask you kind of a philosophical question, to a certain degree, about this?

What's the driver behind desiring so much contribution? What's the goal there?

We wanna get a web engine that ships to users. We have so much work to do that a dozen paid people are never gonna finish it. If we don't get some other people helping, then a) we're probably not gonna finish, and b) most of our ideas are terrible. The only reason we've had as much success as we have is through iteration and attacking each other's ideas -- "attacking" is probably the wrong word there, but you know what I mean.

We're batting around these ideas, trying new things... The more people who are involved, the more of that happens. Just to give an example, Webrender was the brainchild of Glenn Watson, who's on our team. He came from the games industry... Of course, he was someone we hired, but he brought a completely different perspective on how all of these things work - that was one of the reasons we hired him. Webrender is the direct result of that different perspective. Access to those different perspectives is definitely one of the things we wanna get.

There's also a large number of people on the team who are really passionate about open source in general, and that's how we wanna spend our careers - working with other people on making good stuff that everyone can use.

Well, that definitely resonates with us around here at the Changelog, for sure. Very cool. Well, that sounds like E-Easy is the way to get started. Of course, you mentioned the nightly builds, which you can download and give it a test drive. Lots to do, lots of work yet to be done, not just by those at Mozilla or those at Samsung or those at any specific camp, but the whole community can get together, build Servo together, learn some Rust [unintelligible 01:14:48.25]

Jack, thanks so much for joining us... Any last thoughts or words you wanna get out there - you have the ear of the developer community - before we close out?

Yeah, we'd love to hear feedback on what you think you could do with the things we've already done, or what kinds of performance problems you struggle with in your own applications. We're coming up with new project ideas all the time. We're currently starting a new effort to try to significantly improve DOM API performance, which we call Magic DOM. We'd love to get feedback on what kinds of things developers are struggling with, and we'd like people to run the nightly and let us know what happens on their own sites... It turns out that if you have people run your code on stuff they authored themselves, you're much more likely to get a minimal, actionable test case out of it, because they know exactly how to shrink it down. That's the kind of feedback we would love to get; even if you're not interested in contributing, we'd love for you to just take a look and let us know what you thought.

Very cool. Well, thanks so much again, Jack Moffitt. All of the links for this show will be in the show notes. If you wanna get a hold of Jack, we'll have the links to him in the show notes. Servo, of course, all the Wikipedias, and Jack's even gonna send over some slides and some other things that he has in reference to some of these six areas of performance that we discussed, if you're interested. I know we had to breeze through a couple of those.

Thanks again, Jack, thank you to all our listeners. We really appreciate you tuning in. Of course, our sponsors - thank you, we love you as well. That is the show, we'll see you next time!
