Open is the way
Joseph Jacks (JJ) is back! We discuss the latest in COSS funding, his thesis for investing in commercial open source companies, the various rug pulls happening out there in open source licensing, and Zuck/Meta’s generosity in releasing Llama 3.1 as “open source.”
Joseph Jacks: …and in order to upgrade the software, you would have to order, and they would have to ship you, a new series of floppies, which you would then physically install, and you would own those physical pieces of technology that gave you the upgrade. And the pricing model was all built around that, too. It was perpetual licensing, where you would basically buy that version, then pay for maintenance on that version, and in order to upgrade to a new version of the software you’d have to pay a new upfront amount for the new version, and then do that over and over again.
[34:18] Most of the – I would say probably 99%, with some interpolation, synthesis, mixing and matching of ideas… Most of the categories in software, including SaaS, have become a little bit more fractal and domain-specific… Cal.com has almost like one feature, and wow, it’s a billion-dollar company. That just happens because of economies of scale, and humans using more and more technology. But most of these ideas were implemented in the seventies, eighties and nineties, even in the mainframe days. Scheduling, booking things… I mean, this is like calendar functionality.
I was recently watching the oral history of Mike Markkula, who was the first CEO of Apple. He was the guy that Sequoia founder Don Valentine introduced to Steve Wozniak and Steve Jobs to basically help them with the business plan and run the company. And Mike was also a really brilliant programmer, and he wrote FORTRAN and BASIC programs that would actually run on the early Apple computers. And the collection of those programs came with a credit, sort of a copyright credit at the bottom, which was Johnny Appleseed. You might remember this from the early Apple computers, this Johnny Appleseed name. So Johnny Appleseed was Mike Markkula. And Mike, in the Computer History Museum oral history, describes this – it’s on YouTube, you can watch it. It’s a really great, multi-hour oral history. Mike actually personally wrote the programs for the applications that would ship in the early Apple computers, including the calculator, the calendar app, the bookkeeping app… a lot of these basic ones – a very basic, primitive word processor…
And so I just think back to how the most fundamental, basic things that humans do as part of their work or their lives have been re-implemented so many times in the history of technology, going back to literally the fifties and sixties, that when we have new ways of distributing and innovating in the core technology, what happens is you just reinvent and re-implement what came before, with shinier, newer, sexier, more societally-relevant and interesting abstractions. And then those things end up taking off. And oftentimes people are given a choice. There’s the free version of the thing, maybe it’s open source and less sophisticated, but you can totally control it… Then there’s the proprietary version of the thing, which is much more polished and sexy and packaged.

This is so fascinating – for the first time, and sort of skipping ahead to the AI stuff, we’ve seen this incredible, shockingly surprising acceleration of equivalency. This actually happened this week. I mean, the latest Llama models are state-of-the-art. And by all accounts, the benchmarks are pretty much indistinguishable, and the capital required to produce them is similar. Mark Zuckerberg literally did an interview a couple of days ago saying they’re spending tens of billions of dollars building Llama, and they’re giving it away completely. And in fact, they’re making – Yann LeCun was just tweeting about this the other day… They’re making their licensing even more permissive. You can do distillation, quantization… You can use the outputs from Llama models to train other models… It’s almost Apache- or MIT-level permissiveness. The artifact is very different for a neural network that has a massive amount of information compression. Not in the Pied Piper sense, by the way, but in the neural network, AI, tokenization of –