Changelog Interviews #524

Mainframes are still a big thing

This week we’re talking about mainframes with Cameron Seay, Adjunct Professor at East Carolina University and a member of the Governing Board of the Open Mainframe Project. If you’ve been curious about mainframes, this show will be a great guide.

Cameron explains exactly what a mainframe is and how it’s different from the cloud. We talk COBOL and the state of education and opportunities around that language. We cover the state-of-the-art in mainframe land, System Z, Linux on mainframes, and more.


Discussion


2023-01-29T14:03:19Z

What a great episode!! Cameron’s enthusiasm, modesty, and perspective are awesome. Whether it’s open source, mainframes, or inspiring folks to get started or change direction in tech, this guy is obviously something special. We need more like him in this world, or perhaps we just need to be more intentional about encountering them. Thank you all for your efforts.

2023-01-31T01:36:18Z

This was very interesting… I’m currently a retired “corporate IT guy” and started my career back in 1979 after learning COBOL at a for-profit outfit named the “Computer Processing Institute” in Hartford, CT.

COBOL actually is an acronym for COmmon Business Oriented Language, and I spent the first ten or twelve years of my career as a coder for banks and insurance companies, which all employed huge IBM mainframe installations to process their business.

There is one aspect of the business world of computing that was glossed over a bit: the point about the “two or three” key people who understand the topology of how the systems process the business. The key thing to understand about why this is the case is that the business no longer understands the intricacies of its own business, primarily because that knowledge resides IN THE CODE. The primary reason why various conversions to newer technology fail (aside from the fundamental difference between mainframe sequential and PC/server parallel-based architectures) is that no one in the business can any longer describe accurately and completely how the business operates at the transaction level. This is the primary reason why there are thirty- and forty-year-old applications chugging away on mainframes!
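To make that concrete, here is a hypothetical sketch (in Python for readability; the real systems are COBOL, and every name and rule below is invented) of what “the business lives in the code” looks like:

```python
# Hypothetical illustration: decades of business decisions accreted into one
# routine, with the code itself as the only surviving documentation.
def monthly_service_charge(balance_cents: int, account_type: str,
                           opened_year: int, state: str) -> int:
    """Return the monthly service charge in cents for one account."""
    charge = 500                      # base charge: $5.00
    if account_type == "SAVINGS":
        charge = 200                  # 1983 fee schedule
    if balance_cents >= 100_000:
        charge = 0                    # waiver negotiated in 1987
    if state == "CT" and opened_year < 1991:
        charge = min(charge, 150)     # grandfathered regulatory cap
    if account_type == "TRUST":
        charge += 75                  # rider added in 1994, never written down
    return charge
```

Multiply that by thousands of programs and millions of lines, and nobody can describe the business at the transaction level anymore.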

2023-02-02T02:48:44Z

Loved this. At the end of the year, I won’t be surprised if it’s my favorite.

Jerod Santo

Omaha, Nebraska

Jerod co-hosts The Changelog, crashes JS Party, and takes out the trash (his old code) once in a while.

2023-02-02T15:21:22Z

Thanks! My goal is to make this one NOT your favorite. Not because it isn’t worthy, but because that’d mean we put out some seriously good shows this year…

[deleted]

2023-02-04T16:34:48Z

In the ~6 years I’ve been listening, this episode has been my favorite so far. Cameron is an amazing guest. Definitely going to learn some COBOL over the summer.

2023-02-06T18:48:52Z

I was so excited for this discussion and even shared it with a few COBOL devs before listening myself.

This discussion fell short in almost every area except historical context.

The transactional-processing strength of mainframes was an interesting point, but it was completely overshadowed by the failure to go deep on why and how. There were contradictions throughout, such as “the number of COBOL devs is not a big concern at the moment” versus “the IRS wants 600 by yesterday” - what? The discussion felt completely off the rails by the time I made it to the “How do we get people interested?” chapters.

The state of education is concerning given the importance of mainframes to our economy.

2023-02-06T19:18:28Z

I thank you for your comments and appreciate the criticism. I’m sure IBM and the mainframe community would appreciate anyone with ideas as good as yours about teaching COBOL actually doing so, rather than directing less-skilled people like me, whose skills are apparently not up to the task, in how to do it. Oh, I forgot: NOBODY TEACHES COBOL. Come join us! We are few, but we are doing the best we can.

Ben Richardson

Glendale AZ

I believe that I am the only person who has implemented hardware-assisted compression (a feature included in z/OS) in IMS databases using the included IMS exit stub in RESLIB. I did this for PCS Health (now CVS) in 1997 and again at American Airlines in 2018. This is useful for two things. One, it extends the life of legacy applications which are about to outgrow the database design limits, even if 8GB OSAM has already been implemented for that EOL extension. Two, it can eliminate expensive vendor software that does compression. That vendor compression is rarely as good as the custom Lempel-Ziv-Welch that I can generate from an unload file. Contact me on LinkedIn if you have a need for this compression.
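For anyone who hasn’t met the algorithm family Ben mentions, below is a toy Lempel-Ziv-Welch compressor in Python. It is illustrative only: the z/OS hardware-assisted feature he describes works from precomputed compression dictionaries (built, as he says, from an unload file) and runs in hardware, not in code like this.

```python
# Toy LZW compressor: the dictionary "learns" repeated byte sequences as it
# scans the data. Illustrative only; z/OS hardware compression uses static,
# precomputed dictionaries rather than building one on the fly.
def lzw_compress(data: bytes) -> list[int]:
    dictionary = {bytes([i]): i for i in range(256)}  # seed with all single bytes
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                 # keep extending the match
        else:
            output.append(dictionary[current])  # emit code for longest match
            dictionary[candidate] = next_code   # learn the new sequence
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output

# A dictionary tuned to the actual data (e.g., built from a representative
# database unload) compresses better than a generic one; hence "custom" LZW.
print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```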

2023-02-27T23:45:51Z

You are making a way where there was no way. Several attempts to engage community colleges with IBM supplying the course material have failed, from what I can find… it’s not currently a thing except for Professor Seay!!!

2023-02-07T02:46:48Z

The best Changelog I’ve listened to in a while. Cameron, thank you so much for educating most of us on what makes a mainframe a mainframe, the difference between a mainframe and a supercomputer, and what they’re being used for out in the wild! This was really a special episode.

2023-02-07T16:44:23Z

Just finished my 2nd listen, and it won’t be my last. I might just keep this episode handy for when I need a little pick-me-up :D

Sounds like I’m not the only one who might consider this the best all-time episode of The Changelog. And that’s high praise, because I think they are all pretty good.

Is there a video on this episode available? I’d love to see Cameron in action!

In addition to being inspirational, it’s timely. I’m actually witnessing this aging-out of those with the “domain knowledge” at two of my clients (one of them is using an application written in COBOL). It’s triggering some interesting discussions. And as a maintainer of some very old software myself, nowadays I spend a lot of time thinking about these topics.

So many business rules (and edge cases) got baked in over the years - even decades - that only the authors have a chance at understanding how the whole thing works. So even though bringing in new people with the appropriate skills might be a good first step, absorbing the domain knowledge of what often amounts to a highly-customized “Rube Goldberg” machine is the really hard part.

I understand why Cameron says that managers in this space are so important. Somewhat naive upper management hears tell of this mythical place called “the cloud” where you click a few buttons and all your tech problems are solved. And they’re like “I want to go to there” :D

Ben Richardson

Glendale AZ

2023-02-27T23:06:23Z

I love it. Great intro by Cameron. I started doing COBOL in 1980 at AT&T (Western Electric) and then moved on to supporting IMS DB/TM version 1.2 in 1981. I’d love to get into a discussion of the internals of how mainframe stuff happens and why it’s so different.

Some stuff I can add: ACP, the Airline Control Program, is now the TPF operating system, with SABRE as the client application… it was all one thing in the beginning. IMS DB/TM (or DC) was written after 1965 and ran standalone for NASA in 1968. It had to run in 256K of magnetic core, with no tape and no disks, and it allowed a live real-time countdown for the Saturn V moon shots. System R (relational, i.e. RDBMS) became DB2 in the early or mid 1980s.

When I started I was 23 and the youngest one in the department. Everyone else was graying guys who had been hired to design missile guidance systems before SALT I and SALT II.

I can describe how IMS schedules incoming transactions and how this method is mirrored in z/OS, because IMS code was used to create the first MVT, then DOS, MVS, MVS/XA, MVS/ESA, OS/390, and z/OS… all the way to today’s software, with the compatibility to run really old code on the latest hardware. And the record for Linux on z is hard to find, but I think it is more than 10K instances on one box.

2023-04-02T22:05:35Z

This is a fantastic episode! Really enjoyed listening to it, and it’s been my favourite so far.

But it glossed over SO MUCH. I have so many more questions about mainframes by the end of the podcast than I had before.

  • If single-threaded performance is the core strength of mainframes, how does it compare to a modern x64 CPU? Is the top x64 performer roughly as fast as a mainframe from 1, 3, 5, or 10 years ago? Let’s say it is 10 years: does that mean a modern x64 server could be used in place of a mainframe if sent back in time, or is there something more to it than just transactional speed?
  • x64 CPUs don’t waste cycles, they wait on memory. Feeding the ALUs is the biggest challenge, hence all those tricks with SMT, speculative execution, and the cache hierarchy. If System Z “doesn’t waste cycles”, how is that achieved architecturally? Enormous caches? Low-latency RAM? How does IPC compare between x64 and mainframes for a typical workload?
  • On scaling vertically: it was mentioned that adding a new processor or an entirely new unit “to double performance” is an option. Neither can help single-threaded performance, so how does it actually scale? Or can it actually help single-threaded performance with some insane hardware magic?
  • If single-threaded performance is the reason to go with a mainframe, then why did IBM invest in developing the best-in-the-world hypervisor for z/OS? The more I think about it, the less I understand.
  • “The main CPU just does transactions; everything else is offloaded to other subsystems.” How exactly does that work? An x86 CPU doesn’t write bytes onto Ethernet either; the network card does, often offloading big chunks like the TCP stack. How are mainframes different? What abstractions is the main CPU operating with? Does it even pull data like in x86, or are data and instructions fed to the main CPU for completely deterministic and entirely local computation?
  • With a new model every 2 years, what were the biggest changes (other than raw speed) in the last decades?

I really hope there will be more podcasts on this topic.

2023-04-10T02:42:32Z

“Or can it actually help single-threaded performance with some insane hardware magic?”
“everything else is offloaded”

These two questions go to an item that was only briefly covered in the podcast. While the CPUs don’t have blinding-fast 4 GHz clock speeds, the entire system is tuned to work together, and the subsystems take on a big part of the I/O demands, offloading that work from the CPUs. Network, encryption, and disk don’t demand cycles from the CPU, so the CPU can focus on business logic. This lets everything scale more seamlessly, which provides better throughput for both single-threaded and multi-threaded work.
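As a toy analogy in Python (purely illustrative; a real channel subsystem is dedicated hardware, and every name here is made up): the “CPU” hands each I/O request to a separate pool and spends its own cycles only on business logic.

```python
# Toy analogy: a thread pool stands in for the mainframe's channel subsystem,
# so the "CPU" loop never burns its own cycles waiting on devices.
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

channels = ThreadPoolExecutor(max_workers=4)   # stand-in for channel processors

def do_io(record_id: int) -> str:
    """Simulated disk read, handled entirely by a 'channel', not the 'CPU'."""
    time.sleep(0.01)
    return f"record-{record_id}"

def business_logic(record: str) -> str:
    """The only work the 'CPU' does itself."""
    return record.upper()

# The "CPU" fires off all I/O requests at once, then spends its cycles on
# business logic as each completion arrives, instead of waiting on devices.
futures = [channels.submit(do_io, i) for i in range(100)]
done = [business_logic(f.result()) for f in as_completed(futures)]
print(len(done), "transactions processed")
```

On a real mainframe the “pool” is channel processors, crypto cards, and other dedicated engines, so the offload costs the central processors very little.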

But these machines are not doing GPU-intensive rendering or games, because that is not the target audience. One size does not fit all workloads.

System/390, and then z/OS, had not only CPU cycle management but also I/O management well over 20 years ago, which allowed co-existing workloads to seamlessly share the host. The management tools were awesome 20 years ago!
