JS Party – Episode #338

Undirected hyper arrows

with Chris Shank


Chris Shank has been on sabbatical since January, so he’s had a lot of time to think deeply about the web platform. On this episode, Jerod & KBall pick Chris’ brain to answer questions like, what does a post-component paradigm look like? What would it look like if the browser had primitives for building spatial canvases? How can we make it easier to make “folk interfaces” on the web?

Featuring

Sponsors

Wix – Wix Studio is for devs who build websites, sell apps, go headless, or manage clients. Integrate, extend, and write custom scripts in a VS Code-based IDE. Leverage zero-setup dev, test, and production environments. Ship faster with an AI code assistant. And work with Wix headless APIs on any tech stack.

Fly.io – The home of Changelog.com — Deploy your apps close to your users — global Anycast load-balancing, zero-configuration private networking, hardware isolation, and instant WireGuard VPN connections. Push-button deployments that scale to thousands of instances. Check out the speedrun to get started in minutes.

Notes & Links


Chapters

1. 00:00 It's party time, y'all (00:56)
2. 00:56 Hello party people (00:35)
3. 01:30 Chris on sabbatical 👀 (06:33)
4. 08:04 Sponsor: Wix (00:55)
5. 08:59 The post-component paradigm (04:27)
6. 13:26 KBall needs clarity (02:26)
7. 15:52 Defining 'behavior' (03:18)
8. 19:10 What a future might look like (07:18)
9. 26:28 Show us some code (01:47)
10. 28:14 Intent-driven UX (04:57)
11. 33:11 Sponsor: Fly.io (03:06)
12. 36:18 Spatial canvases (14:04)
13. 50:22 Folk interfaces (08:49)
14. 59:10 Jerod gives Chris ideas (06:11)
15. 1:05:22 Closing time (00:59)
16. 1:06:21 Next up on the pod (01:11)

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Hello, party people. It’s me, Jerod, your internet friend, and I am joined by my friend, KBall. What is up, KBall?

Hey, hey. Excited to be on the show again, and talk about some fun stuff today.

Always excited to have you on the show, excited for our guest today… Hopefully, there’s no surprises hiding in this name, because I didn’t ask you how to say it… Chris Shank. Is that right, Chris?

Yes. Very straightforward.

You know, every once in a while there’s a surprise lurking in an otherwise unassuming name like Chris Shank… But thankfully, none here today. So we’re excited to have you, Chris. Joining us from sabbatical… Living the good life, man. What’s up with this? Are you just taking some time off?

Yeah, so I guess to introduce myself, I’m out of SoCal, so I’ve been there for a couple years in Long Beach, and I was working remotely for a fintech company for the past five years… Super-cool stuff. I was able to work on design systems, data-heavy dashboards, real-time market data, streaming through - you name it, we were working on that.

I think while all that was happening, I’ve been – I have a plethora of side projects, and I think one of the things I learned from working there is I really like these very open-ended, researchy kinds of questions of “There is no right answer”, and just sort of parachuting in, exploring that space, trying to figure out what works, what doesn’t. And yeah, I sort of decided, I was like, “I think I want to take a break. I think I sort of want to do an independent researcher kind of role, and just see what I can explore.” I feel like I have a lot of ideas, a lot of thoughts, and I’ve been doing that for most of this year, I think. It’ll probably be ending shortly, but it’s been a really cool experience to do this kind of thing – just see where my attention brings me, talk and collaborate with people, and stuff. So yeah, it’s been really fun.

That does sound really fun. Kball, have you ever done anything like that?

Oh, yes. They’re lovely. I actually have a theory about time off, that I read somewhere, which is that you should do fractal time off. So every week we have a weekend, but every month or so we should have one long weekend. Every six months we should take a week, or something. Every five to seven years you should take a multi-month break, and some sort of sabbatical. So I have kind of done that intentionally once, taking three months, not focused on research so much as on travel, living someplace very different… And done it sort of unintentionally in some ways a couple of times after either layoffs, or a company started to implode… Those times I did end up taking paid projects much more quickly, but it was still not doing the whole full-time job thing. I highly, highly recommend that type of variation in your work life, if you can make it work financially, family obligations, all those other things.

Right. Well, I’m just jelly, because I don’t think I’ve ever done anything quite like that for more than a week or two. We do two weeks every year, contiguous, at the end of the year, which is always a nice reboot… But I’ve never done multi-month… Let alone - Chris, I mean, you’re coming up on… Is it eight or nine months at this point, right? Because –

Yeah… It’s extended a little further than I thought it was going to.

Yeah. Well, I mean, when you’re having a good time, stretch that sucker, right?

Yeah. But it’s been really refreshing and grounding, I think… And yeah, it’s sort of a leap. It’s a little stressful; you’re in the red, but…

Right. Just living off savings for a little while, right?

Yeah. But it’s been worth it.

That is a stressful thing. You watch that bank account number ticking down… And even when you’ve – at least I’ve found; even when I have done the math of saying “It’s okay for me to do six months or three months” or whatever it is, still, the effect of watching that tick down is a little piece of stress.

Yeah. No, it is. I think on the plus side I’ve never been in an environment where I just could do what I wanted… Like, coming out and going through school, work, you’re always sort of told what to do, and you’re always in an environment that’s nudging you in a certain direction… And so just being able to take a step back, be like, “This thing isn’t happening today. I’m going to work on it tomorrow.”

[laughs]

It’s a weird feeling that, I don’t know, I’ve never really experienced before.

[00:05:39.12] You can get some amount of that if you do a consulting contractor type of thing, if you work at it… Like, I knew a guy, also based in Southern California, actually, who over a period of years set up a set of freelance contracts with people where they – it was with companies that wanted his skill set, but were okay working asynchronously, had a bunch of work that they were willing to pay to get done, but didn’t have strong deadlines on… And to do that, he made other trade-offs. He was sometimes working at a lower rate than somebody in the most on-demand thing, or he was working with older technologies, or whatever. There were trade-offs involved. But he did this, and he cultivated this set of clients, and then he told me “Okay, I’ve spent five years setting up my life, so now I can choose every hour, any day, any time zone. Do I want to work today, or do I want to work this hour or not?” And he took off around the world, and just was doing that from everywhere. So it is possible even in a work environment, if you craft it enough, but it’s hard.

Yeah. No, it has been hard. There’s totally been days when I’m like, “Oh, I don’t want to do anything”, and you can’t not do anything… Yeah, it’s been interesting to push through that though, and have – I think for me, just having long-term projects that I’ve been working on for months, and trying to… You know, there’s a premise, or a vision of this idea of “How can we do this?” and just working on that has been very uncertain, and in a kind of way – I don’t know, it’s been illuminating.

That’s awesome. I think we need both things going on in society, right? We need those who are just consistently pushing the ball of progress forward, and working and doing all of the maintenance things… And then we also need time and people to think deeply, and attack hard problems, and have the luxury of saying “I’m going to spend a week just thinking about this thing. I’m going to try 17 iterations and see what happens.” A lot of us are in situations where you just can’t do that. There’s no way of doing that and accomplishing all the things that are in front of you today.

Break: [00:07:48.05]

We’ve brought you here to pick your brain a little bit, to noodle with you, to hopefully glean some of your big ideas that you’ve been having over the last nine months… And so we thought we would start with this one, about the post-component paradigm. We’re going to go post-components. We are definitely in the component as substrate era of web development, right? I think we’ve arrived there over the last 7 to 10 years… And you’re thinking post-components, so let’s unpack that. Why are you thinking about that, and then what all the implications are as we go.

I mean, there’s a lot… So I’ll try to distill it down. When you’re talking about post-component, it’s a little facetious, right?

So they’re still components.

[laughs]

[00:09:49.23] The component paradigm has gotten us really far. It really enabled us to – like, if you go back to 2012, 2015, whenever that idea started and started to spread, it helped us build more complicated user interfaces on the web. There’s no doubt about that. And I think because of that, we’ve hit this wall… And part of that wall is that the limiting factor of a lot of these web applications that we see today, I’d argue, is not complexity at the rendering level. It’s complexity at the behavioral level. One of the things that I think we’re really lacking is a set of “Here’s all the different types of web applications you can make.” The ones that I’m thinking about are partly what I worked on; we’re streaming real-time market data, we’re contextualizing news stories that are also streaming in real time with that market data… There’s an endless amount of APIs and data that you want to integrate. Outside of FinTech you have real-time spatial canvases, multiplayer experiences… The types of applications that we’re trying to build today are a lot more complicated than they were 10 years ago.

I’ve anecdotally had experiences of “What the hell is happening in this application? Why is this data coming in wrong? Why are our streaming services being canceled and restarted?” And then there’s also this other aspect of - the UI is staying the same, and all of the bugs and the additions that we’re adding to this application are behavioral. It’s just “Oh, we’ve got to multiply X, convert this currency”, or whatever. And that doesn’t happen in the UI, or how we render things. It’s happening in the behavior, 20 yards away. And I think we really lack primitives and approaches to understanding applications better. We really struggle to see how “Oh, a user interacted with something, and there was a click event. What is the application doing?” And part of that stems from this tension of co-location and components sort of being really encapsulated things. The reason React aggressively wants you to co-locate things is “Oh, it helps you reason about the application.” It’s like, “This button or this table is just its own thing”, and you just need to think about state inside of that thing. And it’s like, well, what happens when this table is talking with the drawer, and affecting other parts of the application? The behavior that we want to co-locate is getting hoisted up and up and up. And then it’s like, well, now we don’t have co-location; we’re just passing all this data down. And so when you want to say “Oh, this cell in a grid was clicked”, the side effects and the things that the user interface needs to do in those cases are not encapsulated to the table.

Right.

It slowly becomes braided with the rest of the application. And I don’t think we have a good way to understand that, what is the application doing as a whole, not at the component level… Because we could only see –

Can I ask some questions? Because I’m feeling like I’m a little confused here, because a lot of different things are getting talked about. So if we think about the pieces that go on in a frontend application, you have rendering templates, which is I think one of the pieces of components. You have state management, and you have interactions with logic of different sorts. And in some cases, all of those things are co-located in a structure that we call a component. And in other cases, they’re separated, and you have different services, or you have a state manager, or you have things like that.

[00:13:53.05] So if I’m understanding you properly, what you’re talking about is saying, “Hey, we shouldn’t be putting all of these things in components. They should be separated out, and you should have services, and you should have a centralized state or a compartmentalized state that is not tied directly to the components, because you have behaviors that are not linked purely to one location on the page.” Is that right? I’m trying to understand what are the pieces here that we’re talking about.

So it’s tricky, because there’s a couple of things to unpack here. We think about - one, as you’re saying, as an application gets more complicated, more and more state needs to be hoisted up. And we have state management systems, and there’s all different types. But even if you have a centralized store, a Redux store, what you’re going to find is the behavior that you’re defining in that Redux store is just data. So there’s some state that you need to model. Like, “Is this grid being displayed or not?” And what’s our default way to model that behavior? It’s a boolean. And when you start having more booleans to model behavior, when you start checking the state of your application - “Oh, this event was sent. If we’re in this state, then we should do this. If we’re in this state, we should do that” - there’s this conditional logic that starts increasingly growing. We need to be extremely defensive when events start coming in and we start processing them. And that conditional logic is where a lot of bugs hide. Part of it is even the state management tools that we have are really concerned about making data reactive, as opposed to describing behavior.

Chris, when you’re talking about behavior, you say “We don’t have primitives to model and discuss behavior.” I’m trying to understand that particular – I mean, “We don’t have the primitives” to discuss it… But behavior - is this user behavior? It seems like you’re also discussing business logic as information comes in from the outside, which also needs to have specific ramifications, or I don’t know, resulting consequences. When you say behavior, are you speaking specifically of user behavior, or are you speaking of external systems?

The behavior is specifically “What is the application doing?” If a user interacts with the application, we need to perform side effects.

That was the word I was looking for earlier, side effects. Thank you.

Yeah. User interfaces are all about a user interacting with the application, there’s some computer model, and the user interface sits right in between and is coordinating both of those.

It’s a reactive system - not in the sense of signals or reactivity, but it’s a reactive system in that the user interfaces that we’re writing are constantly communicating between these two worlds.

And we’re treating everything as data. That’s the first thing - when you’re trying to model behavior, there are useful things like observables, which is “Okay, let’s start thinking about streams of events.” But there’s other notations like state charts, state machines, behavior trees, that let you start modeling the flow of behavior. And one of the important qualities that they have is selective message passing, which pretty much just says “If you send us an event and we’re in a state that doesn’t accept that event, then we’re going to ignore it.”

[00:17:49.19] A very easy example of that is you have a form, the user submits that form, but they double-click it. So we have two submit events. A lot of state management solutions don’t have that particular affordance in them. So essentially we will just execute that twice. And there’s actually something really interesting, because I saw recently that React is not allowing async event handlers, because of this exact thing. Because if you have an async event handler, it’s just going to be called straight away, perform some asynchronous stuff, and there’s a chance that the user can press it again. Press that button again, and that same handler gets called again.

So what happens in that case is we have to say “Oh, let’s keep track of whether it’s running or not.” There’s all this extra work we have to do to prevent a trivial case, a user double-clicking a button when we sort of assume that it’s clicked once. And so why aren’t we helped? Why are these frameworks that are helping us render things, why aren’t they helping us with these very simple behavioral edge cases, for example?
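
For readers who want to see the shape of that idea, here’s a minimal sketch (not Chris’ code; the endpoint and markup are made up) of a tiny state machine whose selective message passing simply drops a second submit while the first one is still in flight:

    // A hand-rolled two-state machine: events that aren't valid in the
    // current state are ignored instead of being handled twice.
    const form = document.querySelector('form');

    const machine = {
      state: 'idle',
      transitions: {
        idle:       { SUBMIT: 'submitting' },
        submitting: { DONE: 'idle', FAIL: 'idle' },
      },
      send(event) {
        const next = this.transitions[this.state][event.type];
        if (!next) return; // selective message passing: drop the event
        this.state = next;
        if (event.type === 'SUBMIT') this.submit(event.data);
      },
      async submit(data) {
        try {
          // '/api/form' is a placeholder endpoint for this sketch
          await fetch('/api/form', { method: 'POST', body: JSON.stringify(data) });
          this.send({ type: 'DONE' });
        } catch {
          this.send({ type: 'FAIL' });
        }
      },
    };

    // A double-click fires SUBMIT twice, but the second one is ignored,
    // because the machine is already in the "submitting" state.
    form.addEventListener('submit', (e) => {
      e.preventDefault();
      machine.send({ type: 'SUBMIT', data: Object.fromEntries(new FormData(form)) });
    });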

So what would a future look like? What would a future that has – it’s post-components, so we’re thinking about behaviors, we are in the world that you’ve been noodling in with this selective message passing functionality… What would that look like in the browser, as a framework, as a developer? Can you describe what that might be?

So first off, I think the first step that I’ve been exploring is this concept of intentions. When a user interacts with a button or something, there’s some intention that they’re putting into that interaction. And that intention is not typically defined in code. If you’re inlining some closure in that event listener, you’re just performing some imperative stuff. So that intention is implicitly defined, which is - I click a button, the intention is to close it. Or there’s all these buttons on this video call… Those aren’t typically defined in code. And what that means is that the place that that interaction is happening at needs to have all the state available to it for it to figure out what to do. Does that make sense? So whether that state is passed down to that component, or it’s defined in some store, the component where we’re attaching that event listener must have all the state that it needs to respond to it. Does that make sense?

The event handler that we have could be as simple as forwarding that event to a global store. But that intention is still implicitly defined in code. And so there’s all this additional data that we needed to pass down the component tree in order to respond to events.

Right.

And so the first thing I’ve been thinking about is what if we could declaratively define that intention in the DOM, for example, as an attribute? And we can make it easier for that interaction to be handled anywhere; it doesn’t matter where, as long as you have a protocol like “These are all the intentions.” It doesn’t matter that a user clicked on something, or had a keyboard press… There’s a lot of times when an application is going to have multiple ways for a user to generate the same intention in that user interface. And so if we can map the raw event, the raw interaction event on the element that it’s going to happen on, we don’t need to inline event handlers. We could use things like event delegation, which sort of uses the browser’s event system and event bubbling to say “Hey, we know the exact intention that a user meant when they interacted with this particular part of the user interface”, and we could respond to that anywhere. Maybe we want to respond to it in a component, maybe we want to respond to it at the root of the application. It doesn’t matter where. And this is important, because I think it’s really important that we could more flexibly create a shearing layer between the behavior of our application and the thing that’s rendering it.

[00:22:25.00] So when you say a shearing layer - like some sort of declarative area where you say “Here’s all the behaviors of my application.” And they’re distinct and decoupled from rendering. Is that what you mean?

Yeah. Yeah. Shearing layer’s like “We know at scale these things are going to move at different speeds. And so we want to create ways that allow them to move at different speeds.” That’s the sort of the concept of a shearing layer. And so my first premise is – let’s say we have a global store already. We still need to pass it everywhere - through context, or whatever dependency injection the component tree provides us. Why do we have to do that? Why do we have to pass down this entire context everywhere? Why can’t the event just come up to us and we know exactly what to do with it? That’s the first step of what I’ve been thinking about.

So is this kind of inverting the dependency tree then?

Yeah. Yeah, yeah, yeah. Because the user interface cares less about the data that it needs to respond to events. It just needs to be passed the data that it needs to render things. Does that make sense?

Yes. I just, I wonder… As you’re describing this to me, it sounds to me like PubSub. It sounds like event –

It’s an event-driven UI paradigm?

Right.

Yeah. But here’s the difference. Typically with event delegation what you’re saying is “I’m going to respond on behalf of this DOM element that was clicked or interacted with”, whatever event. With event delegation historically in the jQuery days what we would do is we would have to use some kind – we’d have to introspect the DOM somehow. You’d have to be like “Okay, it was a button…” You have to re-infer the meaning of what that DOM element was. So now you end up with this problem where the DOM structure that is rendering something is also important in how we respond to it.

And so you’re coupling the DOM structure to the behavior of your element, because you need to figure out –

If you’re using raw DOM events.

Right. If you’re –

But if you have a list of – so if I’m understanding you correctly, you’re saying essentially “Let’s define a list of intents, which could be modeled as events”, that then we have some sort of mapping or way of annotating, or something… So there’s an event handler that gets added to all these components that maps from the raw DOM event to the intent, and publishes that via global PubSub of some sort?

Yeah. And so essentially it allows you to have some form of locality of behavior. Me as a developer, I could say “Hey, this button - it doesn’t matter where it is, it always means this thing. It’s always going to do this thing when the user interacts with it.” And we could respond to that behavior anywhere. It loosens the coupling that we typically have, because the browser, through the way that you add event handlers – it’s the same mechanism that components and component frameworks use to allow us to attach event handlers declaratively.

Right.

You always – you end up in this weird thing where you have to directly attach an event listener to a DOM element, or whatever element that you’re trying to render… And yeah, I feel like a little more looseness there can really help us, be like “Hey, we don’t have to have all this state globally, but we also don’t need to bloat all these components with state that they don’t need to render things; they just need to respond to events.”

[00:26:07.02] And as such, what happens with components that depend on global context is they become less composable, because you’re injecting all this other stuff, or you have to pass props down… It defeats the purpose of having really composable components. So yeah, that’s like the first step that I’ve sort of been thinking about.
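
To make the idea concrete, here’s a rough sketch of the pattern Chris is describing, assuming a hypothetical intent attribute and handler registry (his actual code may look quite different):

    // Hypothetical markup: <button intent="close-dialog">×</button>
    // A single delegated listener maps the raw click to the declared intent,
    // so the response can live at the root (or anywhere else) instead of
    // being threaded down to the element as props or context.
    const intentHandlers = {
      'close-dialog': (el) => el.closest('dialog')?.close(),
      'add-todo': (el, e) => console.log('add a todo item', el, e),
    };

    document.addEventListener('click', (e) => {
      if (!(e.target instanceof Element)) return;
      const el = e.target.closest('[intent]');
      if (!el) return;
      intentHandlers[el.getAttribute('intent')]?.(el, e);
    });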

Have you coded this – because it sounds like good architecture. I’m with you. I think it’s abstract in the way that you’re describing it to us, of course, because we’re just using our words… But do you have code? Have you put this together?

Yeah, I’ve got a repo with a bunch of examples of this, from TodoMVC, to spreadsheets…

Okay. So this is a better way of building web apps in the post-component world, and you have sample code of like “Here’s how you would actually go ahead and declare all these intents, and here’s how you would have a shearing layer, and all these things”, and this would provide better decoupling from the DOM - something that the current model doesn’t currently provide for us.

Right.

I think that’s useful. Definitely share that code with us.

Because in the abstract –

Right now, or…?

Oh, I mean, send us a link. We don’t need to go through it step by step, but share it with us so we can put it in our show notes, so that we can follow along… Because I think for me where I’m lacking right now is example code. I think I’m with you… Kball, are you with him?

Yeah, I think so. I mean, I’ll be interested to see the code, because yeah, it feels like something that I have already seen in the world, and it’s kind of in some ways orthogonal to components… It does take us in this direction of – I mean, component frameworks themselves are not sufficient to build complex applications.

Right.

You need additional architecture and you need additional tooling. And so this might be a useful take on those. I think some of the things you’re talking about do remind me of existing solutions or approaches like MobX, or Xstate, or things like that, where you’re managing state and you’re managing messages, and you’re kind of dealing with that… And you’re doing a layer on top, which is you’re sort of standardizing what is the event space… And yeah, it’d be useful to see code.

I would be curious to think about kind of this concept of intent-driven UX. What does that enable for a user? So we talked about - okay, there’s this aspect of how do we write code, and what is reusable for us… But I’m more interested almost in what does a user interface – what new user interface paradigms are enabled? I’ve been thinking about this in the domain of a current age of AI. I think you see sort of interesting interfaces starting to emerge. Photoshop’s thing, where instead of you telling it what to do, a very imperative user interface paradigm, you sort of sketch out an area and say “I want something like this”, which is much more intention-based, and then it just fills in the details. So I think we are in an emerging age of intention-based user interface, and I’m kind of curious what that would look like, not in the implementation layer, but in terms of what it enabled.

Yeah. So I feel like part of this is we’re taking something that is generally explicit, or at least trapped in JavaScript. The concept of there’s an explicit user intention, that maps to a DOM event, on a certain element. That is something that is explicit. And now that it’s a DOM attribute, it has some declarative nature. And one of the benefits of that is you can go in DevTools and you can see, you can just look through the user interface and be like – you could literally just see the type of interactions that you can interact with as you’re scrolling through.

But I’m assuming an end user is probably not going in DevTools.

Well, he’s talking about a developer user.

Right. I don’t know exactly what you meant by – like, are we generating user interfaces, or are end users generating…

I’m just wondering what this enables in terms of what a user is actually going to experience.

[00:30:10.25] I don’t think it’s at that level, is it?

Yeah, it’s more at the – I’m not sure it affects the end user, unless they’re going into… Like, it makes certain things a little more observable if you’re going into DevTools, and stuff. But one of the things it could enable is we could see command palettes. We could generate a command palette from all these intentions that are in the DOM. So there might be additional affordances that you can extract from knowing this in a much more declarative fashion.

Yeah. Like “Here’s a list of things this web app can do.” And we know those, because we have defined them.

Yeah. As opposed to just living invisibly in JavaScript.
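
Building on that same hypothetical intent attribute, a command palette could be generated by scanning the live DOM, roughly like this:

    // Build command palette entries from every declared intent on the page.
    const commands = [...document.querySelectorAll('[intent]')].map((el) => ({
      intent: el.getAttribute('intent'),
      label: el.getAttribute('aria-label') || el.textContent.trim(),
      run: () => el.click(), // re-dispatch through the element itself
    }));
    console.table(commands);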

Well, that’s actually interesting because that takes us into – one of the really cool things about HTML and moving towards markup is it allows, for example, browsers to automatically infer a bunch of information, and then expose it in different ways, for example, to a disabled user, who might need different interfaces or different explanations. And so if you’re exposing your behaviors in that same way or your functionality, one might say okay, well, maybe I’m publishing a website, but somebody might have a voice-driven conversational interface that can access those same different things. Or maybe you get an automatic API interaction of some sort. I don’t know, there’s some interesting potential out of that.

I’ve thought about that too, and maybe there are ARIA attributes that can convey this information that we could use, instead of just a custom attribute that the browser doesn’t understand. But yeah, I could see it helping assistive technologies. I can also see it – there’s also a malleability aspect here. You could – and I don’t know, maybe this isn’t… I don’t know how useful this would be, but I can go in as someone that sort of knows the web and start making things more malleable. I can change the intention of this button. Maybe that’s good, maybe that’s not… I think there’s another aspect here, another topic that I’ve been exploring, about making the web more malleable, that it could be related to… But I haven’t really thought deeply about it.

And at least more programmable. I mean, you’re kind of providing an API for things that are slightly less human to be able to – I mean, assistive technologies are that. They are acting on behalf of a human, and so maybe more easily programmed… Which is good, but also can be bad, when you’re trying to keep the bots off your website, and they just keep submitting your forms, even though you’ve got the Captcha, and they’ve figured their way around it… And you’re like “No…! Stop being so stinkin’ human.” Meanwhile, I can’t identify all the squares that have a bicycle on them…

Break: [00:33:05.21]

That brings us to another topic, the spatial canvases stuff. So you’ve been trying to play with primitives for building – is this part of your malleability work, like spatial canvases, which –

…we have a canvas element, so I’m thinking this one’s already checked off the list, or… I’m just messing with you.

Don’t you like the canvas element? It’s pretty good, right? What are you thinking about there?

So the framing there is less about “Can we provide primitives that allow people to make spatial canvas apps easier?” This is something we see with TLDraw, with Excalidraw… Those are React applications. You can extend them… They have extensibility mechanisms, stuff like that. So you could take a TLDraw component and do what you want with it, and create a spatial canvas. But it’s essentially a black box, with some hooks that allow you to hook into it.

So you pretty much are constantly fighting against what this black box supports and what it doesn’t. Let’s say I want to put a video on my canvas. If TLDraw doesn’t support that, then I need to come in and add that code so I could paste the video in and play it in the canvas. And then there’s a lot of these interactions around, like sticky arrows. I think sticky arrows are one of the best innovations of this new generation of whiteboarding apps. You could just be like “There’s a connection between these two things”, and if I move one, that arrow is going to render between them, as I want it to. It’s not paper and pencil. We can actually do something with a digital whiteboard. That’s really powerful, but that interaction is completely tied into the canvas.

Usually how it works is there’s some representation keeping track of “Here’s all my rectangles and shapes, here’s all my arrows. This arrow is connected to this shape, and when this shape moves, I need to update, re-render the arrow.” There’s this very coupled connection between the two, for example.

So one of the first projects that I started working on, I dove very deep in the arrows, through the framing of this. What if we could define connection declaratively in HTML? What if HTML just had an element where you could define a connection between two other DOM elements, for example? Can we take that type of interaction and just make it available in the DOM, usable on any website, and it’s not trapped in a black box? That was the first primitive that I started working on. So that project is called Quiver, the tagline being “Your quiver of declarative web arrows”, or whatever. And essentially what it is - it’s a little toolkit for creating arrows through custom elements.

There’s all different types of ways that we could render an arrow, but the crux of that particular primitive is you can define connection between a target, which – so it’s a custom element, it has a target attribute and a source attribute, and they’re just CSS selectors. So you can define these arrows in HTML, and there is some magic that will render that arrow between those DOM elements as they move.

[00:40:04.01] And what’s important is it’s not just – the way we’re observing movement of like bounding boxes and stuff works with browser layouts. So you could have – maybe you want to define a connection between two things in a flexbox. It works with browser layout, and it also works with manual direct manipulation of dragging and dropping things. And there’s all different types of connection that we might want to define.

The most basic type of arrow is a binary relationship between two things, there’s also hyper-edges and hyper-nodes, end-to-end elements. There’s also undirected hyper-edges, which is a set, a group of things. And there’s all different types of ways that we might want to render what those arrows are. You might want a curved arrow, you might want an orthogonal edge, like right edges… Maybe you’re trying to define a connection between two text items, and maybe that is going to be rendered differently. But essentially, we’ve taken what is traditionally trapped in a spatial canvas, and taken it out, and you can use it on any website. You could use it – imagine it’s just like part of the web. Yeah.

And more recently I’ve been working on how we can make parts of a web page directly manipulatable. So it has a shape, it has a location, it has rotation, right? …traditional interactions that you have if you draw a shape on a canvas. But can we do that for any DOM element? And so that’s also a custom element, and the framing, again, is what if there was a way, just in the browser, through HTML or whatever, that allows us to make parts of a web page directly manipulatable? And that’s really important. Have you ever been reading a blog and you’re like, “Man, I really wish I could add some notes, I could rearrange things as I’m processing it”? That’s what these kinds of primitives would enable; that kind of malleability, where there’s permissionless innovation, permissionless extensibility… Like, I might be reading your blog, and I want to rearrange it. I want to define a connection between things. I want to – more importantly, and this is work that hasn’t really been done yet, but I’m starting to think about it… How can we get across same-origin security policies? So can I make multiple web pages malleable, and define a connection between them, and annotate them? And what’s important here is, again, the canvas isn’t a black box that I’m bringing in. It’s just part of the browser. It works on any web page, and you just need to register a couple of custom elements in order to do that.

So diving in a little bit… I’m looking at this Quiver, and it’s quite cool what you’re doing there. So just to flesh out a little bit - you build this custom element that essentially underneath it has a set of observers tied to these targets, that kind of report back on where their positions are… And then it’s rendering a custom, absolutely positioned SVG, if I’m understanding it correctly.
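
For the curious, the mechanism KBall describes can be sketched in a few dozen lines. This is a toy, not Quiver’s actual code: the tag name is made up, and it naively re-measures on every animation frame rather than observing layout the way Chris describes.

    // Usage: <my-arrow source="#a" target="#b"></my-arrow>
    // Resolves two CSS selectors and draws a line between the centers of
    // their bounding boxes in a fixed-position SVG overlay.
    class MyArrow extends HTMLElement {
      connectedCallback() {
        const ns = 'http://www.w3.org/2000/svg';
        this.svg = document.createElementNS(ns, 'svg');
        Object.assign(this.svg.style, {
          position: 'fixed', inset: '0', width: '100%',
          height: '100%', pointerEvents: 'none',
        });
        this.line = document.createElementNS(ns, 'line');
        this.line.setAttribute('stroke', 'currentColor');
        this.svg.append(this.line);
        this.append(this.svg);
        const tick = () => { this.draw(); this.raf = requestAnimationFrame(tick); };
        tick();
      }
      disconnectedCallback() { cancelAnimationFrame(this.raf); }
      draw() {
        const from = document.querySelector(this.getAttribute('source'));
        const to = document.querySelector(this.getAttribute('target'));
        if (!from || !to) return;
        const a = from.getBoundingClientRect();
        const b = to.getBoundingClientRect();
        this.line.setAttribute('x1', a.x + a.width / 2);
        this.line.setAttribute('y1', a.y + a.height / 2);
        this.line.setAttribute('x2', b.x + b.width / 2);
        this.line.setAttribute('y2', b.y + b.height / 2);
      }
    }
    customElements.define('my-arrow', MyArrow);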

Yeah, at the moment. I would love to figure out how to render with clip-path in the future.

It’s an extensibility thing. So imagine if you could just render an arrow with a single CSS property. It’s going to make the arrows a lot more extensible, because we’re not doing all this SVG manipulation and stuff. We’re computing an SVG path and putting it in a CSS property. And so you can come along and be like “Hey, I want a different renderer.” You could just programmatically write that CSS variable, define the background on it. So there’s a certain level of extensibility that I’d love to figure out how to do with clip-path… But that’s sort of an implementation detail that doesn’t really –

[00:43:59.08] Yeah. I mean, the thing that I like about SVG is – SVG is the neglected stepchild of HTML, in some ways. It is an incredibly powerful tool, that is often neglected. And, I mean, the barrier to entry for accessibility might be a little higher, but the – or for extensibility, sorry; but the actual extensibility of the concept goes a lot further with SVG, because you can do incredible things with that. You could even hook into one of these fancy SVG rendering libraries and bake that inside of a custom element, and now you can do all sorts of different stuff in a way that is drag and droppable.

Yeah. So I’ve started building out things like “Can we retarget arrows? Can we animate things along an arrow?” There’s some CSS properties to do that.

Yeah… Oh, this is really interesting, because I remember - just real quick, back in the day I was looking at embedding SVG… So SVGs conceptually have media queries. So I was looking at situations where you might have a resizable image, that would actually change what it was showing based on where you were. So it might have the full version, and then you shrink it, or you put it in a smaller thing and now it’s showing the mini version, or just the icon, or whatever. But it was flaky; the browser support in particular for that was flaky, and depending on which browser, if you were embedding it, it wouldn’t work super-fluidly. But with observers, you could do that explicitly, and you could suddenly bake in these really powerful components that shift their behavior based on their context.

So that’s the next step. Let’s say you can render an arrow… What if you can perform computation along that arrow? So what if I can take an arrow that I have, and I want to say, maybe – so one of the big things that I do a lot of is visual programming, live programming… There’s all different types of notation to do that. And so it’s “Is this a primitive to create live and visual programming environments on the web? Can I extend this custom element and when it gets a source and target, can it perform some computation along it?”

So one example is this concept of a propagator network, which is essentially like - just think of a spreadsheet. You have cells, and you have arrows between cells that will propagate values as they change. So I can go in and say “Hey, I have a count”, and then the arrow performs a multiplication by two, and outputs it in another cell. We can start creating these live and visual notations on the web, in HTML. So you essentially are just like – you can just connect up some DOM elements to an input. And it works with any HTML element. So you can have an input feeding into – something that I saw today on Twitter was people wanting to watch videos at two times the speed, and it’s like “Can we do that visually, in HTML?” Imagine you had an input, a number input, and you could just draw an arrow to the video and increase or decrease it… It’s like a bind. You’re defining a connection between those two elements, saying “Hey, bind the value of this input to the playback speed of the video.”
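
As a rough sketch of computation along a connection (the element IDs and the bind helper are made up for illustration), the video-speed example could look something like this:

    // Whenever the source input changes, push its value through a small
    // compute function and apply the result to the target element.
    function bind(sourceSel, targetSel, compute, apply) {
      const source = document.querySelector(sourceSel);
      const target = document.querySelector(targetSel);
      const propagate = () => apply(target, compute(source.value));
      source.addEventListener('input', propagate);
      propagate(); // run once so the two start out consistent
    }

    // e.g. <input id="speed" type="number" value="1" step="0.25"> and <video id="player">
    bind('#speed', '#player', (v) => Number(v) || 1, (video, rate) => {
      video.playbackRate = rate;
    });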

And so once you start imbuing connection in these arrows with computation, the hope is that everyday people can start creating – I don’t know how high of a ceiling it would be, but hopefully there’s a low floor for people to start imbuing behavior onto web pages, visually, and live, through defining connection between HTML elements. So that’s one of the potential – that’s one of the things I’ve been exploring.

[00:47:47.15] That’s interesting. So I can definitely see the path to using it as a toolkit to build an interactive website, or web application. I’m creating an interactive thing where you can bind, and you can do these things. I less see it – I think you’d have to have that intent when you set up the website. I’m having trouble seeing how I could impose that over the top. I am imagining like you could put these things in a…

Extension, or something?

…an extension, and then you can do stuff on there. But the challenge that I’m running into is you have to be able to understand the intent or the functioning of the website that you’re on well enough to plug into it. So there might be particular touchpoints you can tap into, but as you highlighted, behavior is very bespoke. It’s not well documented. It’s not well created. So until you have – if you land on a page that is defining behavior in an interpretable way, possibly. But otherwise it feels like it’s hard to know where to hook in properly.

Yeah. A lot of this is still new ground that I’ve been exploring. So I can’t say I have the answers. But I do imagine a web extension being the way that we can be like “Hey, I want to change the behavior of this web app.” Maybe it’s possible, maybe it’s not. I think having the option to enable something like that is the direction that I want to head in, that I want to keep exploring. But there’s certainly a tension between what’s happening behind the scenes and someone trying to override that.

That will be interesting to explore and figure out, for sure.

I would love this ability to just move elements around inside of my browser, because I could submit so many false bugs that way. Just load up your site, move something, and be like “Screenshot. Why is this thing out of place? I don’t understand. So I’m going to go fix it.” But that’s just me.

Yeah. I mean, that’s the goal. If we can make –

Jerod, you can do that already.

I know, but I’d have to pop open the dev tools, and stuff… I just want to click and drag, man. Come on. Make my life easy. Trolls need tools, too. [laughs] Speechless. I left you all speechless on that one.

I just – if you really want that capability, it’s probably not very hard to build.

Well, I don’t want it that bad, Kball. I’m just hoping that Chris builds it for me and then I can just use it. I mean, you know [unintelligible 00:50:17.19]

So last topic… I mean, I think the arrow stuff is really cool. Ironically, I literally just put an SVG-based arrow on our website last week… Which you can enjoy at changelog.com/news. There’s a little arrow there, that points and says “This is our latest issue.” And I just built a little SVG arrow, a handcrafted SVG. I made that path myself with the help of one robot. But other than that, handcrafted. I do like SVG for many use cases, especially arrows. Good stuff. Let’s talk about folk websites.

Yeah. Yeah, yeah.

Folk interfaces, not folk websites. Folk interfaces.

Yeah, I mean, it sort of builds off this concept of “What does a more malleable and extensible web look like?” and trying to lower the floor for people who aren’t professional developers - maybe they have some web experience, maybe they don’t - to create one-off interfaces for themselves, for their family, for their community, for their friend group, whatever. The point isn’t to globally scale a product, like a web app; that’d be traditionally what you think about.

Mm-hm. It’s like the small web.

It’s a small web, right?

I love the small web.

I wish it was smaller, the web in general. More small websites, less big ones.

[00:51:42.27] More small, yeah. But that’s the ethos, right? That’s a totally valid use case. And it’s always going to be more contextual to the people that are interested in it, and less – there’s going to be less assumptions of some random person across the world that’s designing and building out that product. So it sort of ties into a lot of this canvas stuff of like “Can we make the web more directly manipulatable? Can we make it easier to define connection between elements, and perform some kind of computation along that connection?”

It’s also just related to “How do we build a web page? How can we share this web page?” That’s another aspect that I’m thinking about here… Specifically through this concept of a self-modifying HTML file, of - imagine an HTML file that can save to itself. So you have this live – you can do whatever you want. You can edit with contenteditable, or manipulate a canvas. And there’s a button that allows that file to save to itself.

Jerod, it’s Nick Nisi’s blog.

I was thinking the exact same thing. He had to use PHP to do this, but if you could do it just with HTML - I mean, then more of us could allow comments that rewrite the file that hosts the comments.

Mm-hm. So we don’t want build tools. For these specific instances we don’t want build tools. We don’t want our content hosted somewhere else. We just want that file.

Is this possible? Can you make an HTML file that writes to itself?

Yeah. I have a little – it’s pretty raw right now, but I have a repo called Fulc, F-U-L-C, that has that functionality. It is technically – it works using this new… oh my God, the File System Access API. It’s only available in Chrome. But you can essentially save… You can say “Hey Chrome, I want to save a file”, and it’ll let you select the file. And then you could persist that handle, that file handle, in IndexedDB. So anytime you save that in the future, it’ll just save back to itself once you define it.
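
Here’s a minimal sketch of that trick using Chrome’s File System Access API; the save button is hypothetical, and persisting the handle in IndexedDB is left out for brevity:

    // An HTML file that can write its own current DOM back to disk.
    // Requires a secure context, a user gesture, and Chrome's
    // File System Access API (window.showSaveFilePicker).
    let fileHandle;

    async function saveSelf() {
      // Ask once which file on disk this page is; reuse the handle afterwards.
      fileHandle ??= await window.showSaveFilePicker({
        suggestedName: 'page.html',
        types: [{ description: 'HTML', accept: { 'text/html': ['.html'] } }],
      });
      const writable = await fileHandle.createWritable();
      await writable.write('<!doctype html>\n' + document.documentElement.outerHTML);
      await writable.close();
    }

    document.querySelector('#save')?.addEventListener('click', saveSelf);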

So only in local context though, yeah?

This is the tension of like the web is hosted over the internet, and there’s some very authoritarian server being like “This is the HTML file.” So the particular context here is you can take – let’s say my blog or my personal website is hosted in this fashion. You can literally just get a copy of everything. You can make your own edits and copy it, and that file is self-contained. It’s just an HTML file. Ideally, everything is inlined, maybe besides pictures or whatever… And so it’s a really portable way to –

What if also you could modify it and it would update it in your local storage, and so your version of the actual website when you went there was custom to you?

That would be awesome, right? …if there’s ways to overwrite someone…

The only challenge is then if you update it, do I get some of the new pieces? Is there a merge functionality?

Yeah, I haven’t explored that yet, but that’d be really cool.

This would help me do my long trolls, because if I move that element and I screenshot it, fine. But if they want me to show it to them on my machine, I have to reload the page… If it would just reconstruct what I had done, then I could prove that it’s a real thing.

Once again, you could do this in a browser extension, almost certainly.

Yeah. No, for sure. And so yeah, that’s one of those additional primitives that I feel like we – again, it’s not meant for professional developers, or scaling applications, but what it’s meant for is like it’s meant for you, and the people that you want to build these one-off web apps or websites for. Just ways to simplify that; you don’t need any tooling, and all the authoring and editing happens live in the browser.

Increasingly, I’m seeing people do this just with chat LLM stuff. My neighbor – his seven-year-old wanted to build a website. And they went to ChatGPT, and his seven-year-old described in language what he wanted, and got a functioning HTML page that was designed, and they were able to tweak it. And if you do it in Claude, you can actually see it side by side, because it’ll show the artifact.

[00:56:19.01] Yeah. No, that’s for sure.

I was thinking along the same lines, because my wife has got obsessed with the DALL-E implementation inside ChatGPT, where it will just generate images for her… And she just – she’s a very visual, graphic person, but never has done the training of graphic design, you know? But now she can just describe what she is envisioning, and then it provides a close approximation, and she can say “No, show me seven more that look more like this.” And eventually, it lands on something that she’s happy with. And she’s just over the moon about this. And I’m thinking the same thing for websites. That’s why my eyes lit up when Kball said that, because it’s like “Yes, just allow me to describe what I’m interested in”, and that’s empowerment right there.

Now, Kball, can you take that from inside of Claude or wherever it is, and host it somehow, directly onto the web, so that you can share it?

I think that’s a good question. I don’t know that that exists today, though I think there’s a variation of this that should show up. I don’t know if y’all have played at all with WebSim. I think I referenced it in a previous one, but it’s like –

Yeah, we loaded up my review of TypeScript website. Yeah, yeah, yeah.

Live-generated websites.

This is a cool thing.

Something that probably doesn’t exist, or I haven’t seen yet, but should exist, which is essentially something in between, where you get this thing, but then you’re able to have a conversation to manipulate it. And you can see the beginnings of this in Claude’s artifacts, where you can render out a webpage, it’ll show it to you and you say “Oh no, swap this and that, change this around”, and they’ll render you the new version, but it doesn’t have the hosting piece, because it’s just there, and then you have to figure out hosting. I think it’s probably not that many steps further to get a quote-unquote no code, conversationally generated, live website.

Yeah. It’s like the new GeoCities, only you just say what you want, and it just puts a website up for you. I mean, it’s so cool.

One of the things I’ve been meaning to explore – I don’t know what version, but in Chrome there’s an experimental flag for… They’re essentially embedding some LLM in the browser, and it’ll be like window.prompt, or whatever. So imagine if you could just prompt – you’re not generating something side by side, you’re prompting in itself. And then imagine there’s some affordances where you could modify that existing website that you have already written in itself… Because it knows itself, you’re in that website and you can just be like “Hey, I need you to update this with some additional functionality, or redesign yourself.” And it could take itself, pass it to the prompt, and then update itself with the response from that prompt. That’s the kind of things that you can do here. And what’s important too is it’s a file that you own, or at least you can own if you save it. I think there’s something really – I don’t know, I think there’s something really powerful about that.

Well, the challenge there is the authoritarian web servers, isn’t it? Because everybody likes to create something new, and as soon as you create it - by the way, as soon as DALL-E comes up with an image that she likes, she can just AirDrop it onto someone else’s phone. It’s just an image, so now you’re just file sharing. But as soon as you have an HTML file that you really like, and you’re a normal person, you ask the next question of “Well, how can I share this with the world?” And it’s like “Oh, things just got a whole lot more complicated.” For us, it’s easy. It’s like “Well, you just drop it on your S3 bucket”, and they’re like “S3, excuse me? What’s that?” And of course, there are platforms - the old school GeoCities reference - there’s things like Glitch, and other people trying to provide this. But if there was a seamless way of saying “I have now authored my own HTML file that I have here on my iPhone or my laptop, and I want to just single-click share it with the world.” Or no clicks. Just like “Okay, share this now.” That I think is something that is missing, unless it’s out there and we haven’t heard of it yet. But someone should build that. Come on, Chris. You need a job. Go build that sucker. [laughter]

[01:00:15.13] Oh, gosh… You’re giving me ideas… Yeah, I mean, the closest thing I could think of is Netlify’s drag and drop website. You just drop those files and it will publish it for you. But that’s a little different than what you’re talking about.

Yeah. I mean, there’s 17 different ways of getting that hosted on the internet somehow, and none of them are easy for people who aren’t like us. There’s just too much friction there. And so somebody who solves that friction, I think, unlocks a lot of potential, because so many people have creative ideas and taste, but they lack the skill to bring it into the world. That’s the barrier. Just like with photography; it used to be you have to learn all of the tools, all of the cameras, the lighting, the this, the that… The flash, shutter speed, all this kind of stuff in order to capture what you’re envisioning. And we’ve just been through progress, just lowering and lowering and lowering that bar, so that more people can take pictures that they like. That’s what at the end of the day you want. And finally, we get to a point where it’s like, anybody can take a pretty good picture. And that’s what we need with websites - anybody can make a pretty good website. Imagine if we could say that there’d be so many. The small web would be huge. There’d be so many small websites, I think.

You don’t feel the existing drag and drop no code tools do that?

Maybe they do and they’re just not mass market enough.

I mean, they feel clunky as heck to a developer. But if you want to set up a Squarespace or something like that, the barrier is shockingly low.

Yeah, and you’re talking 15 bucks a month. That’s a high barrier, isn’t it, Kball? I mean, that stops a lot of people in their tracks. I mean, fine if I’m going to start my own business. But not fine if I want to show my friends that we’re having a soccer party next week. I want a one-off soccer website that just… I don’t know.

It’s super-cheap, yeah. I mean, the question is what’s the incentive to build it then? If we make it so cheap, how is somebody – I mean, it could be a hobbyist project, but the hosting is going to cost something.

It’s just HTML, man. It’s just HTML. Come on, Kball, stand up a server.

CDNs are cheap these days.

I think you could find a middle ground, hopefully. I mean, obviously, you need to make the economics work. Nothing’s free out there. But you could find a middle ground, maybe.

I mean, a Netlify or someone else who has the upsells available, it could make sense. Vercel should build this. Netlify should build this. It’s the easy on-ramp to their tools… Though there is a question of “Do these free users or very cheap users - are they actually going to upsell to those things, or they just want the –”

Free hosting…

[01:02:57.03] Yeah. It’s actually – I think, to step back on this for a second, all of the capabilities exist in the world. Building this is not a massively hard problem. The question is, what’s going to fund it to be created? That might be a hobby project, but then even more importantly, maintain it over time. I think 80% of software expenses is maintenance. So how does this thing stay maintained over time? What’s paying for that to happen? I’m not sure the economics are there for it, unless you can get somebody who’s really excited about it. I mean, maybe the JSParty audience wants this to exist, and they’ll all throw in 50 bucks.

There you go.

If everybody threw in 50 bucks, that’s a lot of money to build this thing.

Right. Easier than that would be one sugar daddy, wouldn’t it? I mean, that’s what most people do, they just find one –

Are you volunteering?

Come on, I’m no sugar daddy. [laughter] I need one. But if you’re out there, if you’re a wayward billionaire who’s made their living and just loves JSParty, and would like to fund a platform for normal people to upload their HTML files seamlessly, reach out.

To generate and upload, right?

Oh, yeah.

Just to have this conversation, boom, boom, boom, there’s my website.

Right. It’s WebSim AI, but with real websites. WebSim is fake websites. They kind of do stuff, but they’re not for real.

It’s WebSim, plus I can iterate and modify it, probably conversationally, plus I can publish it to the web.

There you go. There’s your spec.

And I guess another way you could potentially monetize is you hook in with domains, and you sell domains, or you get a referral from selling domains.

True, true, true.

See, Chris, there’s lots of ways this dog can hunt, man, if you build it… I’m not sure if they’ll [unintelligible 01:04:44.13] or not, but it sounds cool.

Thanks. Yeah, I appreciate it. But that’s the vision, at least. We’ll see how it plays out.

Very cool. Anything else you want to talk about before we call it a show?

I mean, that’s the [unintelligible 01:04:58.06] but interconnected things that I’ve been thinking about and working on. So yeah, I appreciate the time and chatting with both of you. And I appreciate the ideas. Maybe I’ll let you know tomorrow if that hosting service will become a thing.

I was going to say, you’d build it that fast… We talked about it this morning, and by tomorrow, it’s live. Let’s go.

He’s got free time. He’s got free time. Very cool. What’s the easiest way for folks to connect with you? I did find your Quiver repo. I did not find the editable HTML one, so hook us up with that.

Cool. Yeah, they’re all on my GitHub. The easiest way is probably Twitter. My name is @ChrisShank23, I think.

Awesome.

Yeah, definitely feel free to DM or tweet at me. I’m happy to converse.

Love it. Kball, any final words before I hit that outro?

Do it.

Alright. On behalf of Kball, our guest, Chris Shank, and all the crazy things you can do when you can edit your HTML yourself and save it to yourself. It could be a cool world out there. I’m Jerod, this is JS Party, and we’ll talk to you all on the next one.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
