Improving GitHub code search
GitHub’s all-new code search is in technology preview (sign up here for early access), and this 5-minute video does a great job of explaining how it works and why you’ll probably want to try it ASAP.
Today we’re joined by Jessica Lord, talking about the origins of Electron and her boomerang back to GitHub to lead GitHub Sponsors. We cover the early days of Electron before Electron was Electron, how she advocated to turn it into a product and make it a framework, how it’s used today, why she boomeranged back to GitHub to lead Sponsors, what’s next in funding open source creators, and we attempt to answer the question “what happens to open source once it’s funded?”
What if we could contribute to a project without having to create a pull request? Imagine if all we had to do was create an issue. Well, with GitHub Actions and the new GitHub Issue Forms, we can. Let’s get to it!
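Here’s a minimal sketch of how that can work, assuming a workflow passes the issue body to a Node script via an environment variable. The file path, field name, and parsing are hypothetical illustrations, not the post’s actual code:

```ts
// Hypothetical sketch: turn a filled-in GitHub Issue Form into a commit.
// Assumes an Actions workflow runs this with GITHUB_TOKEN and GITHUB_REPOSITORY
// set, and passes the issue body in an ISSUE_BODY environment variable.
import { Octokit } from "@octokit/rest";

const [owner, repo] = (process.env.GITHUB_REPOSITORY ?? "/").split("/");
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Issue Forms render each input as a "### Label" heading followed by its value.
// "Link" is a made-up field name for this example.
function fieldValue(body: string, label: string): string | undefined {
  const match = body.match(new RegExp(`### ${label}\\s+([^\\n]+)`));
  return match?.[1]?.trim();
}

async function main() {
  const link = fieldValue(process.env.ISSUE_BODY ?? "", "Link");
  if (!link) throw new Error("Issue form is missing the Link field");

  const path = "links.md"; // hypothetical file the workflow appends to

  // Fetch the current file (if any) so the new entry can be appended to it.
  let existing = "";
  let sha: string | undefined;
  try {
    const { data } = await octokit.rest.repos.getContent({ owner, repo, path });
    if (!Array.isArray(data) && "content" in data) {
      existing = Buffer.from(data.content, "base64").toString("utf8");
      sha = data.sha;
    }
  } catch {
    // File doesn't exist yet; it will be created below.
  }

  await octokit.rest.repos.createOrUpdateFileContents({
    owner,
    repo,
    path,
    message: "Add link submitted via issue form",
    content: Buffer.from(existing + `- ${link}\n`).toString("base64"),
    ...(sha ? { sha } : {}),
  });
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```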
I created a GitHub action that auto-formats Changelog’s episode transcripts. Here’s how.
Thomas Dohmke will be GitHub’s next CEO. He has served as GitHub’s Chief Product Officer and, according to his LinkedIn bio, led the acquisition of GitHub at Microsoft as well as the acquisitions of Dependabot, Semmle, and npm.
This morning, I shared the following post with Hubbers in response to Nat’s announcement about his next adventure. I am thrilled to take on the role of CEO to build the next phase of GitHub for our global community of software developers.
Exiting as CEO, Nat Friedman shared his thanks in a post titled “Thank you, GitHub”.
This morning, I sent the following post to the GitHub team. TL;DR: I’m moving on to my next adventure, and Thomas Dohmke (currently Chief Product Officer) will be GitHub’s next CEO. I will become Chairman Emeritus, which fulfills my lifelong ambition of having a title in Latin. My heartfelt thanks to every Hubber and every developer who makes GitHub what it is, every day.
On this special edition of The Changelog, we’re talking with Cory Wilkerson, Senior Director of Engineering at GitHub, about GitHub Codespaces. For years now, the possibility of coding in the cloud seemed so close, yet so far away for a number of reasons. According to Cory, the raw ingredients to make coding in the cloud a reality have been there for years. The challenge has really been how the industry thinks, and we are now at a place where skepticism about cloud-based workflows is “non-existent.”
After 15 months in preview, GitHub not only announced the availability of Codespaces for Teams and Enterprise — they also showcased their internal adoption, with 600 of their 1,000 engineers using it daily to develop GitHub.com.
On this episode, Cory shares the full backstory of that journey and a peek into the future where we’re all coding in the cloud.
This week we’re bringing JS Party to The Changelog — Nick Nisi and Christopher Hiller had an awesome conversation with Luis Villa, co-founder and General Counsel at Tidelift. They discuss GitHub Copilot and the implications of an AI pair programmer and fair use from a legal perspective.
The news is in the headline on this one, but here’s a bit more meat from the article:
Using rigorous and detailed scientific analysis, the study concluded that upon testing 1,692 programs generated in 89 different code-completion scenarios, 40 percent were found to be vulnerable.
On today’s show Adam is joined by John Nunemaker (an old friend). Some of you listening might remember John’s appearance on The Changelog #11, which was basically forever ago. Or his company Ordered List — they made Gauges, Harmony, and Speaker Deck, which was quite popular in its time — so much so that they attracted the attention of Chris Wanstrath, one of the co-founders of GitHub, who acquired Ordered List. The rest, as they say, is history. Today, John and Adam go back through that history to see what it was like to be acquired by GitHub and how that single choice forever changed his life.
Luis Villa of Tidelift joins the show to discuss GitHub Copilot and the implications of an AI pair programmer from a legal perspective.
The FSF is funding white papers on “philosophical and legal questions around Copilot”. In their post announcing the fund, Donald Robertson states:
The Free Software Foundation has received numerous inquiries about our position on these questions. We can see that Copilot’s use of freely licensed software has many implications for an incredibly large portion of the free software community. Developers want to know whether training a neural network on their software can really be considered fair use. Others who may be interested in using Copilot wonder if the code snippets and other elements copied from GitHub-hosted repositories could result in copyright infringement. And even if everything might be legally copacetic, activists wonder if there isn’t something fundamentally unfair about a proprietary software company building a service off their work.
One thing is for sure: there are many open questions that need answering. How we (as a community / industry) go about answering those questions is much less clear. But it’ll probably play out on blogs, on forums, in GitHub Issues, and even in courtrooms over the next decade.
One of our awesome Changelog++ members scratched their own itch:
When you upgrade to Changelog++ you’re given access to ad-free versions of episodes; however, they’re only available in one giant bucket feed instead of through individual show feeds. Though only around 5 new podcast episodes are published weekly, if you’re coming in as a new listener you’ll have a long backlog of over one thousand episodes. It’s easier to sift through older episodes when they’re organized by show, so that’s what this project provides: individual show feeds.
I love grassroots initiatives like this, but it’s motivating me to bring Changelog++ onsite so we can bake the functionality right into our platform…
Commit groups sound interesting to me. Anyone reading this familiar with Git innards? Is this doable?
You know the “group” facility of vector graphics programs? You draw a couple of shapes, you group them together, and then you can apply transformations to the entire group at once, operating on it as if it were an atomic thing. But when need arises, you can “ungroup” it and look deeper.
I’d love to see that same idea applied to Git commits. In Git, a commit group might just be a named and annotated range of commits: feature-a might be the same as 5d64b71..3db02d3. Every Git command that currently accepts commit ranges could accept group names. I envision groups to have descriptions, so that git log, git blame, etc. could take --grouped or --ungrouped options and act appropriately.
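Nothing like this exists in Git today, but the idea can be approximated in userland. Here’s a rough sketch (my own illustration, not part of the proposal) that stores a name-to-range mapping in a JSON file and resolves group names before shelling out to git:

```ts
// Rough userland approximation of "commit groups": named commit ranges
// stored in a JSON file (the .git/commit-groups.json path is made up here),
// resolved to real ranges before being passed to git.
import { execFileSync } from "node:child_process";
import { readFileSync, writeFileSync, existsSync } from "node:fs";

const GROUPS_FILE = ".git/commit-groups.json";

type Groups = Record<string, string>; // e.g. { "feature-a": "5d64b71..3db02d3" }

function loadGroups(): Groups {
  return existsSync(GROUPS_FILE)
    ? JSON.parse(readFileSync(GROUPS_FILE, "utf8"))
    : {};
}

// `group add feature-a 5d64b71..3db02d3` records a named range.
function addGroup(name: string, range: string) {
  const groups = loadGroups();
  groups[name] = range;
  writeFileSync(GROUPS_FILE, JSON.stringify(groups, null, 2));
}

// `group log feature-a` expands the name and shells out to `git log <range>`.
function logGroup(name: string) {
  const range = loadGroups()[name];
  if (!range) throw new Error(`Unknown commit group: ${name}`);
  const out = execFileSync("git", ["log", "--oneline", range], { encoding: "utf8" });
  process.stdout.write(out);
}

const [, , cmd, ...args] = process.argv;
if (cmd === "add") addGroup(args[0], args[1]);
else if (cmd === "log") logGroup(args[0]);
else console.error("usage: group add <name> <range> | group log <name>");
```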
Kathy Korevec has been putting a lot of thought into documentation as part of her work at GitHub:
Wouldn’t it be great if the docs knew that you were writing a Python app on a Windows machine and that you preferred watching videos instead of reading through text? I want you to find the answer to your questions in the docs, easily and efficiently. When you’re stuck on a problem and you turn to the docs, there’s a moment of magic as you find the solution, try it out and it works. In that moment you become unblocked, you learn something new and you can move on to keep building your application.
In this post, she outlines 10 guiding principles she developed after speaking with hundreds of developers about their struggles with documentation. She then shares how she’s putting those principles into action in/around GitHub. Good stuff.
The benefits of such a setup are numerous, especially for small sites and side projects:
Hosting a static website is much easier than a “real” server - there are many free and reliable options (like GitHub Pages, GitLab Pages, Netlify, etc), and it scales to basically infinity without any effort.
The how is also super interesting:
So how do you use a database on a static file hoster? Firstly, SQLite (written in C) is compiled to WebAssembly. SQLite can be compiled with emscripten without any modifications, and the sql.js library is a thin JS wrapper around the wasm code.
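For a concrete starting point, here’s a minimal sketch of querying SQLite in the browser with sql.js, using an in-memory database and sql.js’s published wasm build; the post’s actual trick (fetching pages of a remote database file over HTTP) builds on top of this foundation:

```ts
// Minimal sql.js sketch: an in-memory SQLite database running in the browser.
// The CDN URL points at sql.js's published wasm build; adjust as needed.
import initSqlJs from "sql.js";

async function demo() {
  const SQL = await initSqlJs({
    locateFile: (file) => `https://sql.js.org/dist/${file}`,
  });

  const db = new SQL.Database(); // empty in-memory database
  db.run("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);");
  db.run("INSERT INTO posts (title) VALUES ('Hello'), ('World');");

  // exec returns an array of { columns, values } result sets.
  const [result] = db.exec("SELECT id, title FROM posts ORDER BY id");
  console.log(result.columns, result.values);

  db.close();
}

demo();
```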
There’s more to the story, and the resulting solution is also open source.
The feed is hosted on GitHub Pages (which means it’s public to all) and is static until it gets rebuilt. Building is done periodically via a GitHub Action; configuration is via a YAML file (it’d be cooler if you could import an OPML file instead). Even if it’s not something you’d use, I think this project is interesting for two reasons:
This is a solid move in protecting people’s privacy, especially for those completely unaware. Sounds like WordPress is considering the same and a helpful Hacker News commenter typed up how to accomplish it on a bevy of popular web servers.
Related: I just deployed this to changelog.com as well. 😎
In this post, Tomas Wróbel lays out 10 potential drawbacks of typical PR flows:
Samanta de Barros:
If, like me, you find that configuring GitHub Actions is not your thing and you want to try something before actually pushing it to GitHub (and seeing the effects in real life), follow this step-by-step guide to running your GitHub Actions on your own computer.
I wouldn’t advise obsessing over your GitHub stats, but if you’re going to do it anyway… might as well do it with this rad looking terminal UI! 😆
Just add `1s` after `github` and press Enter in the browser address bar for any repository you want to read. For example, VS Code’s repo: https://github1s.com/microsoft/vscode
Nifty!
Daniel Stenberg answers critics who believe curl shouldn’t be hosted on GitHub (for various reasons) by asking himself the question: what happens if GitHub “takes the ball and goes home”?
No matter which service we use, there’s always a risk that they will turn off the light one day and not come back – or just change the rules or licensing terms that would prevent us from staying there. We cannot avoid that risk. But we can make sure that we’re smart about it, have a contingency plan or at least an idea of what to do when that day comes.
Whether or not you agree with Daniel’s GitHub-related conclusions, this statement is 💯% true and we should all be doing similar analyses before adopting any 3rd-party offering.
Okay this is pretty stinkin’ clever.
- GitHub Actions is used as an uptime monitor
- Every 5 minutes, a workflow visits your website to make sure it’s up
- Response time is recorded every 6 hours and committed to git
- Graphs of response time are generated every day
- GitHub Issues are used for incident reports
- An issue is opened if an endpoint is down
- People from your team are assigned to the issue
- Incident reports are posted as issue comments
- Issues are locked so non-members cannot comment on them
- Issues are closed automatically when your site comes back up
- Slack notifications are sent on updates
- GitHub Pages are used for the status website
- A simple, beautiful, and accessible PWA is generated
- Built with Svelte and Sapper
- Fetches data from this repository using the GitHub API
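The core check is simple enough to sketch. This is only the gist (ping an endpoint, record the response time, open an issue when it fails), not Upptime’s actual code, and the site URL and repository names are placeholders:

```ts
// Simplified sketch of an Upptime-style check: ping a site, record the
// response time, and open a GitHub issue if the request fails.
// Runs on Node 18+ (global fetch); URL and repository are placeholders.
import { Octokit } from "@octokit/rest";
import { appendFileSync } from "node:fs";

const SITE = "https://example.com";
const [owner, repo] = ["your-org", "status"];
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function check() {
  const started = Date.now();
  let up = false;
  try {
    const res = await fetch(SITE);
    up = res.ok;
  } catch {
    up = false;
  }
  const ms = Date.now() - started;

  // Append the measurement to a history file (directory assumed to exist)
  // that the workflow then commits to git.
  appendFileSync(
    "history/response-time.csv",
    `${new Date().toISOString()},${ms},${up ? "up" : "down"}\n`
  );

  if (!up) {
    // The real project also assigns, locks, and auto-closes issues when the
    // site recovers; this only shows the "open an incident" part.
    await octokit.rest.issues.create({
      owner,
      repo,
      title: `🛑 ${SITE} is down`,
      body: `Automated check failed after ${ms}ms.`,
      labels: ["status", "down"],
    });
  }
}

check();
```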
In this episode we discuss Mislav’s experience building not one, but two GitHub CLIs - hub and gh. We dive into questions like, “What led to the decision to completely rewrite the CLI in Go?”, “How were you testing the CLI, especially during the transition?”, and “What Go libraries are you using to build your CLI?”
Alex Ellis:
This post by a community member from India shows how to use GitHub Actions to build, push, and deploy to OpenFaaS anywhere - whether in the cloud or on an RPi at home. The best part is that this is a fully multi-arch setup, and uses the new Docker buildx with GitHub Actions.