Check your site against this to determine your accessibility compliance with the WCAG.
Duplicates taking up tons of space on your home NAS?
fclones quickly identifies duplicates, even when there are tens of thousands of files being scanned over the network:
fclones treats your data seriously. You can inspect and modify the list of duplicate files before removing them. There is also a --dry-run option that can tell you exactly what changes on the file system would be made.
Also check out the algorithm used to detect duplicates.
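For reference, a typical fclones session looks something like this (the directory path is a placeholder; check `fclones --help` for the full set of options):

```shell
# Scan for duplicate files and write the groups to a report
fclones group ~/data > dupes.txt

# Preview exactly what would be removed, without touching anything
fclones remove --dry-run < dupes.txt

# Once satisfied, remove the redundant copies for real
fclones remove < dupes.txt
```

The two-step group/remove split is what makes inspection possible: the report is plain text you can edit before feeding it back in.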
Let’s say you’re on the go and you land on a particularly impressive website. You’d love to peek underneath the covers, but getting at the developer tools on a phone is a pain.
Instead: bookmark this site, copy/paste the URL, and voilà! 💁♀️
Another jq alternative we’ve discovered this week! (first here)
jq is hard to use. There are alternatives like zq, but they still make you learn a new programming language. I’m tired of learning new programming languages.

gq is not optimized for speed, flexibility or beauty. gq is optimized for minimal learning/quick usage. gq understands that you don’t use it constantly, you use it once a month and then forget about it. So when you come back to it, gq will be easy to relearn. Just use the built-in library just like you would any other Go project and you’re done. No unfamiliar syntax or operations, or surprising limits. That’s it.
I don’t know if Go is a great fit for this use-case, but if you already know it well… makes sense.
If you’ve found the (excellent) jq tool for working with JSON a bit unwieldy… check out zq and see if you like its API any better. I wouldn’t put too much weight on the “faster” aspect, though:
We will cover zq’s performance in a future article, but to cut to the chase here, zq is almost always at least a bit faster than jq when processing JSON inputs
“Almost always at least a bit faster” is not something you’re likely to notice in practice.
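As a quick taste, querying a JSON file with zq looks roughly like this (the aggregation syntax here is my recollection of zq’s query language, so treat it as a sketch and check the docs):

```shell
# Count the records in a JSON file
zq 'count()' events.json

# Filter, then aggregate, in one pipeline
zq 'status==200 | count() by path' events.json
```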
Still in beta, but Fleet has a lot of promise. It boasts compile times up to 5x faster than
cargo. Here’s how:
Fleet works by optimizing your builds using existing tooling available in the Rust ecosystem, including seamlessly integrating sccache, lld, zld, ramdisks (for those using WSL or HDD’s) et al.
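In principle that makes it a drop-in swap for cargo. The subcommand names below are assumptions based on the project pitching itself as a cargo replacement, so verify against the actual CLI:

```shell
# Assumed: fleet mirrors cargo's familiar subcommands
fleet run
fleet build --release
```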
From the engineering team at Bloomberg:
It can track memory allocations in Python code, in native extension modules, and in the Python interpreter itself. It can generate several different types of reports to help you analyze the captured memory usage data. While commonly used as a CLI tool, it can also be used as a library to perform more fine-grained profiling tasks.
It has a lot of nice outputs so you can grok what’s going on.
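The CLI workflow is pleasantly short. Something like this should get you an interactive flame graph (the script and output names are placeholders):

```shell
# Run a script under the profiler, capturing allocations to a file
python3 -m memray run -o output.bin my_script.py

# Render the captured data as an interactive HTML flame graph
python3 -m memray flamegraph output.bin
```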
I’ve been using asdf on my new machine to manage Elixir/Erlang and Node versions and it works like a charm. Highly recommend! There are some complaints in the comments of this post about it being slow, but I haven’t had that problem yet.
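If you haven’t tried it, the day-to-day commands are minimal, e.g. for Node:

```shell
# Add the plugin for the runtime you want to manage
asdf plugin add nodejs

# Install the latest version and make it your default
asdf install nodejs latest
asdf global nodejs latest

# Per-project pinning writes a .tool-versions file
asdf local nodejs latest
```

The `.tool-versions` file is the nice part: commit it and everyone on the project gets the same runtime versions.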
For now, [x]it! is merely a file format specification.
You don’t have to use a specific tool for .xit files; basic operations like creating items or checking them off can be done in any text editor. Tools can make the experience more convenient, though, and provide support for common use cases.
The cool thing is anybody/everybody can now develop integrations for their favorite tools.
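Here’s a tiny `.xit` file using the checkbox states I could confirm from the spec (open, checked, ongoing, obsolete), plus a plain-grep way to see what’s done:

```shell
# Write a small .xit file; it's just text
cat > todo.xit <<'EOF'
[ ] an open item
[x] a checked item
[@] an item that's in progress
[~] an obsolete item
EOF

# Any standard tool works on it, e.g. count completed items
grep -c '^\[x\]' todo.xit
```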
Daniel Schwarz runs down 6 web browser hacks to aid your web development workflow.
- Activating design mode
- Applying a background to everything
- Simulating events
- Setting cookies
- Toggling classes
- Color widget bookmark
EaseProbe does 3 kinds of work:
- Probes via HTTP, TCP, arbitrary shell commands, and a set of native clients.
- Notifications via email, Slack, Discord, Telegram, or log file.
- Reports delivered as daily, weekly, or monthly SLA summaries.
Max Howell, creator of Homebrew, has gone back to his notes on brew2 to apply web3 concepts to help “distribute value to open source.” He’s calling this new brew tea.
Tools like Homebrew lie beneath all development tools, assisting developers to actually get development done. We know the graph of all open source, which means we’re uniquely placed to innovate in interesting and exciting ways. This is exactly what tea will do. We’re taking our knowledge of how to make development more efficient and throwing innovations nobody has ever really considered before.
With plans to move the package registry on-chain, Max lays out numerous advantages that follow from the “inherent benefits of blockchain technology”:
- Packages will be immutable (no more left-pad incidents)
- Packages will always be available (we’ll use decentralized storage)
- Releases will be signed by the maintainers themselves (rather than a middleman you are told you can trust)
- Tools can be built to fundamentally verify the integrity of your app’s open source constitution
- Token can flow through the graph
Max says “token flowing is where things get really interesting,” and goes on to say “with our system people who care about the health of the open source ecosystem buy some token and stake it.”
(Thanks to Omri Gabay for sharing this first in our community Slack)
ZFS has become very portable in recent years, supporting six operating systems: FreeBSD, Illumos, Linux, MacOS, NetBSD, and Windows. But what if you wanted to create a ZPool compatible with all of them? Which options and ZFS features should you choose?
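OpenZFS 2.1 added a `compatibility` pool property aimed at exactly this problem. The feature-set name and disk names below are placeholders; look in `/usr/share/zfs/compatibility.d/` on your system for the sets actually available:

```shell
# Create a pool restricted to a feature set that other
# OpenZFS implementations understand
zpool create -o compatibility=openzfs-2.0-linux tank mirror /dev/sda /dev/sdb

# Inspect which features ended up enabled
zpool get all tank | grep feature@
```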
If you haven’t yet, check out The Changelog #475 where I talk with Matt Ahrens (co-founder of the ZFS project) about making the ZFS file system.
Wireshark is a seriously cool piece of software for packet sniffing and analysis. Why might you want to use it on yourself?
This opens up possibilities to not only reverse engineer web app private APIs in a deeper way, but also to do the same kind of research against desktop apps for purposes such as data scraping, automation, vulnerability research and privacy analysis.
Schema changes are usually critical operations to perform on a high-volume database. One thing off, and you are looking at an outage.
pg-osc makes it easy and safe to run any
ALTER statement on a production database table with no locking.
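Usage looks roughly like the following (flag names recalled from the project’s README; run the tool with `--help` to confirm, and the table/column here are made up):

```shell
# Rewrite the table in the background, then swap it in with minimal locking
pg-online-schema-change perform \
  --alter-statement 'ALTER TABLE books ADD COLUMN purchased boolean DEFAULT false' \
  --dbname mydb \
  --host localhost \
  --username admin
```

Under the hood, tools in this family create a shadow table, backfill it, keep it in sync via triggers, and then atomically swap names, which is why the original table is never locked for long.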
Say goodbye to learning new tools just to work with a different data format.
Dasel uses a standard selector syntax no matter the data format. This means that once you learn how to use dasel you immediately have the ability to query/modify any of the supported data types without any additional tools or effort.
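For instance, the same selector works whether the file is JSON, YAML, or TOML (exact flags vary a bit between dasel versions, so treat these as a sketch):

```shell
# Read a value out of a YAML file
dasel -f config.yaml '.database.host'

# The identical selector against the JSON equivalent
dasel -f config.json '.database.host'

# Write a value back in place
dasel put string -f config.yaml '.database.host' 'db.example.com'
```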
This project started as an experiment to explore the generics implementation. It may look like Lodash in some aspects. I used to code with the awesome go-funk package, but it uses reflection and therefore is not type-safe.
As expected, benchmarks demonstrate that generics will be much faster than implementations based on the reflect stdlib package. Benchmarks also show similar performance to pure
The purpose of this list is to track and compare tunneling solutions. This is primarily targeted toward self-hosters and developers who want to do things like exposing a local webserver via a public domain name, with automatic HTTPS, even if behind a NAT or other restricted network.
*We spoke with Alan Shreve about this decision back when he made it, if you’re curious about his thinking.
If you ssh a lot, you may get some serious value out of this suite of (currently 8) related tools and shortcuts.
ssh-ping checks if a host is reachable using ssh_config,
ssh-diff diffs a file over SSH, etc.
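For example (invocations are my best reading of the tools’ names and descriptions; check each tool’s --help before relying on the argument order):

```shell
# Is the host reachable over SSH? Honors your ~/.ssh/config aliases
ssh-ping myserver

# Compare a local file against the same path on the remote host
ssh-diff /etc/hosts myserver
```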
Garage is a distributed storage solution that automatically replicates your data across several servers. Garage takes into account the geographical location of servers, and ensures that copies of your data are located at different locations when possible for maximal redundancy, a unique feature in the landscape of distributed storage systems.
It has an S3-compatible API and can be used as a storage backend for things like NextCloud, Matrix, and Mastodon. It’s being built by a non-profit in France that is “working to promote self-hosting and small-scale hosting.” Why do they do this?
self-hosting means running our own hardware at home, and providing 24/7 Internet services from there. We have many reasons for doing this. One is because this is the only way we can truly control who has access to our data. Another one is that it helps us be aware of the physical substrate of which the Internet is made: making the Internet run has an environmental cost which we want to evaluate and keep under control. The physical hardware also gives us a sense of community, calling to mind all of the people that could currently be connected and making use of our services, and reminding us of the purpose for which we are doing this.
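Because the API is S3-compatible, ordinary S3 tooling works against it. A sketch with the AWS CLI (the endpoint and bucket name are assumptions; point `--endpoint-url` at wherever your Garage node serves its S3 API):

```shell
# Point any S3 client at your Garage node's S3 endpoint
aws --endpoint-url http://localhost:3900 s3 mb s3://backups
aws --endpoint-url http://localhost:3900 s3 cp ./photo.jpg s3://backups/
aws --endpoint-url http://localhost:3900 s3 ls s3://backups
```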
This major undertaking was discussed by Matt Ahrens on our recent ZFS episode. How it works:
The feature reflows existing data, essentially rewriting it onto a new arrangement of disks – meaning the original group plus a newly added disk. In so doing, a new adjacent chunk of free space is created at the end of the logical RAID-Z group and thus at the end of each physical disk.
With a disclaimer:
While all capabilities of this feature have been implemented and all tests so far have been passed, there are still a few loose ends to tie up. Specifically, there is some code cleanup to do, some verbose logging to remove, some code documentation to write, and similar relatively minor tasks. We aim for this to be integrated by Q3.
It’s still in closed beta, but this looks like a really cool environment for data scientists and other folks who code to accomplish other goals, rather than practicing code as a craft. One cool thing you can do is take your Jupyter notebooks and convert them to PyFlow graphs (and vice versa).