Even if you start out small and later need to scale up, as long as your web application can run on the same machine as the database, which it can 99% of the time, you can just upgrade the hardware to a beefier machine and keep business as usual.
The only time you need to consider a client-server setup is…
In my 15+ years of web development, there are very few things I can say are unequivocally a good idea. It almost always does depend.
Storing timestamps instead of booleans, however, is one of those things I can go out on a limb and say it doesn’t really depend all that much. You might as well timestamp it. There are plenty of times in my career when I’ve stored a boolean and later wished I’d had a timestamp. There are zero times when I’ve stored a timestamp and regretted that decision.
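Here's a minimal sketch of the idea in Go (the field and type names are made up): store a nullable timestamp and derive the boolean whenever you need it.

```go
package main

import (
	"fmt"
	"time"
)

// Post models a record where "published or not" is stored as a nullable
// timestamp rather than a boolean. A nil PublishedAt means "not published";
// a non-nil value tells you both that it happened and when.
type Post struct {
	Title       string
	PublishedAt *time.Time // instead of Published bool
}

// Published derives the boolean on demand, so callers that only care
// about yes/no lose nothing.
func (p Post) Published() bool {
	return p.PublishedAt != nil
}

func main() {
	now := time.Now()
	p := Post{Title: "Hello", PublishedAt: &now}
	fmt.Println(p.Published(), p.PublishedAt)
}
```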
This week we’re talking with Ben Johnson. Ben is known for his work on BoltDB, his work in open source, and as a freelance Go developer. In late January, when Ben open sourced his newest project, Litestream, he shared in the README that the project was open source but not open for contribution. His reason was to protect his mental health and the long-term viability of the project. On this episode we talk with Ben about what that means, his thoughts on mental health and burnout in open source, choosing a license, and the details behind Litestream, a standalone streaming replication tool for SQLite.
This week Adam talks with Spencer Kimball, CEO and co-founder of Cockroach Labs — makers of CockroachDB, an open source, cloud-native distributed SQL database. Cockroach Labs recently raised $160 million on a $2 billion valuation. In this episode, Spencer shares his journey in open source, startups and entrepreneurship, and what they’re doing to build CockroachCloud to meet the needs of applications that require massive scale and ultra-resilience.
ClickHouse has come out of seemingly nowhere to rival Elasticsearch as the database-related open source software project with the most active contributors…
ClickHouse is column-oriented and allows analytics reports to be generated in real time using SQL queries. ClickHouse’s rise in popularity began in 2016, which happens to coincide with Apache Spark’s peak.
I first heard of ClickHouse last year when I learned that our friends at Plausible use it for their analytics backend (teamed with Postgres for relational data).
Imagine a world where Git and MySQL got together and had a baby. They would name that baby Dolt.
Dolt is a SQL database that you can fork, clone, branch, merge, push and pull just like a git repository. Connect to Dolt just like any MySQL database to run queries or update the data using SQL commands. Use the command line interface to import CSV files, commit your changes, push them to a remote, or merge your teammate’s changes.
All the commands you know for Git work exactly the same for Dolt. Git versions files, Dolt versions tables.
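Since Dolt speaks MySQL's protocol, connecting from Go should look just like connecting to any MySQL server. A rough sketch with the standard go-sql-driver/mysql (the DSN, port, and database name are placeholders for your own dolt sql-server setup):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // ordinary MySQL driver
)

func main() {
	// Placeholder DSN: adjust the user, host, port, and database name
	// for however you started your Dolt SQL server.
	db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:3306)/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("SELECT VERSION()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected, server version:", version)
}
```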
The authors also created DoltHub where you can host and share your Dolt databases.
This week we’re talking about the recent falling out between Elastic and AWS around the relicensing of Elasticsearch and Kibana. Like many in the community, we have been watching this very closely.
Here’s the tldr for context. On January 21st, Elastic posted a blog post sharing their concerns with Amazon/AWS misleading and confusing the community, saying “They have been doing things that we think are just NOT OK since 2015 and it has only gotten worse.” This led them to relicense Elasticsearch and Kibana under a dual license: a proprietary license and the Server Side Public License (SSPL). AWS responded two days later stating that they are “stepping up for a truly open source Elasticsearch,” and shared their plans to create and maintain forks of Elasticsearch and Kibana based on the latest ALv2-licensed codebases.
There’s a ton of detail and nuance beneath the surface, so we invited a handful of folks on the show to share their perspective. On today’s show you’ll hear from: Adam Jacob (co-founder and board member of Chef), Heather Meeker (open-source lawyer and the author of the SSPL license), Manish Jain (founder and CTO at Dgraph Labs), Paul Dix (co-founder and CTO at InfluxDB), VM (Vicky) Brasseur (open source & free software business strategist), and Markus Stenqvist (everyday web dev from Sweden).
With walrus, you can back up services like MySQL, PostgreSQL, Redis, and etcd, or a complete directory, on a short interval and with low overhead. It supports AWS S3, DigitalOcean Spaces, and any S3-compatible object storage service.
One of the main problems a database storage engine has to solve is how to deal with data on disk that is bigger than the available memory. One way databases solve this is through memory-mapped files and a system call named mmap. Discover what mmap is and how you can make use of it in Go.
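If you want a taste before diving into the article, here's a minimal read-only sketch using Go's Unix-only syscall.Mmap (the file name is a placeholder):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"syscall"
)

func main() {
	f, err := os.Open("data.bin") // any existing file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}

	// Map the whole file into memory read-only. The returned byte slice
	// is backed by the page cache, so no explicit Read calls are needed.
	data, err := syscall.Mmap(int(f.Fd()), 0, int(fi.Size()),
		syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		log.Fatal(err)
	}
	defer syscall.Munmap(data)

	fmt.Printf("mapped %d bytes\n", len(data))
	if len(data) > 0 {
		fmt.Printf("first byte: %#x\n", data[0])
	}
}
```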
So the Tailscale team were using a single text file as a database (as you do) and it worked great… until it didn’t.
Even with fast NVMe drives and splitting the database into two halves (important data vs. ephemeral data that we could lose on a tmpfs), things got slower and slower. We knew the day would come. The file reached a peak size of 150MB and we were writing it as quickly as the disk I/O would let us. Ain’t that just peachy?
So, migrate to MySQL or PostgreSQL, right? Maybe SQLite?
Nope, Crawshaw had other ideas.
I won’t ruin the surprise and tell you what they went with, but I will say it’s a widely deployed system amongst cloud natives…
From reading through the README, this seems like a nice balance between a full-blown ORM and hand-rolling all your own SQL. For example, this point from the “The mapper function is the SELECT clause” section:
In sq, whatever you SELECT is automatically mapped. This means you just have to write your query, execute it, and if there were no errors, the data is already in your Go variables. No iterating rows, no specifying column scan order, no error checking three times. Write your query, run it, you’re done.
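For contrast, here's the kind of database/sql boilerplate a mapper like that saves you from writing (plain standard library code, not sq's actual API):

```go
package store

import "database/sql"

// User mirrors the columns selected below.
type User struct {
	ID   int64
	Name string
}

// loadUsers shows the hand-rolled path: iterate rows, keep the Scan
// order in sync with the SELECT list, and check errors at each step.
func loadUsers(db *sql.DB) ([]User, error) {
	rows, err := db.Query("SELECT id, name FROM users")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var users []User
	for rows.Next() {
		var u User
		// Scan order must match the column order in the SELECT list.
		if err := rows.Scan(&u.ID, &u.Name); err != nil {
			return nil, err
		}
		users = append(users, u)
	}
	return users, rows.Err()
}
```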
Graviton Database is a simple, fast, versioned, authenticated, embeddable key-value store written in pure Go… Every write is tracked, versioned, and authenticated with cryptographic proofs. Additionally, it is possible to take snapshots of the database. It is also possible to use simple copy/rsync commands for database backups, even during live updates, without any possibility of database corruption.
Still in Alpha, but a lot of work has been done and there are features a-plenty.
You can use this set of guidelines, fork them or make your own - the key here is that you pick a style and stick to it.
This is a great guide, but the 🔑 is definitely in the highlight above. Consistency is paramount to readability.
Choosing a database is hard. They each have their pros and cons, and without much experience it is hard to determine which is the best fit for your project. In this episode Johan Brandhorst joins us to talk about Postgres. When is it a good fit? How well does it scale? What libraries exist in Go for using Postgres?
A lot of effort is put into the training of AI models, but, for those of us that actually want to run AI models in production, performance and scaling quickly become blockers. Nikita from MemSQL joins us to talk about how people are integrating ML/AI inference at scale into existing SQL-based workflows. He also touches on how model features and raw files can be managed and integrated with distributed databases.
With immudb you can track changes in sensitive data in your transactional databases and then record those changes permanently in a tamperproof immudb database. This allows you to keep an indelible history of sensitive data, for example debit/credit card transactions.
There are so many options for storing data these days. If you haven’t heard Go Time’s excellent episode on databases yet, Jaana does a great job of explaining some of the trade-offs.
Databases are tricky, especially at scale. In this episode Mat, Jaana, and Jon discuss different types of databases, the pros and cons of each, along with the many ways developers can have issues with databases. They also explore questions like, “Why are serial IDs problematic?” and “What alternatives are there if we aren’t using serial IDs?” while they’re at it.
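One commonly cited alternative is a client-generated random identifier such as a UUID; here's a tiny sketch using github.com/google/uuid (not necessarily the hosts' recommendation):

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

// NewOrderID returns a random (v4) UUID to use as a primary key,
// sidestepping the guessability and write-coordination concerns that
// come with sequential serial IDs.
func NewOrderID() string {
	return uuid.NewString()
}

func main() {
	fmt.Println("new order id:", NewOrderID())
}
```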
Jaana Dogan started with a draft and this tweet and ended up laying down some serious knowledge on databases.
A large majority of computer systems have some state and are likely to depend on a storage system. My knowledge on databases accumulated over time, but along the way our design mistakes caused data loss and outages. In data-heavy systems, databases are at the core of system design goals and tradeoffs. Even though it is impossible to ignore how databases work, the problems that application developers foresee and experience will often be just the tip of the iceberg.
This is a nice lessons learned post from one engineering team making a database switch.
Overall, I’m happy with how the effort turned out and with CockroachDB in general. Because it uses PostgreSQL’s wire protocol, existing PostgreSQL drivers should work as-is. But we did run into some challenges that are worth pointing out. Here’s a list of things you might want to consider…
I like the update at the end, which emphasizes the importance of tests for making a switch of this magnitude:
The system that was migrated has solid tests and good coverage. While a lot of the differences we ran into are obvious (like lack of range types and triggers), others were more subtle (especially the odd on conflict behavior). Test coverage made a pretty significant impact in the speed of the migration and our confidence in pushing live.
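That wire-protocol compatibility is quite literal: a Go service on a standard Postgres driver mostly just needs a new connection string. A hedged sketch with jackc/pgx (credentials are placeholders; 26257 is CockroachDB's default SQL port):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()

	// Same driver you'd point at Postgres; only the connection string
	// changes (placeholder credentials, CockroachDB's default port 26257).
	conn, err := pgx.Connect(ctx,
		"postgres://myuser:mypass@localhost:26257/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	var version string
	if err := conn.QueryRow(ctx, "SELECT version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println(version)
}
```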
SQLx is a modern SQL client built from the ground up for Rust, in Rust.
Truly Asynchronous. Built from the ground up using async-std, with async streams for maximum concurrency.
Type-safe SQL (if you want it) without DSLs. Use the query!() macro to check your SQL and bind parameters at compile time. (You can still use dynamic SQL queries if you like.)
Pure Rust. The Postgres and MySQL/MariaDB drivers are written in pure Rust using zero unsafe code.
This is like JS Fiddle, but for SQL. Pick from MySQL, Postgres, or SQLite. Then load up some queries, run them, and share with others. Super cool 👍
Our approach comes from low-latency trading; QuestDB’s stack is engineered from scratch: zero-GC Java and dependency-free.
QuestDB ingests data via HTTP, the PostgreSQL wire protocol, the Influx line protocol, or directly from Java. Reading data is done using SQL via HTTP, the PostgreSQL wire protocol, or the Java API. The whole database and console fit in a 3.5MB package.
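Because one of those paths is the PostgreSQL wire protocol, a plain Go Postgres driver can talk to QuestDB. A rough sketch with lib/pq (port 8812 and the admin/quest/qdb defaults are my assumptions; check your install):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // standard Postgres driver
)

func main() {
	// Placeholder DSN: QuestDB listens for PG-wire clients on 8812 by
	// default (assumed); adjust the user, password, and dbname for your setup.
	db, err := sql.Open("postgres",
		"host=localhost port=8812 user=admin password=quest dbname=qdb sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var now string
	if err := db.QueryRow("SELECT now()").Scan(&now); err != nil {
		log.Fatal(err)
	}
	fmt.Println("server time:", now)
}
```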
According to the great knowledge base in the sky, NewSQL is, “a class of relational database management systems that seek to provide the scalability of NoSQL systems for online transaction processing workloads while maintaining the ACID guarantees of a traditional database system.”
In an effort to make my team write better SQL, I went over reports written by non-developers and code reviews, and gathered common mistakes and missed optimization opportunities in SQL.
Dividing integers, accidentally counting nullable columns, column position in GROUP BY and ORDER BY, and 9 other common gotchas. Don’t get got!
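Two of those gotchas in miniature, written as Go query constants (table and column names are made up):

```go
package queries

// Gotcha 1: integer division. If both operands are integers, most
// databases truncate the result, so 3/4 becomes 0 unless you cast.
const winRatioQuery = `
SELECT CAST(wins AS float) / games AS win_ratio
FROM   teams;
`

// Gotcha 2: COUNT(column) skips NULLs, so counting a nullable column
// silently undercounts rows; COUNT(*) counts every row.
const emailCountQuery = `
SELECT COUNT(*)     AS all_rows,
       COUNT(email) AS rows_with_email
FROM   users;
`
```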
Mat, Johnny, and Jaana are joined by Francesc Campoy to talk about Graph databases. We ask all the important questions — What are graph databases (and why do we need them)? What advantages do they have over relational databases? Are graph databases better at answering questions you didn’t anticipate? How is data structured? How do queries work? What problems are they good at solving? What problems are they not suitable for? And…since we had Francesc on the hot seat, we asked him about Just for Func and when it’s coming back.
One question I ask a lot of folks I interview is what $PROJECT_X looks like three to five (sometimes 10) years from now. Very few people answer that question without some hemming and hawing.
Enter Andy Pavlo, Associate Professor of Databaseology at Carnegie Mellon, throwing his hat in the ring on the future of databases 50 years (!) from now:
The role of humans as database administrators will cease to exist. These future systems will be too complex for a human to reason about. DBMSs will finally be completely autonomous and self-healing. Again, the tighter coupling between programming frameworks and DBMSs will allow the system to make better decisions on how to organize data, provision resources, and optimize execution than human-generated planning.
That is just one of roughly eight things Andy predicts. Fun to think about, if nothing else.