My first database was a C64 datasette. In the 80s, you wrote the program in memory and then saved it to a magnetic tape. When you needed it, you loaded it back from the datasette (no auto-save available). The data model was the C64 BASIC variable types.
Fast-forward to the 90s: my next database was a Microsoft Access file. I was working with Visual Basic, building desktop applications. You could create tables, fields, and relationships, and run join queries across tables. It was pretty powerful.
Around the 2000s, I jumped on the web-app bandwagon and switched to MySQL. The quantum leap was that you could access the database from anywhere at practically no cost. That’s when most of my generation learned the SQL language in earnest.
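To make the "join queries across tables" point concrete, here is a minimal sketch of the kind of query that made relational databases so useful. It uses Python's built-in sqlite3 instead of MySQL so it runs anywhere; the `users`/`orders` tables and their columns are hypothetical examples, not from any real schema.

```python
import sqlite3

# In-memory database: two normalized tables with a foreign-key relationship.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")
cur.execute("INSERT INTO users VALUES (1, 'Ada'), (2, 'Linus')")
cur.execute("INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 10.0), (12, 2, 5.0)")

# One join query answers a question that spans both tables.
cur.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
    ORDER BY u.name
""")
print(cur.fetchall())  # [('Ada', 35.0), ('Linus', 5.0)]
```

The join plus aggregation in a single declarative statement, with the database figuring out how to execute it, is exactly what the next decade's NoSQL systems would give up.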
Circa the 2010s, big data hit the mainstream. Suddenly you could run analytics on multi-gigabyte databases. Relational databases struggled at that scale, so NoSQL databases emerged. Since I was working on user-event tracking, I used Cassandra (optimized for fast writes) and Elasticsearch to run analytics on aggregated data. Relationships and join queries were gone, traded away for scalability.
The main restriction with NoSQL is that you need to model the tables/documents around specific use cases, which puts the Product Manager’s creativity and your developers’ database skills to the test. Now, in the 2020s, we have options like Rockset that bridge the gap between relational and NoSQL.
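Here is a rough sketch of what "model the documents around use cases" means in practice: instead of normalized rows joined at query time, you store one pre-shaped, denormalized document per read pattern. The `orders_by_user` shape below is a hypothetical example, not any particular store's schema.

```python
# Relational mindset: normalized rows, joined on demand at query time.
users = {1: {"name": "Ada"}}
orders = [{"id": 10, "user_id": 1, "total": 25.0},
          {"id": 11, "user_id": 1, "total": 10.0}]

# NoSQL mindset: one document pre-shaped for the "show a user's orders"
# use case, with the user's name duplicated into it and the aggregate
# computed at write time rather than at read time.
orders_by_user = {
    1: {
        "name": "Ada",
        "orders": [{"id": 10, "total": 25.0}, {"id": 11, "total": 10.0}],
        "lifetime_total": 35.0,  # pre-aggregated on write
    }
}

# The read is now a single key lookup instead of a join.
doc = orders_by_user[1]
print(doc["name"], doc["lifetime_total"])  # Ada 35.0
```

The cost is that every new use case may need a new document shape, which is precisely why this style tests the Product Manager's foresight and the developers' modeling skills.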
What’s around the corner? What will databases look like in the 2030s? I think we are going back to the 80s. Databases will become less of a concern, and data modeling will be pushed back into the programming language’s data types. New cross-database, cloud-based ORMs will emerge. Startups will return to a DBA-less paradise, pushing that complexity onto third-party providers.