SQL, NoSQL, In-Memory - To Reinvent the Wheel Is Not Always the Right Choice
Over 50 years ago, a smart man named Edgar Frank "Ted" Codd came up with the relational database model. What one man invents, others can study and, even without fully understanding it, use very well. The interesting thing about relational databases is that a dataset of one entity is an instance which can be related to an instance of another entity. This matters for two reasons: if one simply referenced data directly, the benefit of having an instance already declared as a certain type would be lost in further computation, and an index over instances can be regarded as an entity subtype, which makes it possible to build ordered sets within entities, provided the indexing strategy is implemented correctly, for example with prefix indexes or unique data identifiers. Indices make it possible to run through data quickly, if you understand the instructions of a CPU and the order of storage. In software you often want to define a type and reference its instances; this has already been built, and used that way, SQL becomes extremely powerful with instance indices. A well-designed SQL database can answer queries incredibly fast while also keeping data logically ordered and distinct on the hard drive, which has its own physical nature. The database design and the resulting order on the hard drive are the major factors for fast database requests; procedures can then easily respond in milliseconds.
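As a minimal sketch of the idea, using Python's built-in sqlite3 module (table and column names here are illustrative, not from any real schema): one entity's rows reference instances of another entity via a key, and an index over the referencing column gives the ordered set that makes lookups and joins fast.

```python
import sqlite3

# In-memory SQLite database; schema is a hypothetical example.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Two entities: each row of 'orders' is an instance that references
# an instance of 'customer' through a typed foreign key.
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    total_cents INTEGER NOT NULL)""")

# An index on the referencing column builds an ordered set within the
# entity, so lookups by customer avoid a full table scan.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

conn.execute("INSERT INTO customer (id, name) VALUES (1, 'alice'), (2, 'bob')")
conn.executemany("INSERT INTO orders (customer_id, total_cents) VALUES (?, ?)",
                 [(1, 999), (1, 2500), (2, 500)])

# The query planner can resolve this join through the index.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id), SUM(o.total_cents)
    FROM customer c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id ORDER BY c.id
""").fetchall()
print(rows)  # → [('alice', 2, 3499), ('bob', 1, 500)]
```

The same pattern scales: as long as queries filter or join on indexed columns, the engine walks the index instead of scanning the table.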
Modern approaches use key-value stores and in-memory databases (as opposed to an application cache). These can be faster, but not necessarily. A key-value pair referenced in RAM is of course the fastest way to request a dataset, and with cheap machines carrying 128 GB of RAM these days, it is tempting to simply store huge amounts of metadata in memory. The danger is that if the underlying system is not smart enough, it will become slow, for example due to thrashing, redundant allocations, or dead references. The most interesting and most common use of in-memory databases is as a cache for complex requests in dynamic websites. Documents that can be precompiled can simply be delivered through a file handle, and with SSD storage there is not much of a performance gain to be had from caching them in RAM.
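The caching use case can be sketched as a cache-aside pattern. In this sketch a plain Python dict stands in for the in-memory store (Redis or similar), and the function and key names are made up for illustration: the expensive request runs once, and repeat requests are served from RAM.

```python
# Cache-aside sketch: a dict stands in for an in-memory key-value store.
cache = {}
call_count = {"n": 0}

def expensive_query(page_id):
    """Stand-in for a complex database request behind a dynamic page."""
    call_count["n"] += 1
    return f"<html>page {page_id}</html>"

def get_page(page_id):
    key = f"page:{page_id}"          # illustrative key naming scheme
    if key in cache:                 # cache hit: served straight from RAM
        return cache[key]
    result = expensive_query(page_id)  # cache miss: run the complex request
    cache[key] = result
    return result

first = get_page(42)
second = get_page(42)                # answered from the cache
print(call_count["n"])               # → 1: the expensive query ran only once
```

In a real deployment the dict would be replaced by a shared store with an eviction policy and an expiry time, since an unbounded cache is exactly the kind of "not smart enough" system that ends up thrashing.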
So generally, database design can be an easy task, but it can also be very sophisticated. This is especially true for DAM, ERP, and ECMS systems, and likewise for big data and structured data in a distributed system.