>>17790 (OP)
It's just how their field evolved, same as with math notation and other historical things.
Anyway, about your latch: you may remember the "combinatorial" and "sequential" logic disciplines.
When you're thinking about it from a combinatorial point of view, you pretend this is just simple, timeless logical math. You take expressions like (A&~B)|(~A&B), build a truth table, and use all the usual logic theorems and methods (Karnaugh maps, Venn diagrams, or whatever else you prefer) to find equivalent expressions.
So you're basically abstracting away everything that happens in the actual, physical circuit and only caring about its logical value, in the mathematical sense; you can use all kinds of transformation formulas and so on.
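To make that "timeless math" view concrete, here's a quick brute-force truth-table check in Python. It uses the XOR form of the expression, (A&~B)|(~A&B), and the function name is just made up for illustration; equivalence of two expressions is simply their truth tables matching:

```python
from itertools import product

# (A & ~B) | (~A & B) written out with Python's boolean operators
def xor_expr(a, b):
    return (a and not b) or (not a and b)

# print the full truth table
for a, b in product([False, True], repeat=2):
    print(int(a), int(b), int(xor_expr(a, b)))

# two expressions are equivalent iff their truth tables agree everywhere
assert all(xor_expr(a, b) == (a ^ b)
           for a, b in product([False, True], repeat=2))
```

This is exactly the "check every row" method a truth table or Karnaugh map automates by hand; for 2 inputs there are only 4 rows to check.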
However, if we take a step back, we may realize that we're skipping many details about the practical implementation. One detail is still quite relevant even if we abstract away the rest of the analog matters: delay.
Let's take your NOR gate up there, or even the wire. In the case of the NOR gate, physically it could be made of transistors and wires (for example). Those elements do not operate on 1s and 0s: they have voltages applied to some terminals, and those voltages cause changes at other terminals. Since this is a physical process, it takes some time for the "output" to change; it doesn't go from 1 to 0 or from 0 to 1 instantly, it goes through intermediate rises and falls. And what if your input wasn't even a full 1 or 0, but, say, 0.5 or 0.7 of that voltage? The behavior is then not so well determined, and sometimes it won't even be stable! Even in the case of a simple wire, EM waves propagate at the speed of light, but for a long enough wire you will start seeing all kinds of non-trivial (transmission line) behavior.
Note that a logic gate is usually not just your 2 inputs and one output; it will often also involve 2 power supply terminals, a positive (or negative) voltage supply and a drain/ground. In a typical gate implemented with, say, MOSFETs (as used in a typical IC), the input voltages are applied to the transistors' gates. The gate physically consists of a conductor and an oxide (insulator), and in between the source and drain there is a channel (the source and drain themselves are doped). The point here is that there's no(t much) current passing between the gate and the source/drain: the MOSFET gate mostly acts to create an electric field, which pulls or pushes charges in the channel so that current can pass more easily or not pass at all. Basically it's as if you had a resistor whose value you change through the gate voltage (a very simplified description; the actual channel is a doped semiconductor, so the behavior is not the same as a metal wire's). In this simplified picture, turning the gate off is like putting a huge gigaohm resistor there and preventing the circuit from completing, so source and drain are not connected (much); similarly, turning it on is like connecting source and drain. In practice those transistors are not ideal, they don't conduct or block perfectly in the on and off states, and in modern practice people use design methodologies like CMOS, where even a simple inverter (~ = NOT) has 2 transistors (NMOS and PMOS): when the output is meant to be 0 it is taken from GND, and when it's 1 it is taken from the supply, with half of the circuit being "off" in typical operation. But you know, there will be an intermediate state, when one transistor opens and the other closes, and in that moment both will be conducting!
In fact that's where most of the power dissipation comes from in real chips: the state changes, and for at least a moment the circuit conducts. In typical synchronous circuit designs, state is made to change with the clock. In typical CMOS designs, if the inputs didn't change, you would typically see no power dissipation, as long as the output itself is also connected as input to some other MOSFET gate (it would hit the oxide, so no current would "pass" through; only the electric field would change).
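The CMOS inverter described above can be sketched as two idealized complementary switches (this is my own toy model, ignoring the analog details the post mentions; real transistors are not perfect switches):

```python
# Toy CMOS inverter: PMOS and NMOS modeled as ideal switches.
# PMOS conducts when its gate sees 0 (connects output to VDD, logic 1);
# NMOS conducts when its gate sees 1 (connects output to GND, logic 0).
def cmos_inverter(vin: int) -> int:
    pmos_on = (vin == 0)   # pull-up path to the supply
    nmos_on = (vin == 1)   # pull-down path to ground
    # in this idealized model exactly one half conducts at a time;
    # in reality there's a moment during switching when both do
    assert pmos_on != nmos_on
    return 1 if pmos_on else 0

print(cmos_inverter(0), cmos_inverter(1))  # 1 0
```

Note how the output value comes from the supply rails (the pull-up or pull-down path), not from the input itself, which matters again later for the latch.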
Okay, enough with the analog electronics. The important thing you may have noticed is that your gate will have an unstable output for some time: between you applying your input voltages and the output becoming correct/stable, some time must pass. Also, if the inputs themselves are not quite proper 0s or 1s, again, you may see weird behavior at the output. These are ultimately analog circuits!
So in the sequential logic discipline we still ignore most of the analog stuff, but we don't forget that there's a delay until the output is stable.
Let's consider your simple SR latch (R = reset, S = set). Say S is 1 and R is 0. The expected behavior is that the output Q must become 1 and ~Q must become 0 (as far as the definition of the SR latch is concerned).
We can look at the lower NOR: ~(1 OR X) = ~1 = 0, so we know ~Q becomes 0 from this, and it is fed back (again with a delay) to the input of the other NOR, the one taking R.
Now consider the upper NOR: ~(0 OR X) = ~X, so its output is whatever ~Q was, negated; it basically becomes Q. Since ~Q was 0 in the earlier step, Q is 1.
If you consider the opposite case with S = 0, R = 1, you'll get the same behavior but inverted (the circuit is symmetric), so now Q becomes 0 and ~Q becomes 1.
Let's now pretend that we keep both S = 0 and R = 0 after this initial setup (note that if you never did an initial set, the value at Q or ~Q could be random, depending on whatever was floating on the circuit at the time):
the upper NOR and lower NOR both compute ~(0 OR X) = ~X, where X is whatever is on the opposite wire. In the case where Q became 1 from being set, that value is fed to the lower NOR, whose output becomes 0, which is ~Q; ~Q = 0 is in turn fed to the upper NOR, where it becomes 1. Basically Q = 1, ~Q = 0, and it stays that way even with both inputs at 0. If you had set the circuit to something else, it would remain as that!
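The feedback behavior walked through above can be sketched as a unit-delay simulation in Python (my own simplification: each step models one gate delay, and both NORs evaluate their old inputs simultaneously):

```python
# Cross-coupled NOR SR latch: Q = NOR(R, ~Q), ~Q = NOR(S, Q).
def nor(a, b):
    return int(not (a or b))

def step(s, r, q, nq):
    # one gate delay: both gates read the previous outputs
    return nor(r, nq), nor(s, q)

def settle(s, r, q, nq, max_steps=10):
    for _ in range(max_steps):
        nxt = step(s, r, q, nq)
        if nxt == (q, nq):   # reached a stable state
            return nxt
        q, nq = nxt
    return q, nq             # may never settle (see the S=R=1 case)

# Set: S=1, R=0 drives (Q, ~Q) to (1, 0)...
q, nq = settle(1, 0, 0, 0)
# ...then Hold: S=0, R=0 keeps whatever was stored.
q, nq = settle(0, 0, q, nq)
print(q, nq)  # 1 0
```

Amusingly, if you start this model from the unset Q = ~Q = 0 state with S = R = 0, it just oscillates between (1,1) and (0,0) forever, which is the toy-model version of the instability discussed for the forbidden input combination.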
Note that the case S = 1, R = 1 is not "allowed" (it would "try" to set Q = ~Q = 0, which doesn't match the definition of the latch). It also won't be stable if you later change both to 0, 0 to hold the value: in practical reality the result will depend on which signal came first, and there will always be some minute time difference, or behavior specific to the actual hardware this is implemented in, and it may settle to some output like 0 or 1.
Remember though that the actual values of 1 and 0 come from the power supply, they are not "passed" from the inputs, so the latch *will* often have some value at the end (some voltage) even if it was never set. It will usually settle to a value (although metastable states, where it's something intermediate, are sometimes possible).
If you want better-behaved inputs, there are other latches that take more area/parts: for example the gated SR latch, which adds an Enable signal that can shut off the inputs (it just ANDs both with Enable), gated D latches, JK latches and others (the JK latch, for example, flips Q if you set both inputs to 1).
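A gated D latch is a nice one to sketch, because driving S and R from D and ~D (gated by Enable) makes the forbidden S = R = 1 case impossible by construction. A minimal behavioral model (my own naming, and the delay is abstracted away here):

```python
# Gated D latch built conceptually on top of an SR latch:
# S = Enable AND D, R = Enable AND ~D, so S and R can never both be 1.
def gated_d_latch(d: int, enable: int, q: int) -> int:
    s = int(enable and d)
    r = int(enable and not d)
    if s:
        return 1   # set
    if r:
        return 0   # reset
    return q       # enable low: hold the previous value

q = 0
q = gated_d_latch(1, 1, q)  # enable high: latch follows D -> 1
q = gated_d_latch(0, 0, q)  # enable low: D is ignored, value held
print(q)  # 1
```

With Enable low the latch is "opaque" and holds; with Enable high it is "transparent" and Q follows D.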
In practice these memory elements are used together with a clock (something that constantly oscillates between 0 and 1 at a given frequency). The idea is that you measure the delay of the entire circuit, combinatorial and sequential parts, and set the clock period to at least that delay, so that everything is stable by the time the memory elements (registers) get updated each clock cycle. (The combinatorial elements also have delay; they won't have the correct value until the delay of the slowest path has elapsed.) You may also have various state machines to execute conditional logic depending on the state you're in and so on, thus breaking down more complex computations into "simple" operations.
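The "clock period must cover the slowest path" idea is just a max over path delays plus the register overheads. With completely made-up numbers (and the register clk-to-Q and setup terms, which the paragraph above glosses over):

```python
# Hypothetical timing numbers, all in nanoseconds.
clk_to_q = 0.2                 # register output becomes valid after the edge
setup    = 0.1                 # inputs must be stable this long before the edge
paths_ns = [1.4, 2.3, 1.9]     # delays of the combinatorial paths

# Minimum clock period: slowest (critical) path plus register overheads.
t_min = clk_to_q + max(paths_ns) + setup
f_max = 1.0 / t_min            # in GHz, since the delays are in ns

print(round(t_min, 2), "ns,", round(f_max, 3), "GHz")
```

Clock the circuit faster than this and the registers capture values before the slowest path has finished settling, which is exactly the glitching attack mentioned below.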
Also, in real designs there's often a very common signal called RST (reset) that initializes all registers to known values, so your registers and circuit can be started in a predictable state and move forward in time predictably as well.
It can be entertaining to consider the people who hack consoles and other hardware devices with simple means like glitching: think about what happens when your clock changes much faster than the output can propagate! (Similarly for supply signals.) The circuit only works within very strict timing constraints; if you violate them, you will get all kinds of undefined or unexpected outputs. And if that happens to be during some computation that someone wanted to disturb (for example a security check or cryptographic operation), well, you can guess what would happen!
Also, in typical digital circuit designs, designers can usually work with mostly the logical behavior, delay, and a few things like high impedance and some signal-strength abstractions (weak/strong and similar rules of thumb), and this is enough to design very complex complete digital chips (like CPUs or GPUs), usually done at the RTL (register-transfer level) in HDLs like Verilog or VHDL, or some custom HDL.
Meanwhile the people who design the standard cells (such as your gates, SRAM, flip-flops and more) work at the layout level: they start with simple transistor schematics and move to the actual physical layout of the cell (each layer of that part of the chip, the actual masks), and they can simulate these based on characteristics provided/measured by the foundry for their process. The actual process engineers who design the fab's process are the ones who tweak how these layers work (by changing how much to dope, how much to bake, what to deposit and etch and in what order, and so on) so that particular devices like transistors and wires can be constructed reliably (a transistor doesn't exist as anything but an abstraction, though: it's what you get when certain layers intersect and are marked in the right way). Those designing the standard cells may target low power, high density, high speed or other design goals, usually trading off one for another; the cells used in a mobile phone will be different from those used in a GPU, with the former usually being thinner, operating at lower voltages and usually dissipating less power.
I feel like I've gone off on too many tangents but I hope it was of some use to you OP.