The Geekbench score doesn't really show much regression regardless of the OS, because most of the areas hit by the patches aren't tested by Geekbench, and where some are, the penalties don't move the total score in a meaningful way. For instance, if you run 200 different performance benchmarks and one of them drops by 99%, the total score will still be within about 1% of its prior value. However, the part of the processor that now does so poorly is the bottleneck for many system activities. The remaining parts of the processor aren't specialized enough to cover efficiently for the aspect that took the major hit; say they manage it at 30% of the efficiency. Because those parts still perform as well as they did, the score doesn't change, but in many real-life scenarios they have to spend roughly three times as long covering for the part that was slowed down (which is also what other, more specialized benchmarks, or even general system tests, pick up).
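To make the averaging point concrete, here's a toy sketch (not Geekbench's actual scoring formula, which is more involved) showing how a simple mean of 200 hypothetical sub-scores barely moves when one of them collapses by 99%:

```python
# Toy illustration: an aggregate score hides one collapsed sub-score.
# These are made-up numbers, not real benchmark data.

subscores = [1000.0] * 200          # 200 hypothetical sub-benchmarks, all scoring 1000
baseline = sum(subscores) / len(subscores)

subscores[0] *= 0.01                # one sub-benchmark drops by 99%
patched = sum(subscores) / len(subscores)

print(f"baseline aggregate: {baseline:.1f}")   # 1000.0
print(f"patched aggregate:  {patched:.1f}")    # 995.05
print(f"total regression:   {100 * (1 - patched / baseline):.2f}%")  # ~0.5%
```

The aggregate drops by roughly half a percent, even though one workload got 100 times slower, which is exactly the situation described above.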
The things that suffered the most are OS independent, so apart from minor differences, both Windows and Mac will see a fairly similar hit to performance. Regardless of the OS, the processor gets instructed to obtain data from the hard drive/SSD, and the work the CPU does between receiving that request and handing the data back to the OS is one of the things that took a major hit. It's the way the CPU handles its tasks internally, and it happens completely outside the OS's jurisdiction: the OS has no visibility into what the CPU is doing internally, it can do nothing to help or interfere, and that's the layer slowed down by the patches (microcode, which was applied to the processor, not to the OS). The OS only knows that the processor took longer to return the requested data.

Furthermore, processors have something called "speculative execution", which basically guesses what it will be asked to do next based on what it has just done and prepares some of that work ahead of time to gain a performance advantage. Intel's implementation was very aggressive and could be fooled: an attacker could feed the processor crafted input to make it guess in a certain, malicious way. Since the guessing mechanism is embedded in the hardware, Intel's only "fix" was to disable that behaviour, making the processor do more of the grunt work in real time instead. Fixing it for real requires a hardware redesign: a future-generation chip could have a more secure speculative execution engine, as AMD's is, that wouldn't have to be disabled because it couldn't be fooled so easily, bringing back the performance advantage of speculation. You can't get that performance back through software, let alone through the OS, which can only be optimized to send fewer requests to the CPU and has little control over how the CPU performs its tasks.
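The "CPU does work between the request and the data coming back" part shows up every time a program crosses the user/kernel boundary. A very rough way to see that per-request cost is to time a tight loop of tiny reads; this is only a sketch, assuming a Unix-like system, with an arbitrary loop count and no claim about which mitigations are active on your machine:

```python
import os
import time

# Crude micro-benchmark: each os.read() is a syscall, i.e. one round trip
# across the user/kernel boundary, which is where the patch overhead lands.
# Absolute numbers depend entirely on the machine, OS, and active mitigations.

N = 200_000
fd = os.open("/dev/zero", os.O_RDONLY)   # assumes a Unix-like system

start = time.perf_counter()
for _ in range(N):
    os.read(fd, 1)                        # user -> kernel -> user, every iteration
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{N} one-byte reads took {elapsed:.3f}s "
      f"({elapsed / N * 1e6:.2f} µs per syscall round trip)")
```

Running something like this on patched and unpatched systems is the kind of comparison where the regression is visible, even though an aggregate benchmark score barely moves.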
In terms of games, they barely took any hit, because the CPU's work in games is very simple and consists mostly of feeding frames to the GPU to process, which has almost nothing to do with Meltdown or Spectre. Furthermore, on laptops with low-power GPUs, the frame rate is limited purely by the GPU, which didn't take any performance hit. Even a slowed-down CPU can still prepare frames faster than the relatively weak GPU can render them: say a 7700 is capable of throwing 140 frames per second at the GPU. If a patch slowed the CPU down so it can only pump out 120 frames per second, but the GPU still can't render more than 30-40, you won't notice any difference. That, plus the fact that games don't do much in terms of speculative execution or heavy real-time data manipulation (so the CPU hit in this regard was tiny), means you likely wouldn't notice any change, except that loading screens could last longer.
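The frame-rate argument is just a min() over the two stages of the pipeline. Here's a tiny sketch using the hypothetical numbers from the paragraph above (they are illustrative, not measurements):

```python
# Toy model: the displayed frame rate is capped by whichever stage is slower.
# Numbers are the hypothetical ones from the text, not benchmarks.

def effective_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Frames actually displayed per second: the slower stage is the bottleneck."""
    return min(cpu_fps, gpu_fps)

gpu_fps = 40            # low-power laptop GPU renders ~30-40 fps
cpu_before = 140        # frames the unpatched CPU can prepare per second
cpu_after = 120         # frames the patched (slower) CPU can prepare per second

print(effective_fps(cpu_before, gpu_fps))  # 40 -> GPU-bound before the patch
print(effective_fps(cpu_after, gpu_fps))   # 40 -> still GPU-bound, no visible change
```

The CPU losing 20 fps of headroom changes nothing on screen because the GPU was the bottleneck both before and after.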