Resources Are Not All Created Equally

[00:00:00] Mark Tomlinson: Number three is that all resources across all of these various computers are not created equally, and this is where our job in exploring performance is to find that imbalance between Computer A, which has plenty of power, and Computer B, which doesn't have enough to keep up. So if we try to run them at the same time, guess what happens? One of them falls behind. So this third concept, that resources are not created equally, is easily demonstrated in a simple image like this. I'll take network, CPU, and disk. [00:00:35][35.5]

[00:00:36] We have I/O, where data can move; we have CPU, where data can be manipulated; and we have disk, where data can be stored. And if you try to find the bottleneck in here, what's interesting, the first thing that should strike you is pretty clear; most people say, look at point number two, the CPU is my bottleneck right here. Right. Pull up my pointer. If we look at point number two, that's the least amount of movement. We cannot handle that much. The CPU is under-resourced here. It's not created equally. Right. [00:01:10][34.8]

[00:01:11] So I have one arrow, whereas upstream I have three arrows, and then downstream I have two arrows. Right. So I can only handle one thing at a time in the CPU. So clearly, if I had more CPU; the demand for that is going to be three, so I actually need a CPU that can handle three arrows. And if I actually gave three arrows here at step number two, then I could have three things come in from the network and then three things processed by the CPU. And then look what happens: we get all the way to storing that information on the disk, and we can only handle two things. [00:01:54][43.5]

[00:01:55] So I actually have two bottlenecks. The first bottleneck we find is in the CPU. If we fix that, then we can go on; the next thing to fix would be the disk. And this concept, as far as I'm concerned, is the transference of resource inequality. Right. So you can actually take the unequal CPU, and once you solve for that, you're transferring that inequality to the next resource upstream or downstream, usually downstream, because we're passing all of that demand from the frontend. So that makes sense. Cool. So there are a couple of other fancy things you can learn about bottlenecks, like in fluid dynamics. [00:02:38][43.4]
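The arrow counts in this example (network 3, CPU 1, disk 2) and the transference of the bottleneck can be sketched in a few lines. This is a minimal illustration, not anything from the talk's slides; the stage names and capacities just mirror the diagram as described.

```python
# Each stage's capacity is how many "arrows" it can handle per unit of time.
# End-to-end throughput of a serial pipeline is capped by the slowest stage.

def throughput(capacities):
    """Throughput of a serial pipeline = the minimum stage capacity."""
    return min(capacities.values())

def bottleneck(capacities):
    """The stage with the smallest capacity is the current bottleneck."""
    return min(capacities, key=capacities.get)

stages = {"network": 3, "cpu": 1, "disk": 2}

print(bottleneck(stages), throughput(stages))   # cpu 1 -- CPU is the bottleneck

# Fix the CPU (give it three "arrows") and the inequality transfers downstream:
stages["cpu"] = 3
print(bottleneck(stages), throughput(stages))   # disk 2 -- now disk is the bottleneck
```

Fixing one unequal resource doesn't make the inequality disappear; it just moves the minimum to the next-slowest stage, which is exactly the two-bottleneck sequence described above.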

[00:02:39] So if any of you really want to go nuts: the solution to fluid dynamics problems typically involves calculating various properties of the fluids, such as velocity, pressure, density, and temperature, as functions of space and time. And here are all of the equations that you'll need to figure out how much flow comes from P1. We've got one arrow coming in, that's under volume 1, that goes into pressure 2 against the cylinder for volume 2. So you could move this cylinder back and forth and move that cylinder back and forth. [00:03:08][28.4]
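For reference (these are standard relations, not reproduced from the slide itself), the kind of equations being gestured at for incompressible flow between two cylinders would include the continuity and Bernoulli relations:

```latex
% Continuity: volume flow rate is conserved between cross-sections 1 and 2
A_1 v_1 = A_2 v_2

% Bernoulli (steady, incompressible, inviscid flow along a streamline):
P_1 + \tfrac{1}{2}\rho v_1^2 + \rho g h_1
  = P_2 + \tfrac{1}{2}\rho v_2^2 + \rho g h_2
```

The narrowest cross-section caps the flow rate, which is the same intuition as the one-arrow CPU: only so much volume can be pushed from one side to pressurize the next cylinder.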

[00:03:08] And you get an idea of how to calculate that there is a bottleneck, because we can only push so much volume from one side to put pressure on the next cylinder to create volume 2. And this is something I've never had to use in all of my twenty-five years of doing computer work, even though it's fascinating and interesting and I draw on the same concepts in my work. What I usually deal with is something that I would get from Raj Jain's performance analysis book or the Practical Performance Analyst book, and that's just looking at serial bottlenecks versus parallel bottlenecks. So this is a first example of a serial bottleneck, where we have the source making requests that come in a block to System A, and then System B becomes a bottleneck, and therefore System C over here would never get overloaded. Right. [00:04:05][57.1]

[00:04:06] So it's much like the network talking to the CPU talking to the disk; it's a very similar image. Since we're blocked on System B, System C would never get overloaded. And in parallel, you can see some other interesting things happen, where we have the same demand at System A and we've thrown hardware at the problem. We're going to give twice as much hardware, and we're still going to be bottlenecked on System B. But if we move the function for System C in parallel, now we can do step C without being bottlenecked. We've actually moved it out of the serial path, so B can be slow but the traffic can still get to C, depending on the precursors and such. But this is another way to visually understand serial bottlenecks versus parallel bottlenecks. [00:04:06][0.0]
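The serial-versus-parallel contrast can also be sketched with simple service rates. This is a rough illustration under the assumption that each system is characterized only by the requests per second it can complete; the A/B/C labels follow the diagrams described above.

```python
# Serial vs. parallel bottlenecks, modeling each system by a service rate
# (requests/sec it can complete).

def serial_throughput(rates):
    # In a serial path every request must pass through every system, so the
    # slowest one caps the whole chain -- and anything downstream of it
    # (like System C behind a slow System B) can never get overloaded.
    return min(rates)

def parallel_throughput(serial_rates, offloaded_rates):
    # Systems moved off the serial path each see the demand independently;
    # the remaining serial chain still governs its own path, but the
    # offloaded systems are no longer stuck behind the slow stage.
    return min(serial_rates), offloaded_rates

# Serial: A -> B -> C, with B the slow component.
print(serial_throughput([100, 20, 80]))        # 20: blocked on B, C never overloaded

# Parallel: C pulled out of the serial path; B can stay slow
# while traffic still reaches C at C's own rate.
chain, offloaded = parallel_throughput([100, 20], [80])
print(chain, offloaded)                        # 20 [80]
```

The point the example makes visually is captured here numerically: restructuring the path (moving C parallel to B) raises what C can deliver without touching B's capacity at all.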
