Are pages loaded into RAM cyclically, or on a lowest-index basis?
For example, if the page table of process 0 looks like this: [99, 2, 4, 99], this means that pages 0 and 3 are on disk while pages 1 and 2 are in RAM.
This may happen if pages 0, 1, and 2 were loaded into RAM and then page 0 was swapped out for another process's page. In this case, if the process is requested again, would you reload page 0 or would you load page 3? What would the next page after that be?
Also, once the RAM is full (all 8 frames occupied by process pages) it will always stay full, as we have no way to evict a page without replacing it. Any new process request would replace a page based on LRU.
Say that the RAM frames look like this: [2,2,1,0,0,0,3,0], so process 0 has all 4 of its pages in RAM, process 2 has 2 pages in RAM, and processes 1 and 3 each have 1 page in RAM.
With the LRU policy, won't this mean that process 0 will always occupy 4 frames in RAM? Since every process has at least 1 page in RAM, we always end up using a local LRU policy, and hence the number of frames a process occupies stays constant. Is this how it's supposed to work?
Are pages loaded into RAM cyclically, or on a lowest-index basis?
For example, if the page table of process 0 looks like this: [99, 2, 4, 99], this means that pages 0 and 3 are on disk while pages 1 and 2 are in RAM.
This may happen if pages 0, 1, and 2 were loaded into RAM and then page 0 was swapped out for another process's page. In this case, if the process is requested again, would you reload page 0 or would you load page 3?
Cyclically, so it would be Page 3.
What would the next page after that be?
Amitava said you can only load the process once, so there would be no next page.
Also, once the RAM is full (all 8 frames occupied by process pages) it will always stay full, as we have no way to evict a page without replacing it. Any new process request would replace a page based on LRU.
Say that the RAM frames look like this: [2,2,1,0,0,0,3,0], so process 0 has all 4 of its pages in RAM, process 2 has 2 pages in RAM, and processes 1 and 3 each have 1 page in RAM.
With the LRU policy, won't this mean that process 0 will always occupy 4 frames in RAM? Since every process has at least 1 page in RAM, we always end up using a local LRU policy, and hence the number of frames a process occupies stays constant. Is this how it's supposed to work?
Yes.
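
To make the cyclic rule concrete, here is a minimal Python sketch (illustrative only, not the simulator's code; the ON_DISK constant and function name are assumptions): the next page to load is found by scanning forward from the last loaded page, wrapping around, and picking the first page that is still on disk.

ON_DISK = 99  # sentinel meaning "this page is not in RAM"

def next_page_to_load(page_table, last_loaded):
    """Return the next on-disk page after last_loaded, scanning cyclically,
    or None if every page of the process is already in RAM."""
    n = len(page_table)
    for offset in range(1, n + 1):
        candidate = (last_loaded + offset) % n
        if page_table[candidate] == ON_DISK:
            return candidate
    return None

# Process 0's table from the question: pages 0 and 3 are on disk.
# Pages 0, 1, 2 were loaded previously (page 2 most recently), so scanning
# cyclically from page 2 picks page 3, not page 0.
print(next_page_to_load([ON_DISK, 2, 4, ON_DISK], last_loaded=2))  # -> 3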
Hi, based on the current implementation, once every process has at least one page in RAM and the RAM is full, the number of pages for each process effectively gets "locked in", because the local LRU will only evict a page belonging to the faulting process. This means that if each process has at least one page in RAM, it will keep evicting its own pages, so each process keeps a fixed number of pages in RAM. If the goal is to allow processes to use more or less RAM dynamically based on their needs and activity, a global LRU policy would be the better option. Isn't that the case in real-world memory management?
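
To illustrate the "lock-in" argument above, here is a rough sketch of a local LRU victim choice (illustrative only; the per-frame page numbers and timestamps are made up): because the victim is always one of the faulting process's own frames, the number of frames each process owns never changes once RAM is full.

def evict_local_lru(frames, lru_time, pid):
    """Local policy: the victim is the least recently used frame
    owned by the faulting process itself."""
    own = [i for i, (p, _) in enumerate(frames) if p == pid]
    return min(own, key=lambda i: lru_time[i])

# RAM layout from the thread, as (process, page) pairs per frame.
frames = [(2, 0), (2, 1), (1, 0), (0, 0), (0, 1), (0, 2), (3, 0), (0, 3)]
lru_time = [5, 9, 2, 1, 8, 3, 7, 4]  # hypothetical last-access times

# Process 0 faults: the victim already belongs to process 0, so after the
# replacement process 0 still owns exactly 4 frames.
print(evict_local_lru(frames, lru_time, pid=0))  # -> frame 3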
Simulations are models or abstractions of the "real world"; they do not 100% match what is done in practice. How closely a simulation matches the real world depends on the level of detail necessary (as per the specification). With any simulation you need to be aware of the assumptions made when it was created in order to get accurate results. In this case we are assuming that no process has high memory requirements.
But yes, in practice a completely local algorithm doesn't work well, as not all processes have the same memory requirements, so global LRU is usually better (also, a page that is the least recently used locally may still be much more recently used than other pages globally). However, just because you use a global policy doesn't necessarily mean each process will be given the right amount of memory for its needs. You need another algorithm that can approximate how many frames a process needs, such as Page Fault Frequency.
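
As a rough illustration of those two ideas (a sketch under the thread's assumptions, not the simulator's code): a global LRU ignores page ownership when picking a victim, and a Page-Fault-Frequency-style rule then adjusts how many frames each process is allowed to hold based on how often it faults. The thresholds below are made up.

def evict_global_lru(frames, lru_time):
    """Global policy: evict the least recently used frame, regardless of
    which process owns it."""
    return min(range(len(frames)), key=lambda i: lru_time[i])

def adjust_allocation(target_frames, pid, fault_rate, low=0.2, high=0.6):
    """Toy PFF rule: a frequently faulting process gets one more frame,
    a rarely faulting one gives one up (thresholds are arbitrary)."""
    if fault_rate > high:
        target_frames[pid] += 1
    elif fault_rate < low and target_frames[pid] > 1:
        target_frames[pid] -= 1
    return target_frames

frames = [(2, 0), (2, 1), (1, 0), (0, 0), (0, 1), (0, 2), (3, 0), (0, 3)]
lru_time = [5, 9, 2, 1, 8, 3, 7, 4]

# Globally, the victim can belong to any process (here process 0), so a
# busy process can grow its share of RAM while an idle one shrinks.
print(evict_global_lru(frames, lru_time))                          # -> frame 3
print(adjust_allocation({0: 4, 1: 1, 2: 2, 3: 1}, pid=1, fault_rate=0.8))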