Hi, based on the current implementation, once every process has at least one page in RAM and RAM is full, the number of pages each process holds effectively gets "locked in", because the local LRU policy always evicts a page belonging to the faulting process. So each process keeps evicting its own pages, and the per-process page counts in RAM stay fixed. If the goal is to let processes use more or less RAM dynamically based on their needs and activity, a global LRU policy would seem to be the better option. Isn't that the case in real-world memory management?
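To make it concrete, here is a rough sketch of how I understand the two victim-selection policies (this isn't the assignment's code; Frame, pick_victim_local and pick_victim_global are just names I made up for illustration):

```c
#include <stddef.h>

typedef struct {
    int owner_pid;          /* process that owns the page in this frame */
    unsigned long last_use; /* timestamp of the most recent reference   */
} Frame;

/* Local LRU: only frames owned by the faulting process are candidates,
 * so once RAM is full the per-process frame count never changes. */
static int pick_victim_local(const Frame *frames, size_t n, int faulting_pid) {
    int victim = -1;
    for (size_t i = 0; i < n; i++) {
        if (frames[i].owner_pid != faulting_pid)
            continue;
        if (victim < 0 || frames[i].last_use < frames[victim].last_use)
            victim = (int)i;
    }
    return victim; /* -1 if the process owns no frames yet */
}

/* Global LRU: every frame is a candidate, so an idle process gradually
 * loses frames to a more active one. */
static int pick_victim_global(const Frame *frames, size_t n) {
    int victim = 0;
    for (size_t i = 1; i < n; i++) {
        if (frames[i].last_use < frames[victim].last_use)
            victim = (int)i;
    }
    return victim;
}
```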
Simulations are a model or abstraction of the "real world"; they do not match what is done in practice 100%. How closely a simulation matches the real world depends on the level of detail required by the specification. With any simulation you need to be aware of the assumptions made when it was created in order to interpret the results accurately. In this case we are assuming that no process has high memory requirements.
But yes, in practice a completely local algorithm isn't workable because processes don't all have the same memory requirements, so global LRU is usually better (also, a page that is the least recently used within its own process may still have been referenced much more recently than pages belonging to other processes). However, using a global policy doesn't necessarily mean each process is given the right amount of memory for its needs. You still need another algorithm that can approximate how many frames a process needs, such as Page Fault Frequency.
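As a rough sketch of the Page Fault Frequency idea (the struct fields and thresholds below are illustrative assumptions, not a specific implementation): each process's fault rate is measured over a window, and its frame quota grows or shrinks depending on whether that rate crosses an upper or lower threshold.

```c
#include <stddef.h>

typedef struct {
    int    pid;
    size_t frames_allocated; /* current frame quota                 */
    double fault_rate;       /* measured faults per 1000 references */
} ProcInfo;

#define PFF_UPPER 50.0 /* assumed threshold: faulting too often  */
#define PFF_LOWER 10.0 /* assumed threshold: faulting very rarely */

static void pff_adjust(ProcInfo *p, size_t *free_frames) {
    if (p->fault_rate > PFF_UPPER && *free_frames > 0) {
        p->frames_allocated++;   /* thrashing: grant another frame */
        (*free_frames)--;
    } else if (p->fault_rate < PFF_LOWER && p->frames_allocated > 1) {
        p->frames_allocated--;   /* under-using: release a frame   */
        (*free_frames)++;
    }
}
```

The point is that the eviction policy (which page to throw out) and the allocation policy (how many frames each process deserves) are separate decisions, and a global LRU only addresses the first one.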