The key to using memory efficiently is virtual memory management. Consider both
Windows and a UNIX/Linux operating system. Compare and contrast how each implements
virtual memory. Describe how each one handles page faults and page sizes, and how
each reconciles thrashing issues. Cite your sources.
Each topic below is covered for Windows first and then for Linux.
Implementation of virtual memory

Windows: For each process, the operating system maps only some addresses onto physical RAM. The addresses that are not mapped onto physical RAM are backed by the page file on disk (or by the executable and DLL files they were loaded from) and are brought into memory only when the process actually touches them.

Linux: Linux supports virtual memory, that is, using a disk as an extension of RAM so that the effective size of usable memory grows correspondingly. The kernel writes the contents of a currently unused block of memory to the hard disk so that the memory can be used for another purpose; when the original contents are needed again, they are read back into memory. This is made completely transparent to the user: programs running under Linux only see the larger amount of memory available and do not notice that parts of them reside on the disk from time to time. Of course, reading and writing the hard disk is slower (on the order of a thousand times slower) than using real memory, so the programs do not run as fast. The part of the hard disk that is used as virtual memory is called the swap space. Linux can use either a normal file in the file system or a separate partition for swap space. A swap partition is faster, but it is easier to change the size of a swap file (there is no need to repartition the whole hard disk and possibly install everything from scratch). When you know how much swap space you need, go for a swap partition; if you are uncertain, use a swap file first, run the system for a while to get a feel for how much swap you need, and then make a swap partition when you are confident about its size.
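As a small illustration of this transparency on the Linux side, the sketch below (not taken from the cited sources; the region size is arbitrary) maps a large anonymous region, touches every page, and then uses mincore() to ask the kernel how much of the region is currently resident in physical RAM. Under memory pressure, the non-resident remainder lives in swap without the program doing anything special.

```c
/* Minimal sketch (Linux, not from the cited sources): the kernel pages an
 * anonymous mapping in and out transparently; mincore() reports how much of
 * it is currently resident in physical RAM. The 256 MiB size is arbitrary. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t page   = (size_t)sysconf(_SC_PAGESIZE);
    size_t length = 256UL * 1024 * 1024;          /* 256 MiB, arbitrary */
    size_t npages = length / page;

    unsigned char *region = mmap(NULL, length, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch every page so the kernel must back it with a physical frame
     * (or, under memory pressure, push older pages out to swap). */
    for (size_t i = 0; i < length; i += page)
        region[i] = 1;

    /* Ask the kernel which of our pages are resident right now. */
    unsigned char *vec = malloc(npages);
    if (vec && mincore(region, length, vec) == 0) {
        size_t resident = 0;
        for (size_t i = 0; i < npages; i++)
            resident += vec[i] & 1;
        printf("%zu of %zu pages resident in RAM\n", resident, npages);
    }
    free(vec);
    munmap(region, length);
    return 0;
}
```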
Page Faults

Windows: If a page is not mapped onto physical RAM, Windows NT marks the page as not present. Any access to this page causes a page fault, and the page fault handler brings the page into memory. To be more specific, when the page contains DLL code or executable module code, it is read back in from the DLL or executable file itself; when the page contains data, it is brought in from the swap file. Windows NT needs to keep track of free physical RAM so that it can allocate space for the pages it brings in. This information is maintained in a kernel data structure called the Page Frame Database (PFD). The PFD also maintains a FIFO list of in-memory pages so that it can decide which pages to move out when physical memory runs short. If there is pressure on space in RAM, then parts of code and data that have not been used recently are discarded or written out to the page file. The page file can thus be seen as an overflow area that makes the RAM behave as if it were larger than it really is. When Windows NT addresses an invalid page (that is, during the course of execution it touches a page marked not present), a page fault results in switching immediately to the pager. The pager then loads the page into memory and, upon return, the processor re-executes the instruction that caused the fault. This is a relatively fast process, but accumulating many page faults slows a program down noticeably.

Linux: A process has a memory map (page table) that contains page table entries for each page assigned to it. Memory is managed and assigned to processes in chunks called pages. Initially a process has no pages mapped, and any access to memory areas not mapped by the page table results in a page fault. The operating system must provide a page fault handler that deals with these faults. On startup a process must set up its own memory areas and therefore generates large numbers of page faults; in order to allow access to pages of memory never accessed before, the page fault handler must allocate a new (zeroed) page and enter it into the page table. This process is critical: page fault performance determines how quickly a process can start up and grow its memory. Three means of optimizing page fault performance are: first, one may avoid the use of the page table lock through atomic operations on page table entries; second, the page fault handler may analyze the access pattern of the process and fault in several pages at once; finally, if pre-zeroed pages are available, the page fault handler may use one directly instead of clearing a page while the process waits.
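The behavior described above, one fault for each page touched for the first time, can be observed from user space. The sketch below (Linux/glibc, not from the cited sources; the mapping size is arbitrary) counts the minor page faults reported by getrusage() while it touches a freshly mapped region; faults that require disk I/O would show up in ru_majflt instead.

```c
/* Minimal sketch (Linux/glibc, not from the cited sources): first-touch of
 * freshly mapped pages triggers minor page faults, reported via ru_minflt. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/resource.h>

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    size_t page   = (size_t)sysconf(_SC_PAGESIZE);
    size_t length = 64UL * 1024 * 1024;            /* 64 MiB, arbitrary */

    unsigned char *buf = mmap(NULL, length, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    long before = minor_faults();
    for (size_t i = 0; i < length; i += page)      /* first touch: roughly one fault per page */
        buf[i] = 1;
    long after = minor_faults();

    printf("minor page faults while touching %zu pages: %ld\n",
           length / page, after - before);
    munmap(buf, length);
    return 0;
}
```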
Page Sizes

Windows: Windows will expand a page file that starts out too small and may shrink it again when the extra space is no longer needed. This gives all the benefits claimed for a "fixed" page file, as well as providing for contingencies, like unexpectedly opening a very large file; there is no downside in having potential space available. For any given workload, the total need for virtual addresses will not vary much with the amount of RAM installed; therefore, in a machine with a small RAM, the extra amount represented by the page file needs to be correspondingly larger. Unfortunately, the default settings for system management of the file do not take this into account. How big a file will turn out to be needed depends very much on your workload: simple word processing and e-mail may need very little, but large graphics work may need a great deal.

Linux: Most modern operating systems have their main memory divided into pages; this allows better utilization of memory. A page is a fixed-length block of main memory. To make the translation from virtual to physical addresses easier, virtual and physical memory are divided into these pages. The pages are all the same size (they need not be, but if they were not, the system would be very hard to administer). Linux on Alpha AXP systems uses 8 Kbyte pages and on Intel x86 systems it uses 4 Kbyte pages. In this paged model, a virtual address is composed of two parts: an offset and a virtual page frame number. If the page size is 4 Kbytes, bits 11:0 of the virtual address contain the offset and bits 12 and above are the virtual page frame number. Each time the processor encounters a virtual address it must extract the offset and the virtual page frame number to translate it. The kernel swaps and allocates memory in units of pages.
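To make the offset/page-number split concrete, here is a small sketch (not from the cited sources) that splits the virtual address of a local variable using the page size reported by sysconf(). With 4 Kbyte pages the offset is exactly bits 11:0, as described above.

```c
/* Minimal sketch (not from the cited sources): splitting a virtual address
 * into a virtual page number and an offset within the page. */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);   /* e.g. 4096 on Intel x86, 8192 on Alpha AXP */
    int x = 42;
    uintptr_t addr = (uintptr_t)&x;

    uintptr_t offset = addr % (uintptr_t)page;   /* bits 11:0 for 4 Kbyte pages */
    uintptr_t vpn    = addr / (uintptr_t)page;   /* virtual page frame number */

    printf("page size      : %ld bytes\n", page);
    printf("virtual address: 0x%lx\n", (unsigned long)addr);
    printf("page number    : 0x%lx\n", (unsigned long)vpn);
    printf("offset in page : 0x%lx\n", (unsigned long)offset);
    return 0;
}
```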
Thrashing issues

Windows: The problem of many page faults occurring in a short time is called thrashing. A thrashing process spends most of its time paging rather than doing useful work, because almost every memory access forces the pager to bring another page in from disk. Windows NT relieves the pressure using the FIFO information kept in the PFD: the pages that have been in memory longest are written out to the page file so that the faulting process can get the frames it needs, at the cost of extra disk traffic. If the working sets of the running processes simply do not fit in RAM, performance degrades sharply and the only real remedies are to run fewer programs at once or to add memory.

Linux: The Linux buffer cache does not actually buffer files but blocks, which are the smallest units of disk I/O; this way, directories, super blocks, other filesystem bookkeeping data, and non-filesystem disks are cached as well. The effectiveness of a cache is primarily decided by its size. A small cache is next to useless: it will hold so little data that all cached data is flushed from the cache before it is reused. The critical size depends on how much data is read and written, and how often the same data is accessed again; the only way to know is to experiment. If the cache is of a fixed size, it is not very good to have it too big either, because that might make free memory too small and cause swapping, which is also slow. To make the most efficient use of real memory, Linux automatically uses all free RAM for the buffer cache, but also shrinks the cache when programs need more memory, which helps keep the system from swapping (and thus thrashing) unnecessarily.
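One practical way for a program to help avoid this kind of paging pressure is to tell the kernel about its access pattern. The sketch below (Linux, not from the cited sources; the file path is hypothetical) streams once through a memory-mapped file, advising the kernel with madvise() so the pages it has finished with can be reclaimed instead of pushing more useful data out of memory.

```c
/* Minimal sketch (Linux, not from the cited sources): madvise() hints reduce
 * cache and paging pressure for a one-pass read. The file path is hypothetical. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "/tmp/bigfile.dat";       /* hypothetical input file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    unsigned char *data = mmap(NULL, (size_t)st.st_size, PROT_READ,
                               MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* We read the file once, front to back: tell the kernel so it can
     * read ahead aggressively and drop pages behind us. */
    madvise(data, (size_t)st.st_size, MADV_SEQUENTIAL);

    unsigned long checksum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        checksum += data[i];

    /* Done with the data: let the kernel reclaim these pages now instead of
     * evicting something more useful later. */
    madvise(data, (size_t)st.st_size, MADV_DONTNEED);

    printf("checksum: %lu\n", checksum);
    munmap(data, (size_t)st.st_size);
    close(fd);
    return 0;
}
```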
Sources
http://www.windowsitlibrary.com/Content/356/04/3.html
http://www.tldp.org/LDP/sag/html/vm-intro.html
http://gentoo-wiki.com/FAQ_Linux_Memory_Management#Virtual_Memory_Area
http://oss.sgi.com/projects/page_fault_performance/#pagefault
http://www.linuxhq.com/guides/TLK/mm/memory.html