To share a folder on a Windows 7 machine, you first need to allow file sharing.
You can do this under “Network and Sharing Center”, and then under “Change advanced sharing settings”.
So far, so good…
But if you don’t want users on the XP devices to get a popup asking for credentials, you will also need to disable “Password protected sharing”, again under the advanced sharing settings.
My issue was that I switched this setting from “On” (its default) to “Off” and saved it.
I still got a popup on the XP workstation, and a reboot did not help.
When I went back to take a look at the “Password protected sharing” setting I had just changed, it was set back to “On” again! Whatever I did, it always reverted to “On”… I couldn’t get it to stick to “Off”.
The cause (and the solution) was fairly simple: the Guest account on my Windows 7 machine was disabled, but it also had a password configured. Once the Guest account’s password was cleared, the setting finally stuck to “Off”.
Over the last few months I have been reading and hearing more and more that Memory Overcommit and Dynamic Memory do the same thing, or even that they are the same thing.
This is NOT TRUE! This post is intended to clarify this once and for all.
Memory ballooning kernel driver
Both Memory Overcommit and Dynamic Memory make use of this technology, so yes, there is common ground between them: in the end, both give the VM the ability to make use of more memory.
What ballooning does is offer additional memory addresses that the kernel of the OS inside the VM can use. So you are not giving the VM more memory; you are giving the VM a way to make use of more memory.
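As a rough mental model, here is a toy sketch of a balloon driver. Everything in it (the class, the method names, the numbers) is my own illustration, not a real driver API: inflating the balloon takes addresses away from the guest so the host can reclaim them, and deflating hands them back.

```python
# Toy model of a memory balloon inside a guest OS (hypothetical, simplified).
class Balloon:
    def __init__(self, guest_ram_mb: int):
        self.guest_ram_mb = guest_ram_mb
        self.ballooned_mb = 0  # memory the driver has "pinned" for the host

    def inflate(self, mb: int) -> None:
        # The driver allocates pages inside the guest; the host reclaims them.
        self.ballooned_mb += mb

    def deflate(self, mb: int) -> None:
        # The driver frees pages; the guest kernel can use those addresses again.
        self.ballooned_mb -= min(mb, self.ballooned_mb)

    def usable_mb(self) -> int:
        return self.guest_ram_mb - self.ballooned_mb

b = Balloon(guest_ram_mb=1024)
b.inflate(256)
print(b.usable_mb())  # 768: the guest temporarily has less usable memory
b.deflate(256)
print(b.usable_mb())  # 1024: deflating gives the addresses back to the guest
```

The key point the sketch illustrates: the guest’s total address space never changes; only how much of it the guest may actually use does.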
There are multiple technologies that enable a virtual machine to make use of more memory when it is requested. One of those technologies is named Memory Overcommit.
This technology makes use of an underlying technique named “Idle Memory Tax”, or IMT for short. The first time I heard this term I had no clue what it was, so I started investigating.
Let’s say you have a server with 13 GB of RAM, of which 1 GB is reserved for the host, which means 12 GB of RAM is available for virtual machines. This 12 GB of RAM consists of (12 * 1024) 12,288 shares of memory.
What IMT basically does is assign a higher value to idle (free) memory shares. This is possible because Memory Overcommit treats every memory share as a separate resource and can exchange those shares between virtual machines.
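The share arithmetic from this example can be written out (assuming, as the example does, one 1 MB share per megabyte of VM-available memory):

```python
host_ram_mb = 13 * 1024  # 13 GB host
reserved_mb = 1 * 1024   # 1 GB reserved for the host itself
vm_ram_mb = host_ram_mb - reserved_mb  # 12 GB left for virtual machines
shares = vm_ram_mb       # one share per MB: 12 * 1024 = 12,288 shares
print(shares)            # 12288
```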
Furthermore, Memory Overcommit rests on 3 pillars:
1. Assign more memory to the virtual machines than the host actually has.
2. Assign the same memory share to multiple virtual machines, matched by a hash.
3. Compress memory on the host by storing duplicate memory blocks only once, since their content is identical.
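Pillars 2 and 3 can be sketched with a tiny content-addressed store. This is a toy model of my own, not a hypervisor API: identical blocks are stored once, and every VM that writes the same content just gets a mapping to that single copy.

```python
# Toy model of memory-block deduplication (hypothetical, simplified).
import hashlib
from collections import defaultdict

store = {}                    # content hash -> the single stored block
mappings = defaultdict(list)  # content hash -> VMs mapping that block

def write_block(vm: str, content: bytes) -> None:
    key = hashlib.sha256(content).hexdigest()
    store[key] = content      # a duplicate overwrites identical data...
    mappings[key].append(vm)  # ...but adds one more mapping to it

write_block("vm1", b"\x00" * 4096)  # vm1 writes a page of zeros
write_block("vm2", b"\x00" * 4096)  # vm2 writes the exact same content
print(len(store))                   # 1: only one physical copy is kept
```

Two virtual machines now “see” the same physical block, which is exactly how one share ends up allocated to two VMs at once.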
What is the end result?
When a virtual machine asks for more memory, Memory Overcommit looks for a different virtual machine that has memory shares it does not use at the moment. Those shares are re-allocated to the virtual machine that asked for more memory. So in fact 1 memory share is allocated to 2 virtual machines, and therefore both are able to see it.
Suppose you have a host with 6 GB of RAM, 4 virtual machines with 1 GB of RAM each, and Memory Overcommit set to provide 1 GB of extra RAM to each virtual machine when needed.
When all 4 virtual machines ask for more memory because of an elevated workload, Memory Overcommit will provide it, when available, as requested. That adds up to (4 VMs * 1 GB RAM) + (4 VMs * 1 GB Memory Overcommit RAM) = 8 GB of RAM.
This should be impossible, since there is only 6 GB of RAM in the host. Hence the name “overcommit”: you promise more than you are able to provide.
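The arithmetic behind this example, written out:

```python
host_gb = 6
vms = 4
assigned_gb = 1    # RAM assigned to each VM
overcommit_gb = 1  # extra RAM promised to each VM when needed
promised_gb = vms * (assigned_gb + overcommit_gb)
print(promised_gb)            # 8 GB promised in total
print(promised_gb > host_gb)  # True: more is promised than the host has
```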
The added value?
When a virtual machine requires more memory and other virtual machines have memory “to spare”, Memory Overcommit enables that virtual machine to make use of the available memory inside those other virtual machines.
Another technology that enables a virtual machine to make use of more memory when requested is Dynamic Memory. Dynamic Memory also treats memory shares as separate resources to provide to virtual machines, but the big difference is that Dynamic Memory hands out each memory share only once: to one virtual machine at a time.
How is this possible, you ask? Every virtual machine requires a minimal amount of memory to work properly, and you can configure this per virtual machine. Because you can give them less memory by default, more memory is left unused on the host. This free memory is assigned to one big “pool”: all the free memory on the host (minus a small amount reserved for the host itself) belongs to this pool.
Furthermore, Dynamic Memory rests on 3 pillars:
1. Provide more memory when a virtual machine asks for it.
2. Every memory share is unique and can only be assigned to one virtual machine at a time.
3. The sum of all assigned memory can never exceed the amount the host physically has, because every memory share is unique and is only assigned once.
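These pillars can be sketched as a toy pool in which every megabyte is handed out at most once. The class and numbers are my own illustration, not the Hyper-V implementation:

```python
# Toy model of a Dynamic Memory pool (hypothetical, simplified).
class DynamicMemoryPool:
    def __init__(self, free_mb: int):
        self.free_mb = free_mb

    def request(self, mb: int) -> int:
        # Never grant more than the pool physically holds (pillar 3).
        granted = min(mb, self.free_mb)
        self.free_mb -= granted
        return granted

    def release(self, mb: int) -> None:
        # Idle memory goes back to the pool for other VMs to use.
        self.free_mb += mb

pool = DynamicMemoryPool(free_mb=2048)  # e.g. a 2 GB pool
print(pool.request(1024))  # 1024: fully granted
print(pool.request(2048))  # 1024: only what is actually left
print(pool.free_mb)        # 0: nothing is ever promised twice
```

Contrast this with the overcommit example earlier: here a request can come back only partially granted, but the sum of grants can never exceed the pool.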
What is the end result?
When a virtual machine asks for more memory, Dynamic Memory looks at the host for free memory shares in the pool. When they are available, they are assigned to the virtual machine. When the virtual machine has not used the previously assigned memory for some time, the memory shares are returned to the pool so other virtual machines can make use of them.
Suppose you have a host with 6 GB of RAM, 4 virtual machines with 1 GB of RAM each, and Dynamic Memory configured to provide each virtual machine a maximum of 2 GB of RAM.
Yes, this means a total of (4 VMs * 1 GB) = 4 GB! That leaves 2 GB of RAM in the “pool” on the host to provide dynamically to the virtual machines when requested.
Compared with assigning every virtual machine its 2 GB maximum up front (which would fit only 3 virtual machines in 6 GB), you can run 1 extra virtual machine on this host. Going from 3 to 4 virtual machines is an increase in virtual machine density of 33%!
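The density arithmetic from this example, written out (the 3-VM baseline assumes each virtual machine would otherwise get its 2 GB maximum up front):

```python
host_gb = 6
max_per_vm_gb = 2
static_vms = host_gb // max_per_vm_gb  # 3 VMs fit if each gets 2 GB up front

startup_gb = 1
dynamic_vms = 4                        # the example: 4 VMs at 1 GB startup
pool_gb = host_gb - dynamic_vms * startup_gb

print(dynamic_vms - static_vms)        # 1 extra virtual machine
print(pool_gb)                         # 2 GB left in the shared pool
print(round((dynamic_vms - static_vms) / static_vms * 100))  # 33 (% gain)
```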
The added value?
The situation I used as an example for Dynamic Memory is a standard setup, with no specific fine-tuning or anything like that, so just imagine what would happen when you would…
For example, in my demo environment you will encounter the following:
This is an enormous increase in density of your virtual machines!
What makes the difference?
Trusting the OS within the VM.
Memory Overcommit does not trust the virtual machine. What I mean by that is that Memory Overcommit looks at the information available on the host, and not inside the virtual machine. Dynamic Memory, on the other hand, does trust the virtual machine and gets the required information from inside the virtual machines, using the Dynamic Memory Virtual Service Consumer driver that is installed along with the Integration Services of Hyper-V.
This allows Dynamic Memory to communicate with the operating system within the virtual machine. Memory added, memory removed: by using this integration, Dynamic Memory knows what to do when an action is required.
In simple terms, Dynamic Memory gets its information directly from the source, while Memory Overcommit gets it from an intermediary.
For several years now, companies have been using Memory Overcommit in their production environments, so it is a “proven technology”.
Dynamic Memory, however, is a rather new technology that is now getting the chance to prove itself. I believe that, as of Service Pack 1 for Windows Server 2008 R2, it will succeed in that.
Second Level Paging.
This underlying technology is used by Memory Overcommit, but not by Dynamic Memory.
To understand Second Level Paging, we must first understand the concept of “paging” itself. Leaving virtualization aside for a moment, paging is the extension of physical memory with virtual memory, which is basically a paging file on the hard disk.
When the physical memory is all used, the operating system can fall back to virtual memory.
But what about virtualized environments? Because that is the place where both Memory Overcommit and Dynamic Memory are used…
When you make a schematic drawing of a virtualized environment, you get the following:
When the total amount of extra memory assigned to the virtual machines exceeds the amount of memory in the physical machine, we speak of a concept named “oversubscription”.
The result is that when the virtual machines see that the host has no more available memory, they start swapping to virtual memory on disk.
Second Level Paging gives the hypervisor the ability to let the virtual machines swap to virtual memory on the host, instead of inside the virtual machine.
Yes, this is slower, and it can have a negative effect on performance.
There is also a risk that Second Level Paging swaps memory blocks to virtual memory on the host that, for performance reasons, you would want to keep inside the virtual machine. Because Memory Overcommit does not trust the virtual machine, it does not know which memory blocks to swap and which not to; it cannot tell the difference.
To avoid situations that could have such a negative effect on performance, Dynamic Memory does not make use of Second Level Paging; all “swapping” to virtual memory is done inside the virtual machine.
With both Memory Overcommit and Dynamic Memory, the best practice is to give the virtual machines enough memory to avoid immediate “swapping” to virtual memory.
The reason? Physical memory performs far better than virtual memory, which is basically the hard disk.
Which of these 2 technologies is the best is impossible to say yet.
My personal preference should be obvious by now: Dynamic Memory.
Both technologies accomplish the same thing if you only look at the fact that when a virtual machine requires more memory, it is provided.
Memory Overcommit has the advantage of being a “proven technology”, but the disadvantage that it cannot communicate with the kernel of the operating system inside the virtual machines.
Dynamic Memory has the advantage that there is no risk of memory blocks being swapped to the host by Second Level Paging; all swapping is done within the virtual machine.
Dynamic Memory aims to use resources more efficiently at the datacenter and host level, instead of at the host and VM level as Memory Overcommit does.
What I furthermore see is that, with Dynamic Memory, you can size your virtual machines based on a baseline instead of on peaks.
Eventually you will be able to run more virtual machines on fewer hosts, which enables you to shut obsolete hosts down when they are not needed: green IT.
And because Dynamic Memory does not make use of Second Level Paging, and provides memory to virtual machines from the pool when they ask for it, it does not have a negative impact on performance. After all, with too few resources (memory included), performance is lost.