What an error, huh? One night some time ago (November 30th, 2010) I was in the middle of some physical-to-virtual (P2V) system migrations when I ran into this odd error:
"File *.vmdk is larger than the maximum size supported by the datastore."
I first verified that the datastore itself had enough free space, and it did: nearly 1 TB free for a system totaling a few hundred GB. Thinking this might just be a fluke, I ran the conversion again, only to end with the same result. Clearly I had a configuration or settings problem on my hands and needed to research the cause.
Further research let me determine that the cause was the block size with which I'd created the datastore initially. The default 1 MB block size only allows files of up to 456 GB on VMFS2 and 256 GB on VMFS3, and my system's disks clearly exceeded that. The specific block-size and file-size limits can be found in this VMware KB article: http://kb.vmware.com/kb/1003565
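To make the relationship concrete, here is a small sketch of the VMFS3 limits from that KB article as a lookup table. The function name is my own invention for illustration; the GB figures are the published nominal limits (VMware actually quotes them as the block size times 256K blocks, minus 512 bytes):

```python
# Nominal maximum file size on a VMFS3 datastore, keyed by its block size,
# per VMware KB 1003565. A datastore's block size is fixed at creation time.
VMFS3_MAX_FILE_SIZE_GB = {
    1: 256,    # 1 MB block size -> 256 GB max file (the default)
    2: 512,    # 2 MB -> 512 GB
    4: 1024,   # 4 MB -> 1 TB
    8: 2048,   # 8 MB -> 2 TB
}

def max_vmdk_size_gb(block_size_mb):
    """Largest VMDK (in GB) a VMFS3 datastore with this block size can hold."""
    return VMFS3_MAX_FILE_SIZE_GB[block_size_mb]
```

In my case the datastore had been created with the 1 MB default, so any single VMDK over 256 GB triggered the error regardless of how much free space remained.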
While this might seem like a straightforward fix, the problem is that the block size selected when a datastore is first created cannot be changed once the datastore has been provisioned. To change the block size, the datastore has to be removed and re-provisioned from scratch, wiping all data stored on it. I was able to work around this by moving all my data off the datastore, re-provisioning it with a larger block size, and then moving the data back. If that isn't possible, your virtual disk (and thus the partition size within your guest OS) cannot exceed 256 GB on a VMFS3 datastore with the default 1 MB block size.
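Since re-provisioning is a one-shot, destructive operation, it pays to pick the right block size up front. A minimal sketch, assuming the VMFS3 limits from the KB article above (the helper name is hypothetical):

```python
# (block size in MB, nominal max file size in GB) on VMFS3, per KB 1003565,
# ordered smallest block first.
VMFS3_LIMITS_GB = [(1, 256), (2, 512), (4, 1024), (8, 2048)]

def block_size_for_disk(largest_disk_gb):
    """Smallest VMFS3 block size (MB) whose file-size limit covers the
    largest virtual disk you plan to store on the datastore."""
    for block_mb, limit_gb in VMFS3_LIMITS_GB:
        if largest_disk_gb <= limit_gb:
            return block_mb
    raise ValueError("disk exceeds the VMFS3 2 TB file-size limit")
```

For the several-hundred-GB disks that tripped me up, this would have pointed at a 2 MB (or larger) block size before any data ever landed on the datastore.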
Unfortunately, this is the kind of problem you won't find until you experience it for the first time, and there is no in-place fix for it either way. Since it took me a little while to track down this information, I thought I would post it anyhow to hopefully help others who may be having the same trouble.