Dear community,
this may be a pretty odd question, so before getting to it, here is some background on the idea behind it:
We are currently designing simulation and test scenarios for our internal programs and thought about virtually overclocking one of the VMware sessions we use (Windows XP running on a Windows 7 host) to get test results faster than we would when sticking to realtime processing. Our applications process data in realtime: data is cyclically read from a file, preprocessed by a Windows application, and forwarded to another application over a DCOM interface. An additional program provides the demo data; it creates the file and adds records comparable to what we would normally get from our real servers.
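To make the setup clearer, here is a rough sketch of what the demo data provider does (illustrative Python, all names made up; the real program is a native Windows application):

    # Simplified sketch of our demo data provider (names are made up).
    # The point: it paces itself against the wall clock, so a test run
    # takes exactly as long as the data it feeds in.
    import time

    CYCLE_SECONDS = 1.0  # one record per cycle, like our real servers deliver

    def produce_demo_data(path, cycles):
        for cycle in range(cycles):
            with open(path, "a") as f:
                f.write("%f;demo_record_%d\n" % (time.time(), cycle))
            time.sleep(CYCLE_SECONDS)  # realtime pacing is what makes tests slow

    produce_demo_data("demo_data.txt", 60)  # one minute of data takes one minute

If the guest clock ran, say, five times faster, the same run should finish in a fifth of the time without touching the applications themselves.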
The idea of using some kind of time-lapse hack arose because we once had problems with a VMware guest's Windows clock not staying synchronous with the host's Linux clock: about 10 minutes of difference were showing up in the Windows session every hour.
Time to get to the point: is there some way to trick a Windows VMware guest into running several times faster than realtime?
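For reference, the closest thing we have found so far is the apparent-CPU-speed trick from VMware's "Timekeeping in VMware Virtual Machines" whitepaper: lying to the guest about the host's TSC frequency via .vmx options. We have not tried it yet and do not know whether our VMware version still honors these options; the values below are only an example, assuming a host TSC of roughly 3 GHz:

    # keep VMware Tools from re-syncing the guest clock back to the host
    tools.syncTime = "FALSE"
    # claim 1.5 GHz on a ~3 GHz host so apparent time runs ~2x (example values)
    host.cpukHz = "1500000"
    host.noTSC = "TRUE"
    ptsc.noTSC = "TRUE"

If we understand the whitepaper correctly, the guest would then count about two apparent seconds per real second, since the TSC still ticks at the real rate but is divided by the claimed frequency. Is this the right direction, or is there a cleaner way?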
Thanks in advance for your ideas and knowledge!
Regards, Sascha