I saw a tidbit the other day about a cloud computing concept called Following the Moon. Apparently the concept came up sometime last year after a study was published espousing its merits as a cost-saving method for the enterprise. In a nutshell, Following the Moon (FTM) involves offloading processing to data centers where it's currently night, and where electricity is theoretically cheaper.
At the moment there are no variable-priced cloud computing services out there that shift pricing based on time of day. Amazon has demand-based pricing in some of its data centers for its cloud offerings, but as of yet nothing time-based. The theory is that at night the lack of sunlight reduces demand for electricity, since you don't need to cool your data center as much. Demand for server capacity is also generally higher when more users are connected (daytime). So the logic goes: pricing should drop when the temperature does.
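To make the idea concrete, here's a rough Python sketch of how a follow-the-moon scheduler might chase the cheap overnight window around the globe. The region names, UTC offsets, and the 10pm-6am window are all made up for illustration; no real provider exposes pricing like this today.

```python
from datetime import datetime, timezone, timedelta

# Hypothetical regions and rough UTC offsets -- illustrative only,
# not real provider regions or a real pricing API.
REGIONS = {
    "us-east": -5,
    "eu-west": 0,
    "ap-southeast": 8,
}

NIGHT_START, NIGHT_END = 22, 6  # assumed cheap window: 10pm-6am local time

def is_night(utc_offset_hours, now_utc=None):
    """True if the region's local clock is inside the overnight window."""
    now_utc = now_utc or datetime.now(timezone.utc)
    local_hour = (now_utc + timedelta(hours=utc_offset_hours)).hour
    return local_hour >= NIGHT_START or local_hour < NIGHT_END

def pick_night_region():
    """Return a region currently in its cheap overnight window, or None
    (with only three regions there are hours where nowhere qualifies)."""
    for region, offset in REGIONS.items():
        if is_night(offset):
            return region
    return None  # fall back to cheapest daytime region, retry later, etc.

print(pick_night_region())
```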
Many argue that this offloading of processing would increase latency and offset the savings in cost with an increase in time. In my opinion this is shortsighted: these people are thinking purely of on-demand, dynamic data needs, while I am thinking of big-dataset-crunching needs. Those needs are less dynamic but still require massive computing power. So let's say you need to crunch massive datasets: why not shift the computation to lower-power-cost or lower-demand areas as the day goes by? Especially for parallel/super-computing tasks, you are talking about instant savings if someone like Amazon were to step up and offer such a service. Say daytime processing costs $0.15/hr and night-time pricing drops to $0.10/hr. That $0.05/hr discount works out to about $36 per instance per month, so across a thousand instances you're talking about saving roughly $36,000 a month (assuming you can shift to night pricing continuously, i.e. three jumps around the globe each day).
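The back-of-the-envelope math, using the hypothetical $0.15/$0.10 rates above:

```python
DAY_RATE = 0.15      # $/instance-hour (hypothetical daytime rate)
NIGHT_RATE = 0.10    # $/instance-hour (hypothetical night rate)
HOURS_PER_MONTH = 24 * 30

def monthly_savings(instances, night_fraction=1.0):
    """Savings vs. all-daytime pricing when `night_fraction` of the
    month's hours run at the night rate."""
    return round(instances * HOURS_PER_MONTH
                 * (DAY_RATE - NIGHT_RATE) * night_fraction, 2)

print(monthly_savings(1_000))   # 36000.0 -- 1k instances, 24/7 at night rates
print(monthly_savings(10_000))  # 360000.0
```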
High-capacity users doing data modeling or genome processing could be running 10-20k instances (an instance is a virtual machine in the cloud) and saving hundreds of thousands a month this way. Amazon would also effectively impede anyone else from competing here, as the cost of building this from scratch would be astronomical (Amazon already has the geographic distribution of data centers plus the cost/price points). This could also be a huge boon for startups looking to shave a few expenses and lower their data acquisition costs.
Brian –
I think that one of the most practical things an IT architect can do at this point is think about the latency requirements of their applications, and try to design large chunks of work that are suitable for high-latency processing.
We may not be able to take advantage of this concept now, but we can start getting ready for it in the future.
D
That's sort of what I was thinking: distributing work to where latency isn't an issue, and reserving low-latency processing power for more time-sensitive computations (e.g. dynamic content on sites).
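Something like this minimal sketch is what I have in mind (the latency_sensitive flag and region names are just placeholders, not any real API):

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    latency_sensitive: bool  # True for e.g. dynamic page rendering

def route(job, night_region="ap-southeast"):
    """Keep latency-sensitive work on nearby capacity at full price;
    send batch work to whichever region has cheap night pricing."""
    if job.latency_sensitive:
        return "local-region"  # low latency, full price
    return night_region        # high latency tolerated, cheaper

jobs = [Job("render-homepage", True), Job("genome-batch", False)]
for job in jobs:
    print(f"{job.name} -> {route(job)}")
```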