Solving the serious energy problems of the modern data center goes hand in hand with today's and tomorrow's big business of data centers and cloud computing

One of the fastest-growing markets for invested capital

Data center energy efficiency has become increasingly important to IT departments. With those departments footing the bill for rising energy costs, reducing usage at the server level can help lower energy bills. Sourcerers Energy (SGE) explores how to achieve much greater efficiency in the data center through a wide range of fixes, creating a win-win situation for data centers, our investors, and our environment.


Virtualization of software and servers enables more effective, automated control of business processes. The on-demand deployment model depends on the implementation of cloud computing, and the ability to deploy virtual application images on any platform at any time has increased significantly. Business software-as-a-service (SaaS) applications and cloud computing models have matured, and adoption has become a question for every IT department. Private cloud systems provide security, fast response times, and service availability.

Applications, platforms, and infrastructure are evolving separately. Software as a service (SaaS) is the most widely known of these computing models; platform as a service (PaaS) and infrastructure as a service (IaaS) complement SaaS as compelling aspects of cloud computing and infrastructure services. An organization’s application development team and application portfolio need to be managed as an integral part of the IT infrastructure, yet they are generally managed on an application-by-application basis. Applications represent a major source of IT value and a large IT cost component.

Markets depend on virtualization to make information technology delivery a utility. On-demand systems scale to meet the needs of users, and users pay only for the capacity they use. Strategies relate to the different ways of positioning software, hardware, and services for the most effective product set.

Let us briefly explain this new technology trend:




Cloud Computing

Cloud computing has emerged as the latest buzzword in the IT industry. Not only is it revolutionizing the industry, it is also giving a new dimension to the IT services offered by vendors. Cloud computing can be considered a combination of grid computing, utility computing, virtualization, and clustering services. The cloud service environment has forced both service providers and users to realign their operational and business strategies with respect to IT decision making.
As per our findings, the cloud computing service market will witness phenomenal growth in the near future, driven by the scalability of IT resources, cost reduction, improved computation capacity, and other factors. The market is anticipated to more than double by 2014 compared to 2009 levels and is likely to be dominated by companies from the U.S. and Europe.
Cloud computing delivers software, platform, and IT infrastructure services via a shared network. In this model, businesses access resources such as hosted software and applications remotely, i.e., via the internet. The model not only obviates the need for capital investments in servers and storage, but also sharply reduces the operational expense of running data centers.

Cloud computing not only reduces business costs, but also makes applications accessible from any location and reacts swiftly to changes in business needs. While interoperability and data security issues may hinder market growth, the future of cloud computing seems promising, with IT giants such as IBM, Google, and Microsoft actively developing new solutions to address existing issues.

The global cloud computing market is expected to grow from $37.8 billion in 2010 to $121.1 billion in 2015, a CAGR of 26.2% over that period. SaaS is the largest segment of the cloud computing services market, accounting for 73% of the market’s revenues in 2010. The major SaaS providers include Adobe Web Connect, Google Mail, Cisco WebEx, and Yahoo Mail. Content, communications, and collaboration (CCC) accounts for about 30% of SaaS market revenues.
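As a sanity check, the quoted figures are internally consistent; using only the numbers above, the implied compound annual growth rate works out to the stated 26.2%:

```python
# Verify the forecast quoted above: $37.8B (2010) growing to $121.1B (2015).
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 37.8, 121.1, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 26.2%
```

The same arithmetic applied to the SaaS share (73% of $37.8 billion) puts 2010 SaaS revenue at roughly $27.6 billion.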

Cloud Computing's Bright Earnings Outlook


Dell Inc. said it will spend $1 billion this year to build data centers and move deeper into the business of "cloud computing" services, in its latest tack away from its core business of selling personal computers.

The Round Rock, Texas, company said Thursday it plans to open 10 new data centers world-wide over the next 24 months. Its customers will be able to run programs and store information in the Dell-operated centers, accessing them over the Internet and eliminating the cost of operating the equipment themselves.

The market for such cloud services is expected to grow to $102.1 billion ...

IBM Unveils Cloud-Computing Initiative Aimed At Large Companies

NEW YORK (Dow Jones)--International Business Machines Corp. (IBM) unveiled new software to help large companies move some of their operations onto the Internet, the latest move by the technology company to capitalize on the fast-growing trend of cloud computing.

IBM on Thursday announced a new service--dubbed "Smartcloud"--that will enable clients to use a Web-based interface to install applications and configure databases on a platform provided by the company. The initiative is part of IBM's push to expand into the cloud, one of the fastest growing areas in information technology.

IBM's move further highlights how businesses are showing a greater interest ...

IBM has produced a number of new services aimed at tackling cooling and power problems in data centers. The aim is to help control the problems of energy demand, especially in data centers using high density blade servers.

IBM said it has drawn on the expertise of more than 450 IBM site and facilities experts worldwide, who have designed and built more than 2.8 million square meters of data center raised floor and more than 400 data centers in its own facilities worldwide, to come up with the new services.

"IBM's Site and Facilities Services team designed and installed a state-of-the-art data center for our rapidly-growing business," said Rick Siner, director of technical services at Priority Health, a US health plan company. "IBM was very easy to work with. We had access to the exact level of resources that we needed, and now, we have the capacity and infrastructure in place to support future growth, on demand."

"CIOs are facing a power and cooling crisis in their data centers," said IBM VP Steven Sams. "Based on our extensive customer engagements, many data center facilities need to expand, renovate or relocate to meet capacity and operational needs. IBM has a global team of Site and Facilities Services experts in place to help customers assess their risk, determine a path to success, and implement an optimal solution."

The popularity of the on-demand deployment model has increased significantly. Private cloud systems provide security, fast response times, and service availability. SaaS is the most widely known of these computing models. Business applications and computing models have matured, and adoption has become a question for every IT department. Platform as a service (PaaS) and infrastructure as a service (IaaS) have joined SaaS as compelling aspects of cloud applications and infrastructure services.

The IBM mainframe has the reliability, scalability, security, large memory capacity, shared-workload capability, and remote-support capability needed in cloud computing. These are sometimes called the “-ility” features. The IBM mainframe leads enterprise cloud computing, and IBM’s mainframe strategy seeks to permit users to utilize data, applications, and services from any device and any location, based on open standards.


Cisco virtualization is delivered through Unified Computing. As a premier networking company, Cisco has designed a compelling architecture that bridges the silos in the data center. A unified architecture uses industry standard technologies.

Key to Cisco's approach is the ability to unite compute, network, storage access, and virtualization resources. A single, energy-efficient system can reduce IT infrastructure costs and complexity, extend capital assets, and improve business agility.

Hewlett-Packard's high-performance computing (HPC) business is powered by the adoption of Linux clusters. Cluster complexity is driven by rampant hardware parallelism: systems averaging thousands of processors, each of them a multi-core chip whose core count doubles every 18 to 24 months. To this trend add third-party software costs, weak interconnect performance, the difficulty of scaling many applications beyond a single node, storage and data management, and power, cooling, and facility space, and cluster complexity quickly begins to skyrocket. Hewlett-Packard (HP) holds the leading share of the HPC cluster market, a competitive advantage achieved principally by working to alleviate cluster complexity through a coordinated strategy of investment and innovation, HPC-centric product planning and design, external partnerships, and application expertise and focus.

System integration, sales, and support are part of the HPC cluster solution. HP has dominated the market by focusing on alleviating this complexity for data center administrators and end users. HP’s broad product portfolio for HPC also leverages the company’s innovations for the mainstream enterprise IT market and advances from HP Labs.
HP has taken a customer-centric, technology-agnostic approach that offers buyers a wide range of technology and product choices. The company has amassed the in-house domain expertise needed to act as a trusted advisor to HPC users. HP has an innovative approach to the HPC market and is positioned to sustain its strong presence. Its ability to exploit near-term growth trends depends on continuing to build out its cluster capability while leveraging virtualization.
The major management objectives for this critical area of applications implementation include improving service-oriented architecture (SOA) adoption, increasing Software Development Life Cycle (SDLC) efficiency, improving cost management, and reducing ineffective spending.
The fundamental aspect of cloud applications implementation is flexibility. The ability to respond to changing market conditions is central to the modern IT management task, and the desire for systems that support flexibility is anticipated to spur rapid growth of cloud computing. Cloud computing markets, at $20.3 billion in 2009, are anticipated to reach $100.4 billion by 2016.

“Some medium-sized companies had experienced a seven-fold increase in power requirements from 1998 to 2005. And over that time, the electrical costs to run those data centers had grown from $10,000 per month to $40,000 per month.”

- Wall Street Journal (November 2005)



Companies are finding that they need higher-performance servers to meet the increasing demands of new applications. At the same time, server consolidation and the move from stand-alone servers to rack-mounted and blade servers is concentrating systems into smaller spaces. These trends are driving up electricity usage in data centers, to the point of exasperation for many IT managers and corporate executives.
Specifically, the power required to run servers has increased on average from 1 kilowatt (kW) per rack in 2000 to 6.8 kW per rack in 2006, according to IDC. The amount of electricity needed to power cooling systems for these servers has shot up in a similar fashion.
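To translate those rack power figures into money, a rough back-of-the-envelope sketch helps; the $0.10/kWh rate and the PUE of 2.0 (cooling and facility overhead roughly doubling the IT load) are illustrative assumptions, not figures from this article:

```python
# Rough annual electricity cost for one rack at the power densities
# quoted above (1 kW/rack in 2000, 6.8 kW/rack in 2006).
RATE_PER_KWH = 0.10        # assumed utility rate, USD per kWh (illustrative)
PUE = 2.0                  # assumed facility-to-IT power ratio (illustrative)
HOURS_PER_YEAR = 24 * 365  # continuous, year-round operation

def annual_rack_cost(rack_kw):
    """Yearly electricity bill for one rack, cooling overhead included."""
    return rack_kw * PUE * HOURS_PER_YEAR * RATE_PER_KWH

for year, kw in [(2000, 1.0), (2006, 6.8)]:
    print(f"{year}: {kw} kW per rack -> ${annual_rack_cost(kw):,.0f} per year")
```

Even under these modest assumptions, the electricity bill per rack grows nearly seven-fold, which is why the cost pressure described below follows directly from the density trend.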

As a result, companies are finding that the amount they pay for electricity is skyrocketing. In a July 2006 CIO Insight magazine article, Adam Braunstein, a senior research analyst at the Robert Frances Group, noted that up to 40 percent of the operating costs of a building that houses a data center could be power- and cooling-related expenses.

Such costs have led Microsoft and Yahoo! to build new server farm data centers in Washington State’s Grant County, because of the abundance of relatively cheap hydro-electric power there. A 2006 National Public Radio story discussing the Microsoft and Yahoo! location choice noted that electricity in that region cost about half the national average. Most companies don’t have the luxury of moving their data centers to places where electricity is cheaper. Instead, they are trying to find ways to cut their electricity costs. Accomplishing this requires a strategic approach to dealing with power issues.

Consequences of Staying the Course

If companies continue to install ever more powerful servers into their equipment racks without making changes, problems can arise. First, costs are spiraling out of control.
A November 2005 Wall Street Journal article about the growing power requirements of today’s data centers noted that some medium-sized companies had experienced a seven-fold increase in power requirements from 1998 to 2005. And over that time, the electrical costs to run those data centers had grown from $10,000 per month to $40,000 per month. Some larger companies are paying hundreds of thousands of dollars per month for electricity to operate their data centers.
If nothing is changed, the power consumption per system is going to rise, thus increasing the costs for electricity. The consequences of this consumption growth were quantified last year by Google engineer Luiz André Barroso.

In an article in the Association for Computing Machinery’s Queue magazine, Barroso noted that if server power consumption grew from 2005 levels by 20 percent per year, the electricity to run a server over its four-year lifetime would exceed the cost of the server itself.

Second, heating and the associated cooling become more serious issues. In general, higher-performance processors consume more electricity, and most of that electricity is converted to heat.
As systems are packed closer together in racks or a blade chassis, equipment fans and forced cold air from floor vents have a harder time removing the heat.
As heat builds up, equipment temperature rises. This leads to increased equipment failure. In fact, the failure rate doubles for every rise of 18 degrees Fahrenheit, according to studies done by the high-performance computing researchers at Los Alamos National Laboratory.
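The Los Alamos rule of thumb above can be written as a simple exponential, which makes it easy to see how quickly reliability degrades as a hot spot develops:

```python
# Failure rate relative to baseline, using the Los Alamos rule of thumb
# quoted above: the rate doubles for every 18 degrees Fahrenheit of rise.
def relative_failure_rate(delta_t_f):
    """Multiplier on the baseline failure rate after a temperature
    rise of delta_t_f degrees Fahrenheit."""
    return 2 ** (delta_t_f / 18)

print(relative_failure_rate(18))  # -> 2.0 (twice the failures)
print(relative_failure_rate(36))  # -> 4.0 (four times the failures)
```

A rack running just 36 degrees hotter than its neighbors can therefore be expected to fail roughly four times as often.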

This increased failure rate due to heating has also been noted by the Uptime Institute, a group that focuses on improving uptime management in data center facilities.

New Thinking Required

To deal with growing power and cooling issues, companies must change their approach to data center layout and management. To start, these issues must be taken into account even before equipment is purchased. Similar to the way one might compare cars based on the EPA’s miles-per-gallon fuel economy ratings, companies must compare server power consumption before making a purchase.

However, there are two problems here. First, there is no equivalent to the EPA’s MPG rating for servers, because power consumption depends on performance: run a server for maximum performance and consumption is high; run it to be energy-efficient and performance is lower.
But efforts in this area are starting to pick up. The EPA has developed a Server Energy Measurement Protocol designed to provide a standard reference for measuring the power requirements of servers at different load levels. Companies can use these measurements to evaluate existing systems. Some companies are addressing this problem by working with industry groups such as the Standard Performance Evaluation Corp. to develop benchmarks for measuring energy efficiency (see sidebar: New Benchmarks on the Way).
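Such a benchmark essentially measures useful work per watt at several load levels. A minimal sketch of the idea, with entirely hypothetical throughput and power numbers, might look like this:

```python
# Sketch of an energy-efficiency comparison in the spirit of the
# benchmarks mentioned above: record throughput and power draw at
# several load levels, then compare operations per watt.
# All figures below are hypothetical, for illustration only.
measurements = [
    # (load level, operations/sec, watts drawn)
    ("100% load", 50_000, 320),
    ("50% load",  26_000, 210),
    ("idle",           0, 140),  # idle power is pure overhead
]

for label, ops, watts in measurements:
    efficiency = ops / watts
    print(f"{label:>10}: {efficiency:7.1f} ops/watt")
```

Because idle power is pure overhead, a server that is efficient at full load can still waste energy in a lightly loaded data center, which is why measuring at several load levels matters.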
The second problem is that companies have simply not bothered to look at the energy consumption of servers. Last year, 59 percent of the IT managers surveyed by AFCOM, a data center managers’ association, said their biggest worry is that computer equipment is purchased without concern for power and cooling. This behavior obviously must change, given the growing attention being focused on controlling energy costs.
Companies now must take energy efficiency into account just as they would server performance, memory, and storage capacity. Once equipment is purchased and installed, it needs to be monitored constantly.

This is also an area where current IT departments are falling behind. A 2006 CIO Insight magazine survey of 195 companies found that only 28 percent measure the energy consumption of their servers at least once per year. Again, this behavior must change.
Additionally, companies now need to look at issues such as airflow through equipment and racks, and spot heat removal. The key is not to treat these issues as individual factors, but to adopt a holistic approach to energy efficiency in the data center.

This holistic approach to energy efficiency and reduced power consumption must include standards-based technologies that create end-to-end solutions spanning from the desktop to the data center.


Sourcerers Energy Ltd. has developed an excellent solution for the issues described above. We have set up a powerful win-win solution for all involved. For more detailed information, please send us an initial email.