Cloud infrastructure efficiency boosted by new research

A new development by computer scientists in California could dramatically boost the efficiency of cloud computing infrastructure.

The work, which was carried out by scientists based at the University of California, San Diego, could see the infrastructure powering the cloud run up to 20 per cent more efficiently. The new model – which has already been put into use at search giant Google – was introduced to global cloud experts at the recent IEEE International Symposium on High Performance Computer Architecture, held last month in China.

To develop the new model, the computer scientists analysed a range of Google web services and gathered live data from Google's servers. The data was then used in a series of experiments carried out on an isolated server.

Jason Mars and Lingjia Tang, faculty members at the Department of Computer Science and Engineering at UC San Diego's Jacobs School of Engineering, explained to R&D the importance of the two-tier development process. “These problems can seem easy to solve when looking at just one server. But solutions do not scale up when you're looking at hundreds of thousands of servers,” Mr Mars said.

Ms Tang explained: “If we can bridge the current gap between hardware designs and the software stack and access this huge potential, it could improve the efficiency of web service companies and significantly reduce the energy footprint of these massive-scale data centres.”

Applications were found to run far more efficiently when using data accessed from the local server rather than from remote locations. So, whilst data location was deemed key to the efficiency of the cloud, competition for shared resources within a server, especially caches, was also found to play a part. “Where your data is versus where your apps are matters a lot, but it's not the only factor,” Mr Mars confirmed.
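To make the cache-contention point concrete, here is a minimal sketch in C, written for this article rather than taken from the research: one thread repeatedly streams through a large buffer, evicting the processor's shared last-level cache, while the main thread times passes over a small working set that would otherwise stay cached. On a typical multicore machine the second measurement comes out noticeably slower, even though the two threads share no data. The buffer sizes and pass count are arbitrary assumptions chosen for illustration.

```c
/* Minimal sketch (our own illustration, not from the paper): how a
 * cache-hungry neighbour slows a co-located workload on the same machine.
 * Build: gcc cache_contention.c -O2 -pthread -o cache_contention
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define SMALL (512UL * 1024)        /* small set that fits in cache */
#define LARGE (64UL * 1024 * 1024)  /* large enough to thrash the shared LLC */

static atomic_int neighbour_running = 0;

/* Neighbour thread: streams through a large buffer, evicting shared cache. */
static void *neighbour(void *arg) {
    (void)arg;
    char *big = malloc(LARGE);
    while (neighbour_running)
        memset(big, 1, LARGE);
    free(big);
    return NULL;
}

/* Time repeated passes over a small, cache-resident working set. */
static double work(volatile char *buf) {
    struct timespec t0, t1;
    unsigned sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int pass = 0; pass < 4000; pass++)
        for (size_t i = 0; i < SMALL; i += 64)  /* one read per cache line */
            sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sum;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    char *small = malloc(SMALL);
    memset(small, 1, SMALL);

    printf("running alone:        %.3f s\n", work(small));

    neighbour_running = 1;
    pthread_t t;
    pthread_create(&t, NULL, neighbour, NULL);
    printf("with noisy neighbour: %.3f s\n", work(small));

    neighbour_running = 0;
    pthread_join(t, NULL);
    free(small);
    return 0;
}
```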

If an application running on one core needs data held in memory attached to a different processor, that application is going to be slowed. The researchers' development – which has been called the NUMA score – is based on this “distance between execution and data.” The score measures how well non-uniform memory access (NUMA) is being managed in warehouse-scale computers, and optimising it can result in major efficiency improvements.
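The “distance between execution and data” can be demonstrated directly on Linux with libnuma. The following sketch, our own illustration rather than the researchers' scoring code, pins execution to one NUMA node and then times an identical traversal of memory allocated on the local node versus a remote one; the remote pass is typically measurably slower, which is exactly the penalty the NUMA score is designed to expose.

```c
/* Minimal sketch, assuming Linux with libnuma and at least two NUMA nodes.
 * This is our own illustration of "distance between execution and data",
 * not the researchers' NUMA-score implementation.
 * Build: gcc numa_distance.c -O2 -lnuma -o numa_distance
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BYTES (256UL * 1024 * 1024)  /* 256 MB working set */

/* Read one byte per cache line across the buffer; return elapsed seconds. */
static double traverse(volatile char *buf) {
    struct timespec t0, t1;
    unsigned sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < BYTES; i += 64)
        sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sum;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    if (numa_available() < 0 || numa_max_node() < 1) {
        fprintf(stderr, "needs a machine with at least two NUMA nodes\n");
        return 1;
    }
    numa_run_on_node(0);  /* pin execution to node 0 */

    char *local  = numa_alloc_onnode(BYTES, 0);               /* near memory */
    char *remote = numa_alloc_onnode(BYTES, numa_max_node()); /* far memory  */
    memset(local, 1, BYTES);   /* fault pages in before timing */
    memset(remote, 1, BYTES);

    printf("local  node traversal: %.3f s\n", traverse(local));
    printf("remote node traversal: %.3f s\n", traverse(remote));

    numa_free(local, BYTES);
    numa_free(remote, BYTES);
    return 0;
}
```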

Businesses looking to move services into the cloud should evaluate the performance of their existing network and consider setting up a leased line or MPLS network.
