
Clusters, warehouse scale computers


6.7 Clusters, warehouse scale computers, and other message-passing multiprocessors
(Source: https://learn.zybooks.com/zybook/CUNYCSCI343SmithThompsonSpring2020/chapter/6/section/7, accessed 5/20/2020)

Message passing: Communicating between multiple processors by explicitly sending and receiving information.

Send message routine: A routine used by a processor in machines with private memories to pass a message to another processor.

Receive message routine: A routine used by a processor in machines with private memories to accept a message from another processor.

The alternative approach to sharing an address space is for each processor to have its own private physical address space. The figure below shows the classic organization of a multiprocessor with multiple private address spaces. Such a multiprocessor must communicate via explicit message passing, which is traditionally the name for this style of computer. Provided the system has routines to send and receive messages, coordination is built in with message passing, since one processor knows when a message is sent, and the receiving processor knows when a message arrives. If the sender needs confirmation that the message has arrived, the receiving processor can then send an acknowledgment message back to the sender.

Figure 6.7.1: Classic organization of a multiprocessor with multiple private address spaces, traditionally called a message-passing multiprocessor (COD Figure 6.13). Note that unlike the SMP in COD Figure 6.7 (Classic organization of a shared memory multiprocessor), the interconnection network is not between the caches and memory but is instead between processor-memory nodes.
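The send/receive/acknowledge pattern described above can be sketched on a single machine. This is only an illustration using Python's multiprocessing pipes, not how a real message-passing multiprocessor is programmed (those typically use a library such as MPI); the message format here is our own invention:

```python
# Two processes with private memories communicating only by
# explicit send and receive, plus an acknowledgment back to the
# sender. Illustrative sketch; the message fields are hypothetical.
from multiprocessing import Process, Pipe

def receiver(conn):
    msg = conn.recv()              # blocks until a message arrives
    print("received:", msg["data"])
    conn.send("ack")               # confirm arrival to the sender

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=receiver, args=(child_end,))
    p.start()
    parent_end.send({"src": 0, "dst": 1, "data": 42})  # explicit send
    assert parent_end.recv() == "ack"  # sender waits for confirmation
    p.join()
```

Note that the receive call doubles as synchronization: the receiver cannot proceed until the data is actually present, which is why no separate lock is needed.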
There have been several attempts to build large-scale computers based on high-performance message-passing networks, and they do offer better absolute communication performance than clusters built using local area networks. Indeed, many supercomputers today use custom networks. The problem is that they are much more expensive than local area networks like Ethernet. Few applications today outside of high performance computing can justify the higher communication performance, given the much higher costs.

Hardware/Software Interface
Computers that rely on message passing for communication rather than cache-coherent shared memory are much easier for hardware designers to build (see COD Section 5.8 (A common framework for memory hierarchy)). There is an advantage for programmers as well, in that communication is explicit, which means there are fewer performance surprises than with the implicit communication in cache-coherent shared memory computers. The downside for programmers is that it is harder to port a sequential program to a message-passing computer, since every communication must be identified in advance or the program doesn't work. Cache-coherent shared memory allows the hardware to figure out what data needs to be communicated, which makes porting easier. There are differences of opinion as to which is the shortest path to high performance, given the pros and cons of implicit communication, but there is no confusion in the marketplace today: multicore microprocessors use shared physical memory, and nodes of a cluster communicate with each other using message passing.
Clusters: Collections of computers connected via I/O over standard network switches to form a message-passing multiprocessor.

Some concurrent applications run well on parallel hardware, independent of whether it offers shared addresses or message passing. In particular, task-level parallelism and applications with little communication, like Web search, mail servers, and file servers, do not require shared addressing to run well. As a result, clusters have become the most widespread example today of the message-passing parallel computer. Given the separate memories, each node of a cluster runs a distinct copy of the operating system. In contrast, the cores inside a microprocessor are connected using a high-speed network inside the chip, and a multichip shared-memory system uses the memory interconnect for communication. The memory interconnect has higher bandwidth and lower latency, allowing much better communication performance for shared memory multiprocessors.

The weakness of separate memories for user memory from a parallel programming perspective turns into a strength in system dependability (see COD Section 5.5 (Dependable memory hierarchy)). Since a cluster consists of independent computers connected through a local area network, it is much easier to replace a computer without bringing down the system in a cluster than in a shared memory multiprocessor. Fundamentally, the shared address space means that it is difficult to isolate a processor and replace it without heroic work by the operating system and in the physical design of the server. It is also easy for clusters to scale down gracefully when a server fails, thereby improving dependability.
Since the cluster software is a layer that runs on top of the local operating systems running on each computer, it is much easier to disconnect and replace a broken computer.

Given that clusters are constructed from whole computers and independent, scalable networks, this isolation also makes it easier to expand the system without bringing down the application that runs on top of the cluster.

Their lower cost, higher availability, and rapid, incremental expandability make clusters attractive to Internet service providers, despite their poorer communication performance when compared to large-scale shared memory multiprocessors. The search engines that hundreds of millions of us use every day depend upon this technology. Amazon, Facebook, Google, Microsoft, and others all have multiple datacenters, each with clusters of tens of thousands of servers. Clearly, the use of multiple processors in Internet service companies has been hugely successful.

"Anyone can build a fast CPU. The trick is to build a fast system." (Seymour Cray, considered the father of the supercomputer.)

Warehouse-scale computers

Internet services, such as those described above, necessitated the construction of new buildings to house, power, and cool 100,000 servers. Although they may be classified as just large clusters, their architecture and operation are more sophisticated. They act as one giant computer and cost on the order of $150M for the building, the electrical and cooling infrastructure, the servers, and the networking equipment that connects and houses 50,000 to 100,000 servers. We consider them a new class of computer, called Warehouse-Scale Computers (WSC).

Hardware/Software Interface
The most popular framework for batch processing in a WSC is MapReduce [Dean, 2008] and its open-source twin Hadoop.
Inspired by the Lisp functions of the same name, Map first applies a programmer-supplied function to each logical input record. Map runs on thousands of servers to produce an intermediate result of key-value pairs. Reduce collects the output of those distributed tasks and collapses them using another programmer-defined function. With appropriate software support, both are highly parallel yet easy to understand and to use. Within 30 minutes, a novice programmer can run a MapReduce task on thousands of servers.

For example, one MapReduce program calculates the number of occurrences of every English word in a large collection of documents. Below is a simplified version of that program, which shows just the inner loop and assumes just one occurrence of all English words found in a document:

    map(String key, String value):
        // key: document name
        // value: document contents
        for each word w in value:
            EmitIntermediate(w, "1"); // Produce list of all words

    reduce(String key, Iterator values):
        // key: a word
        // values: a list of counts
        int result = 0;
        for each v in values:
            result += ParseInt(v); // get integer from key-value pair
        Emit(AsString(result));

The function EmitIntermediate used in the Map function emits each word in the document and the value one. Then the Reduce function sums all the values per word for each document using ParseInt() to get the number of occurrences per word in all documents. The MapReduce runtime environment schedules map tasks and reduce tasks to the servers of a WSC.

At this extreme scale, which requires innovation in power distribution, cooling, monitoring, and operations, the WSC is a modern descendant of the 1970s supercomputers, making Seymour Cray the godfather of today's WSC architects.
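The word-count flow above can be sketched in runnable form. This is a single-machine illustration, not the actual MapReduce runtime; the shuffle function here stands in for the distributed grouping-by-key that the runtime performs between the Map and Reduce phases, and all names are ours:

```python
# Single-process sketch of MapReduce word count:
# map emits (word, 1) pairs, shuffle groups the pairs by key,
# and reduce sums each group's counts.
from collections import defaultdict

def map_fn(doc_name, contents):
    # Emit an intermediate (word, 1) pair for every word.
    return [(w, 1) for w in contents.split()]

def shuffle(pairs):
    # Group intermediate pairs so each word maps to its list of counts.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_fn(word, counts):
    # Collapse one word's list of counts into a total.
    return word, sum(counts)

docs = {"d1": "to be or not to be", "d2": "to see or not to see"}
pairs = [p for name, text in docs.items() for p in map_fn(name, text)]
counts = dict(reduce_fn(w, vs) for w, vs in shuffle(pairs).items())
print(counts["to"])   # 4: "to" appears twice in each document
```

In a WSC, the map calls and the reduce calls would each be scattered across thousands of servers; only the shuffle requires moving data between them.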
His extreme computers handled computations that could be done nowhere else, but were so expensive that only a few companies could afford them. This time the target is providing information technology for the world instead of high performance computing for scientists and engineers. Hence, WSCs surely play a more important societal role today than Cray's supercomputers did in the past.

While they share some common goals with servers, WSCs have three major distinctions:

1. Ample, easy parallelism: A concern for a server architect is whether the applications in the targeted marketplace have enough parallelism to justify the amount of parallel hardware and whether the cost is too high for sufficient communication hardware to exploit this parallelism. A WSC architect has no such concern. First, batch applications like MapReduce benefit from the large number of independent data sets that need independent processing, such as billions of Web pages from a Web crawl. Second, interactive Internet service applications, also known as Software as a Service (SaaS), can benefit from millions of independent users of interactive Internet services. Reads and writes are rarely dependent in SaaS, so SaaS rarely needs to synchronize. For example, search uses a read-only index, and email is normally reading and writing independent information. We call this type of easy parallelism Request-Level Parallelism, as many independent efforts can proceed in parallel naturally with little need for communication or synchronization.

2. Operational Costs Count: Traditionally, server architects design their systems for peak performance within a cost budget and worry about energy only to make sure they don't exceed the cooling capacity of their enclosure. They usually ignored operational costs of a server, assuming that they pale in comparison to purchase costs.
WSCs have longer lifetimes: the building and the electrical and cooling infrastructure are often amortized over 10 or more years, so the operational costs add up. Energy, power distribution, and cooling represent more than 30% of the costs of a WSC over 10 years.

3. Scale and the Opportunities/Problems Associated with Scale: To construct a single WSC, you must purchase 100,000 servers along with the supporting infrastructure, which means volume discounts. Hence, WSCs are so massive internally that you get economies of scale even if there are not many WSCs. These economies of scale led to cloud computing, as the lower per-unit costs of a WSC meant that cloud companies could rent servers at a profitable rate and still be below what it costs outsiders to do it themselves. The flip side of the economic opportunity of scale is the need to cope with the failure frequency of scale. Even if a server had a Mean Time To Failure of an amazing 25 years (200,000 hours), the WSC architect would need to design for 5 server failures every day. COD Section 5.15 (Fallacies and pitfalls) mentioned that the annualized disk failure rate (AFR) measured at Google was 2% to 4%. If there were 4 disks per server and their annual failure rate was 2%, the WSC architect should expect to see one disk fail every hour. Thus, fault tolerance is even more important for the WSC architect than the server architect.

Software as a service (SaaS): Rather than selling software that is installed and run on customers' own computers, software is run at a remote site and made available over the Internet, typically via a Web interface, to customers. SaaS customers are charged based on use rather than on ownership.

The economies of scale uncovered by WSC have realized the long-dreamed-of goal of computing as a utility.
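The failure arithmetic in point 3 can be sanity-checked. The text does not state which server count underlies the "5 failures a day" figure, so this sketch assumes the lower end (50,000) of the 50,000-to-100,000 range given earlier; the disk figure uses the 100,000-server, 4-disk, 2%-AFR numbers as quoted:

```python
# Rough check of the WSC failure-rate arithmetic from the text.
HOURS_PER_YEAR = 24 * 365

# Disks: 100,000 servers x 4 disks each, 2% annual failure rate.
disks = 100_000 * 4
disk_failures_per_hour = disks * 0.02 / HOURS_PER_YEAR
print(disk_failures_per_hour)        # ~0.91, i.e. about one disk per hour

# Servers: MTTF of 200,000 hours (~25 years), assuming 50,000 servers.
servers, mttf_hours = 50_000, 200_000
server_failures_per_day = servers * 24 / mttf_hours
print(server_failures_per_day)       # 6.0, on the order of the text's ~5/day
```

Either way, the conclusion stands: at this scale, component failures are an hourly and daily routine, so fault tolerance must be designed in.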
Cloud computing means anyone anywhere with good ideas, a business model, and a credit card can tap thousands of servers to deliver their vision almost instantly around the world. Of course, there are important obstacles that could limit the growth of cloud computing, such as security, privacy, standards, and the rate of growth of Internet bandwidth, but we foresee them being addressed so that WSCs and cloud computing can flourish.

To put the growth rate of cloud computing into perspective, in 2012 Amazon Web Services announced that it adds enough new server capacity every day to support all of Amazon's global infrastructure as of 2003, when Amazon was a $5.2Bn annual revenue enterprise with 6,000 employees.

Now that we understand the importance of message-passing multiprocessors, especially for cloud computing, we next cover ways to connect the nodes of a WSC together. Thanks to Moore's Law and the increasing number of cores per chip, we now need networks inside a chip as well, so these topologies are important in the small as well as in the large.

Elaboration
The MapReduce framework shuffles and sorts the key-value pairs at the end of the Map phase to produce groups that all share the same key. These groups are then passed to the Reduce phase.

Elaboration
Another form of large-scale computing is grid computing, where the computers are spread across large areas, and the programs that run across them must communicate via long-haul networks. The most popular and unique form of grid computing was pioneered by the SETI@home project. As millions of PCs are idle at any one time doing nothing useful, they could be harvested and put to good use if someone developed software that could run on those computers and then gave each PC an independent piece of the problem to work on.
The first example was the Search for ExtraTerrestrial Intelligence (SETI), which was launched at UC Berkeley in 1999. Over 5 million computer users in more than 200 countries have signed up for SETI@home, with more than 50% outside the US. By the end of 2011, the average performance of the SETI@home grid was 3.5 PetaFLOPS.

Check yourself
1. True or false: Like SMPs, message-passing computers rely on locks for synchronization.
2. True or false: Clusters have separate memories and thus need many copies of the operating system.

Answer: 1. False. Sending and receiving a message is an implicit synchronization, as well as a way to share data. 2. True.