The Artima Developer Community

The Desktop as a Grid Service
Test-Driving Sun's Prototype Display Grid
by Frank Sommers
February 8, 2006

<<  Page 2 of 5  >>


Desktop Complexity

Server specialization is the exact opposite of today's desktop computing environments. Instead of specializing in a handful of services, desktop operating systems today compete in offering an increasing number of increasingly complex features. Such features are created in response to a sophisticated and demanding user base: Most desktop users are no longer satisfied with being able to compose simple documents in a text editor, but want their computers to access services on the Web, manage digital photos and music, and, more recently, serve as home entertainment hubs. Mobile computers impose additional demands on a desktop environment, such as the ability to offer access to wireless hotspots, and to provide security and recovery features.

Those capabilities are provided by cooperating tasks, such as user authentication, file system access, window management, and so forth. Each task contributing to a desktop session, in turn, is often represented by an independent operating system process, such as the window manager process, the file system mount daemon, the user authenticator, and even the processes that forward mouse and keyboard input to the operating system.

In a traditional desktop environment, those processes must be installed, configured, and managed on a local computer. As a result, instead of specialization, today's PCs are the product of integration: of bundling a myriad of services and associated software on each desktop. Such integration efforts have given users richer features, but at the cost of leaving them with an increasingly complex desktop environment to manage. A recent ZDNet UK article quotes research showing that

a PC can cost up to 25 times its purchasing price over a five-year period, particularly when calls to help desks escalate due to bad desktop management. An average call querying the desktop lasts 17 minutes, of which nine are spent simply identifying hardware and software[12].

The Desktop as a Grid Service

Although the desktop paradigm has come to represent access to a single computer, the processes providing a desktop session's capabilities can be distributed to servers on a grid. For instance, one server may perform user authentication, another may offer the user access to a filesystem, and yet another can provide the window manager. In that manner, a desktop user session lends itself to grid-based distribution.

Such distribution pushes the complexity of running and managing the services that make up a desktop session to the network, relieving users of desktop management chores. Hence, a grid-based desktop transforms the problem of software installation and maintenance into one of provisioning networked services.

A key problem of provisioning the services of a grid-enabled desktop is deciding how much complexity to leave on a user's computer and, concomitantly, what responsibilities to move to specialized servers. The assumption is that users, in general, are bad at managing desktop complexity, whereas dedicated servers provisioned via grid middleware can excel at that task.

Distributed desktop platforms in use today can be categorized according to their distribution of computational responsibilities between client and server. The most popular distributed desktop environments today include X Windows[13], Citrix's MetaFrame product[14] and GoToMyPC[15], Microsoft's RDP-based remote desktop[16], the Virtual Network Computer (VNC)[17], the research prototype THINC[18], and Sun's Sun Ray product line[19].

Some, like X Windows, require substantial client-side resources and maintain considerable computational state at the client. Others, such as VNC, are implemented in software and run as applications on top of a full-fledged OS. Sun's Sun Ray, the focus of this article, represents the other extreme: no client-side state, and very minimal client-side computing.

To appreciate the available distribution choices, it is helpful to illustrate the key components of a non-distributed desktop residing on a user's PC:

Figure 2. Components of a desktop display subsystem

To keep the above illustration simple, the diagram limits user input and output to a display device, such as a CRT or LCD monitor, and does not consider keyboard and mouse input, or other peripherals. The user's display typically connects to the PC hardware via a VGA or DVI connector. The display hardware inside a PC comprises a set of graphics chips, including a memory area reserved for buffering the complete bitmapped image raster that's sent to the monitor. Such a frame buffer, or video memory, may be part of a dedicated video adapter, but in some cases the buffer occupies a portion of the PC's main memory. The raster image stores the display information with a specified resolution and color depth. Images defined in the RGB color space, for instance, typically require at least 3 bytes of memory per pixel, one byte each for red, green, and blue. Thus, a 24-bit color frame at a resolution of 1280x1024 pixels requires about 3.75 MB of buffer memory.
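The frame-buffer sizing above is simple arithmetic. The following sketch, in plain Python, computes the raw buffer size for a few display modes; the resolutions other than 1280x1024 are illustrative choices, not figures from the article:

```python
# Frame buffer size = width x height x bytes per pixel.

def framebuffer_bytes(width, height, bits_per_pixel):
    """Return the raw frame buffer size in bytes for one full frame."""
    return width * height * (bits_per_pixel // 8)

# 1280x1024 at 24-bit color is the example used in the text;
# the other two modes are added here for comparison.
for w, h in [(1024, 768), (1280, 1024), (1600, 1200)]:
    size = framebuffer_bytes(w, h, 24)
    print(f"{w}x{h} @ 24-bit: {size / 2**20:.2f} MB")
```

For the 1280x1024 case this yields 3,932,160 bytes, or 3.75 MB, which is why even a modest increase in resolution or color depth noticeably raises a device's memory requirements.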

One way to divide the key building blocks of a desktop session across the network is along client-server lines. That is the route the X Window System chose. In dividing the responsibilities of client and server, X Windows starts with the perspective of the local display. In X terminology, the local display is the server, providing services to X clients connecting to that server across the network. X's clients are the remote applications a user wishes to access. The X server accepts graphical output requests from a remote client, and processes those requests by, for instance, displaying the remote program's window. In addition, the server on the local host forwards keyboard and mouse events to the client.

The X Windows specification defines a communication protocol between client and server. A consequence of this client-server architecture is that both the server (the user's local host) and the client must run the appropriate portions of the X Windows code so that they can communicate via X protocol messages.

Figure 3: Client-server distribution in the X Windows system

An X client or, more commonly, an X server can be implemented in either software or hardware. The advantage of a hardware-based implementation is that no software needs to be installed on a user's workstation. Instead, users can take advantage of a thin client that requires little or no maintenance. An X Windows-based thin client runs the X server code, and the user can invoke applications executing on remote servers that connect back to the thin client's display.
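To make the inverted client-server roles concrete, directing a remote application's output to a thin client's X server takes only an environment variable. A minimal sketch follows; the hostname is made up for illustration, and this assumes the thin client's X server is configured to accept connections from the application host:

```shell
# Run on the remote application server, not on the thin client.
# "thinclient.example.com" is a hypothetical hostname; ":0" names
# the first display managed by that machine's X server.
export DISPLAY=thinclient.example.com:0

# Any X application launched now sends its drawing requests over the
# network to the thin client, where its window actually appears.
xterm &
```

Note the role reversal: the machine doing the computation is the X *client*, while the device in front of the user runs the X *server*.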

The main disadvantage of an X Windows thin client has to do with the size of an X server infrastructure. A thin client running an X server needs not only code to interpret X protocol commands, but also code and additional artifacts to act on those commands and create a functional display. That includes the widget toolkit used to draw windows, the fonts to display text, as well as a window manager, code that can add up to significant size.

While a thin client can download all that code from a centralized server at boot time, processing and managing that code still requires significant CPU power and memory. That increases the cost of a thin-client device, and also puts the burden of managing not only X clients, but also the downloadable boot X server on an administrator's shoulders. In addition, a thin client provider would have to perform network bandwidth, latency, and error recovery optimizations in the context of the X protocol, effectively limiting client-server performance.
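The bandwidth pressure mentioned above is easy to quantify for the naive alternative of shipping raw frames instead of drawing commands. The sketch below, in plain Python with illustrative update rates of my choosing, shows why protocol-level optimization matters for any remote display design:

```python
# Uncompressed network bandwidth needed to stream full frames to a
# remote display. The 5-updates-per-second rate is an assumption for
# illustration, not a figure from the article.

def raw_stream_mbps(width, height, bytes_per_pixel, updates_per_second):
    """Megabits per second required to ship every full frame uncompressed."""
    bytes_per_second = width * height * bytes_per_pixel * updates_per_second
    return bytes_per_second * 8 / 1_000_000

# A 1280x1024, 24-bit desktop repainted fully five times per second:
print(f"{raw_stream_mbps(1280, 1024, 3, 5):.0f} Mbps")
```

Even at five full-screen updates per second, raw streaming exceeds 150 Mbps, beyond the capacity of the 100 Mbps LANs common at the time; hence every remote display protocol, X included, must send compact drawing commands, compress pixel data, or transmit only changed screen regions.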

Figure 4: An X-based thin client
