Blog campur-campur

Virtualization

Fahmi Rizwansyah says:

Virtualization Management
Virtualization technologies can deliver sea-changing benefits to your organization. But, as Thomas Bittman, Gartner analyst, adroitly noted, "Virtualization without good management is more dangerous than not using virtualization in the first place."
As an organization's computing environment gets more virtualized, it also gets more abstract. Increasing abstraction can increase complexity, making it harder for IT staff to control their world and undermining the benefits of virtualization.


Integrating physical and virtual management enables you to realize the full promise of virtualization while minimizing its risks. An integrated approach is critical because all IT infrastructures—even those with a significant amount of virtualization—include both virtual and physical components. Even if you have a management system that effectively handles virtualized systems, if it doesn't manage physical systems you will still have to manage many separate "islands"—and you will consume much more time and resources than you would with a management system that can handle all your assets. By using comprehensive virtualization management technology, you keep complexity at a minimum and streamline operations. A common virtualization management environment reduces training, ensures uniform policy application and simplifies maintenance.

Effective physical and virtual management can optimize the benefits of using virtualization technologies. This includes monitoring and managing hardware and software in a distributed environment. By allowing operations staff to monitor both the software running on physical machines and the physical machines themselves, it lets them know what's happening in their environment. It also lets them respond appropriately, running tasks and taking other actions to fix problems that occur.

Another unavoidable concern for people who manage virtualized, distributed environments is installing software and managing how that software is configured. While it's possible to perform these tasks by hand, end-to-end virtualization management technology can automate and accelerate this process.

Tools that work in both the physical and virtual worlds are highly effective and attractive. Yet think about an environment that has dozens or even hundreds of VMs installed. How are these machines built, changed and retired? And how are other VM-specific management functions performed? Addressing these questions requires a comprehensive toolset that includes managing virtualized hardware. Among other benefits, it can help operations staff choose workloads for virtualization, create the VMs that will run those workloads, and transfer the applications to their new homes.

Server Virtualization
For most IT people, the word "virtualization" conjures up thoughts of running multiple operating systems on a single physical machine. This is hardware virtualization, and while it's not the only important kind of virtualization, it is unquestionably the most visible today.
The core idea of hardware virtualization is simple: Use software to create a virtual machine that emulates a physical computer. This creates a separate OS environment that is logically isolated from the host server. By providing multiple VMs at once, this approach allows running several operating systems simultaneously on a single physical machine.

Rather than paying for many under-utilized server machines, each dedicated to a specific workload, server virtualization allows consolidating those workloads onto a smaller number of more fully utilized machines. That means fewer people to manage those computers, less space to house them, and fewer kilowatt-hours of power to run them, all of which saves money.
Server virtualization also makes restoring failed systems easier. VMs are stored as files, and so restoring a failed system can be as simple as copying its file onto a new machine. Since VMs can have different hardware configurations from the physical machine on which they're running, this approach also allows restoring a failed system onto any available machine. There's no requirement to use a physically identical system.

Application Virtualization
In a physical environment, every application depends on its OS for a range of services, including memory allocation, device drivers, and much more. Incompatibilities between an application and its operating system can be addressed by either server virtualization or presentation virtualization. But for incompatibilities between two applications installed on the same instance of an OS, you need application virtualization.
Applications installed on the same device commonly share configuration elements, yet this sharing can be problematic. For example, one application might require a specific version of a dynamic link library (DLL) to function, while another application on that system might require a different version of the same DLL. Installing both applications creates a situation where one of them overwrites the version required by the other, causing that application to malfunction or crash. To avoid this, organizations often perform extensive compatibility testing before installing a new application, an approach that's workable but quite time-consuming and expensive.

Application virtualization solves this problem by creating application-specific copies of all shared resources. The configurations an application might share with other applications on its system—registry entries, specific DLLs, and more—are instead packaged with it and execute in the machine's cache, creating a virtual application. When a virtual application is deployed, it uses its own copy of these shared resources.
Application virtualization makes deployment significantly easier. Since applications no longer compete for DLL versions or other shared aspects of their environment, there's little need to test new applications for conflicts with existing applications before they're rolled out. And these virtual applications can run alongside ordinary, installed applications—so not everything needs to be virtualized, although doing so avoids many problems and decreases the time end-users spend with the helpdesk trying to resolve them. An effective application virtualization solution also enables you to manage both virtual applications and installed applications from a common interface.

Storage Virtualization
Generally speaking, storage virtualization refers to providing a logical, abstracted view of physical storage devices. It provides a way for many users or applications to access storage without being concerned with where or how that storage is physically located or managed. It enables the physical storage in an environment to be shared across multiple application servers, and physical devices behind the virtualization layer to be viewed and managed as if they were one large storage pool with no physical boundaries.

Virtualizing storage networks enables two key additional capabilities:
* The ability to mask or hide volumes from servers that are not authorized to access those volumes, providing an additional level of security.
* The ability to change and grow volumes on the fly to meet the needs of individual servers.

Essentially, anything other than a locally attached disk drive might be viewed in this light. Typically, storage virtualization applies to larger SAN (storage area network) arrays, but it is just as accurately applied to the logical partitioning of a local desktop hard drive, redundant array of independent disks (RAID), volume management, virtual memory, file systems and virtual tape. A very simple example is folder redirection in Windows, which lets the information in a folder be stored on any network-accessible drive. Much more powerful (and more complex) approaches include SANs. Large enterprises have long benefited from SAN technologies, in which storage is uncoupled from servers and attached directly to the network. By sharing storage on the network, SANs enable highly scalable and flexible storage resource allocation, high efficiency backup solutions, and better storage utilization.

Sources: http://www.microsoft.com/virtualization/products.mspx
Cheers, frizzy2008.

Load Balancing

Fahmi Rizwansyah says:

From Wikipedia, the free encyclopedia

In computer networking, load balancing is a technique to spread work between two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, and minimize response time. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch).
It is commonly used to mediate internal communications in computer clusters, especially high-availability clusters.


For Internet services
One of the most common applications of load balancing is to provide a single Internet service from multiple servers, sometimes known as a server farm. Commonly load-balanced systems include popular web sites, large Internet Relay Chat networks, high-bandwidth File Transfer Protocol sites, NNTP servers and DNS servers.
For Internet services, the load balancer is usually a software program which is listening on the port where external clients connect to access services. The load balancer forwards requests to one of the "backend" servers, which usually replies to the load balancer. This allows the load balancer to reply to the client without the client ever knowing about the internal separation of functions. It also prevents clients from contacting backend servers directly, which may have security benefits by hiding the structure of the internal network and preventing attacks on the kernel's network stack or unrelated services running on other ports.
Some load balancers provide a mechanism for doing something special in the event that all backend servers are unavailable. This might include forwarding to a backup load balancer, or displaying a message regarding the outage.
An alternate method of load balancing, which does not necessarily require a dedicated software or hardware node, is called round robin DNS. In this technique, multiple IP addresses are associated with a single domain name (e.g., www.example.org); clients themselves are expected to choose which server to connect to. Unlike the use of a dedicated load balancer, this technique is not "transparent" to clients, because it exposes the existence of multiple backend servers. The technique has other advantages and disadvantages, depending on the degree of control over the DNS server and the granularity of load balancing that is desired.
A variety of scheduling algorithms are used by load balancers to determine which backend server to send a request to. Simple algorithms include random choice or round robin. More sophisticated load balancers may take into account additional factors, such as a server's reported load, recent response times, up/down status (determined by a monitoring poll of some kind), number of active connections, geographic location, capabilities, or how much traffic it has recently been assigned. High-performance systems may use multiple layers of load balancing.
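As a rough illustration, the simpler scheduling policies mentioned above can be sketched in a few lines of Python. The backend addresses, weights, and connection counts below are made-up placeholders, not part of any real load balancer:

```python
import itertools
import random

# Hypothetical backend pool; addresses are placeholders.
BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round robin: cycle through the backends in a fixed order.
_rr = itertools.cycle(BACKENDS)

def round_robin():
    return next(_rr)

# Weighted random: servers with larger weights (e.g., faster hardware)
# receive a proportionally larger share of the requests.
WEIGHTS = {"10.0.0.1": 3, "10.0.0.2": 1, "10.0.0.3": 1}

def weighted_random():
    return random.choices(list(WEIGHTS), weights=list(WEIGHTS.values()))[0]

# Least connections: prefer the backend with the fewest active connections.
active = {b: 0 for b in BACKENDS}

def least_connections():
    return min(active, key=active.get)
```

A production balancer would combine one of these policies with the health checks and persistence mechanisms discussed elsewhere in this post.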
In addition to using dedicated hardware load balancers, software-only solutions are available, including open source options. Examples of the latter include the Apache web server's mod_proxy_balancer extension and the Pound reverse proxy and load balancer.


Persistence
An important issue when operating a load-balanced service is how to handle information that must be kept across the multiple requests in a user's session. If this information is stored locally on one back end server, then subsequent requests going to different back end servers would not be able to find it. This might be cached information that can be recomputed, in which case load-balancing a request to a different back end server just introduces a performance issue.
One solution to the session data issue is to send all requests in a user session consistently to the same back end server. This is known as "persistence" or "stickiness". A large downside to this technique is its lack of automatic failover: if a backend server goes down, its per-session information becomes inaccessible, and sessions depending on it are lost.
Assignment to a particular server might be based on a username, client IP address, or random assignment. Due to DHCP, Network Address Translation, and web proxies, the client's IP address may change across requests, and so this method can be somewhat unreliable. Random assignments must be remembered by the load balancer, which creates a storage burden. If the load balancer is replaced or fails, this information can be lost, and assignments may need to be deleted after a timeout period or during periods of high load, to avoid exceeding the space available for the assignment table. The random assignment method also requires that clients maintain some state, which can be a problem, for example when a web browser has disabled storage of cookies. Sophisticated load balancers use multiple persistence techniques to avoid some of the shortcomings of any one method.
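One common way to implement the IP-address-based assignment described above is to hash the client address onto the backend pool. The sketch below is a minimal illustration (the backend names are hypothetical); it inherits the weaknesses just described, and note that adding or removing a backend reshuffles most assignments unless a consistent-hashing scheme is used instead:

```python
import hashlib

# Hypothetical backend pool.
BACKENDS = ["app1", "app2", "app3"]

def sticky_backend(client_ip: str) -> str:
    # Hash the client IP so the same client always maps to the same
    # backend, as long as the pool membership does not change.
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]
```

Because the mapping is derived from the address itself, the load balancer needs no per-client assignment table, avoiding the storage burden mentioned above at the cost of being fooled whenever the client's IP changes mid-session.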
Another solution is to keep the per-session data in a database. Generally this is bad for performance since it increases the load on the database: the database is best used to store information less transient than per-session data. (Interestingly, to prevent a database from becoming a single point of failure, and to improve scalability, the database is often replicated across multiple machines, and load balancing is used to spread the query load across those replicas.)
Fortunately there are more efficient approaches. In the very common case where the client is a web browser, per-session data can be stored in the browser itself. One technique is to use a browser cookie, suitably time-stamped and encrypted. Another is URL rewriting. Storing session data on the client is generally the preferred solution: then the load balancer is free to pick any backend server to handle a request.
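A time-stamped browser cookie of the kind described above can be sketched as follows. This is an illustration only: the secret, field layout, and expiry window are invented, and for brevity it signs the session data rather than encrypting it, so the client can read the data but cannot alter it undetected:

```python
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # assumption: shared by all backends

def make_cookie(session_data: str) -> str:
    # Prefix the data with a timestamp, then append an HMAC signature.
    payload = f"{int(time.time())}|{session_data}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def read_cookie(cookie: str, max_age: int = 3600):
    encoded, sig = cookie.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered with
    ts, _, data = payload.decode().partition("|")
    if time.time() - int(ts) > max_age:
        return None  # expired
    return data
```

Any backend holding the shared secret can validate the cookie, which is exactly why the load balancer becomes free to pick any server for each request.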

Load balancer features
Hardware and software load balancers can come with a variety of special features.

* Asymmetric load: A ratio can be manually assigned to cause some backend servers to get a greater share of the workload than others. This is sometimes used as a crude way to account for some servers being faster than others.
* Priority activation: When the number of available servers drops below a certain number, or load gets too high, standby servers can be brought online.
* SSL offload and acceleration: SSL processing can place a heavy burden on a web server's resources, especially its CPU, and end users may see slow responses (at the very least, the servers spend many cycles doing work they weren't designed to do). To address this, a load balancer capable of SSL offloading in specialized hardware may be used. When the load balancer terminates the SSL connections, the burden on the web servers is reduced and performance does not degrade for end users.
* Distributed Denial of Service (DDoS) attack protection: load balancers can provide features such as SYN cookies and delayed-binding (the back-end servers don't see the client until it finishes its TCP handshake) to mitigate SYN flood attacks and generally offload work from the servers to a more efficient platform.
* HTTP compression: reduces the amount of data to be transferred for HTTP objects by using gzip compression, which is available in all modern web browsers.
* TCP offload: different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a different TCP connection. This feature utilizes HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end servers.
* TCP buffering: the load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the server to move on to other tasks.
* HTTP caching: the load balancer can store static content so that some requests can be handled without contacting the web servers.
* Content Filtering: some load balancers can arbitrarily modify traffic on the way through.
* HTTP security: some load balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so end users can't manipulate them.
* Priority queuing: also known as rate shaping, the ability to give different priority to different traffic.
* Content aware switching: most load balancers can send requests to different servers based on the URL being requested.
* Client authentication: authenticate users against a variety of authentication sources before allowing them access to a website.
* Programmatic traffic manipulation: at least one load balancer allows the use of a scripting language to allow custom load balancing methods, arbitrary traffic manipulations, and more.
* Firewall: Direct connections to backend servers are prevented for security reasons.

In telecommunications
Load balancing can be useful when dealing with redundant communications links. For example, a company may have multiple Internet connections ensuring network access even if one of the connections should fail.
A failover arrangement would mean that one link is designated for normal use, while the second link is used only if the first one fails.
With load balancing, both links can be in use all the time. A device or program decides which of the available links to send packets along, being careful not to send packets along any link if it has failed. The ability to use multiple links simultaneously increases the available bandwidth.
Major telecommunications companies have multiple routes through their networks or to external networks. They use more sophisticated load balancing to shift traffic from one path to another to avoid network congestion on any particular link, and sometimes to minimize the cost of transit across external networks or improve network reliability.

Relationship with failover
Load balancing is often used to implement failover — the continuation of a service after the failure of one or more of its components. The components are monitored continually (e.g., web servers may be monitored by fetching known pages), and when one becomes non-responsive, the load balancer is informed and no longer sends traffic to it. And when a component comes back on line, the load balancer begins to route traffic to it again. For this to work, there must be at least one component in excess of the service's capacity. This is much less expensive and more flexible than failover approaches where a single "live" component is paired with a single "backup" component that takes over in the event of a failure. In a RAID disk controller, using RAID1 (mirroring) is analogous to the "live/backup" approach to failover, where RAID5 is analogous to load balancing failover.
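The monitoring loop described here (fetch a known page, take non-responsive servers out of rotation) can be sketched as follows. The backend map and health-check URLs are hypothetical:

```python
import urllib.request

def probe(url: str, timeout: float = 2.0) -> bool:
    """Fetch a known page; a successful fetch counts as alive."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except Exception:
        return False

def healthy_backends(backends: dict) -> list:
    # backends maps server name -> health-check URL (hypothetical layout).
    # Only servers whose probe succeeds stay in the rotation.
    return [name for name, url in backends.items() if probe(url)]
```

A real monitor would run this on a timer, require several consecutive failures before removing a server, and re-admit servers once probes succeed again, so that transient glitches don't cause flapping.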

Network Load Balancing Services (NLBS)
NLBS is a proprietary Microsoft implementation of clustering and load balancing intended to provide high availability, high reliability, and high scalability. It is intended for applications with relatively small data sets that rarely change (web pages, for example) and that do not hold long-running in-memory state. These types of applications are called stateless applications, and typically include web, File Transfer Protocol (FTP), and virtual private networking (VPN) servers. Every client request to a stateless application is a separate transaction, so it is possible to distribute the requests among multiple servers to balance the load. One attractive feature of NLBS is that all servers in a cluster monitor each other with a heartbeat signal, so there is no single point of failure.

Configuration Tips:
* The network load balancing service requires all machines to have the correct local time. Ensure the Windows Time Service is properly configured on all hosts to keep clocks synchronized. Unsynchronized clocks will cause a network login screen to pop up that does not accept valid login credentials.
* The server console can't have any network card dialog boxes open while you are configuring the "Network Load Balancing Manager" from your client machine.
* You have to manually add each load balancing server individually to the load balancing cluster after you've created a cluster host.
* To allow communication between servers in the same NLB cluster, each server requires the following registry entry: a DWORD key named "UnicastInterHostCommSupport" and set to 1, for each network interface card's GUID (HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\WLBS\Parameters\Interface\{GUID})
* NLBS may conflict with some Cisco routers, which are not able to resolve the IP address of the server and must be configured with a static ARP entry.

Cheers, frizzy2008.

Visitor trends

Fahmi Rizwansyah says:

The online world really is tough, friends...


My only real strategy is blogwalking, so this is how things turn out. When I'm diligent and focused, the numbers should climb. But when I have to share time with my family and step away from blogging, the visitor trend dives sharply.
No matter, staying motivated. After this I'm going to analyze http://xsemua.blogspot.com/ and http://jokosupriyanto.com/ to see why their traffic is so strong on http://www.indotopblog.com/.
I clearly need to study more! Please help me out, fellow bloggers.

Cheers, frizzy.

Pagerank from Google Official Site

Fahmi Rizwansyah says:

Technology Overview

We stand alone in our focus on developing the "perfect search engine," defined by co-founder Larry Page as something that, "understands exactly what you mean and gives you back exactly what you want."
To that end, we have persistently pursued innovation and refused to accept the limitations of existing models.


As a result, we developed our serving infrastructure and breakthrough PageRank™ technology that changed the way searches are conducted.

From the beginning, our developers recognized that providing the fastest, most accurate results required a new kind of server setup. Whereas most search engines ran off a handful of large servers that often slowed under peak loads, ours employed linked PCs to quickly find each query's answer. The innovation paid off in faster response times, greater scalability and lower costs. It's an idea that others have since copied, while we have continued to refine our back-end technology to make it even more efficient.

The software behind our search technology conducts a series of simultaneous calculations requiring only a fraction of a second. Traditional search engines rely heavily on how often a word appears on a web page. We use more than 200 signals, including our patented PageRank™ algorithm, to examine the entire link structure of the web and determine which pages are most important. We then conduct hypertext-matching analysis to determine which pages are relevant to the specific search being conducted.

By combining overall importance and query-specific relevance, we're able to put the most relevant and reliable results first.

PageRank Technology: PageRank reflects our view of the importance of web pages by considering more than 500 million variables and 2 billion terms. Pages that we believe are important pages receive a higher PageRank and are more likely to appear at the top of the search results.

PageRank also considers the importance of each page that casts a vote, as votes from some pages are considered to have greater value, thus giving the linked page greater value. We have always taken a pragmatic approach to help improve search quality and create useful products, and our technology uses the collective intelligence of the web to determine a page's importance.
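The voting idea behind PageRank can be illustrated with a toy power-iteration sketch. The damping factor and the three-page graph below are illustrative choices only, not Google's actual implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy power-iteration PageRank.

    links maps each page to the list of pages it links to; each outgoing
    link acts as a 'vote' whose weight depends on the voter's own rank.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:  # split this page's vote evenly among its out-links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# C is linked to by both A and B, so it ends up with the highest rank.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
```

Pages with more, and more important, in-links accumulate higher rank, which is the "votes from some pages have greater value" behavior described above.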

Hypertext-Matching Analysis: Our search engine also analyzes page content. However, instead of simply scanning for page-based text (which can be manipulated by site publishers through meta-tags), our technology analyzes the full content of a page and factors in fonts, subdivisions and the precise location of each word. We also analyze the content of neighboring web pages to ensure the results returned are the most relevant to a user's query.

Our innovations don't stop at the desktop. To give people access to the information they need, whenever and wherever they need it, we continue to develop new mobile applications and services that are more accessible and customizable. And we're partnering with industry-leading carriers and device manufacturers to deliver these innovative services globally. We're working with many of these industry leaders through the Open Handset Alliance to develop Android, the first complete, open, and free mobile platform, which will offer people a less expensive and better mobile experience.

Life of a Google Query
The life span of a Google query normally lasts less than half a second, yet involves a number of different steps that must be completed before results can be delivered to a person seeking information.



Cheers, frizzy.