Blog campur-campur

Enterprise Network Security

Fahmi Rizwansyah says:

SECURE AND PROTECT YOUR NETWORK FROM WEB-BASED THREATS

The effectiveness of enterprise network security in an organization becomes painfully clear after a web-based attack. Firewalls and antivirus software alone cannot protect a network against the complex malicious code that threatens the IT infrastructure. Firewalls can see web traffic, but most have no way to inspect the content being transferred. Most antivirus solutions are effective against specific threats only after those threats have appeared. These reactive solutions are insufficient because they do not protect against unknown future attacks.

Unfortunately, organizations cannot predict when or where the next threat will strike. The answer to this shortfall is to plan ahead for new and emerging threats. Organizations need to enhance their existing security defenses with a solution that offers true content management. Such security software complements firewalls and antivirus solutions to provide a more comprehensive enterprise network security solution. The three points of policy enforcement (at the Internet gateway, on the network, and on the desktop) create the multilayered, content-level protection required for the employee computing environment.

Web Security Gateway
Secures Web traffic while enabling Web-based tools and applications.
Web 2.0 threat protection
Visibility and control with Web-based GUI
Real-time threat detection and site categorization
Enterprises with 500+ employees.

Web Security
Web security, reputation, and filtering protects against known and new Web threats.
Real-time protection against spyware, malicious mobile code, and other threats
Instantly actionable data with an intuitive dashboard
Web-based GUI reduces management costs
Enterprises up to 250,000+ users, with networks of virtually any configuration.

Web Filter
Improves productivity, reduces legal liability, and optimizes IT resources while allowing flexible Internet use policies.
Control Internet access
Prevent inappropriate content from entering the network
Limit bandwidth-intensive sites and applications
Enterprises from 50 to 250,000+ users, with networks of virtually any configuration.

Cheers, frizzy.

Contract Management Software

Fahmi Rizwansyah says:

Contract Management Software allows organizations to effectively manage the various types of contracts they engage in, including buy-side, sell-side, and non-monetary contracts.

In general, the benefits of implementing Contract Management Software are seen in the reduction in time and effort and improvements in the contracting process. Specific areas that see improvement are:

* Improved information related to contracts and the activities governed by those contracts - better information and more of it.
* Streamlined processes that result in reduced operating expenses.
* Maximum realization of revenue and/or cost savings potential by maximizing the benefits of each contract through event management and performance and compliance monitoring.
* Maximum involvement of stakeholders through an online, paperless contracting process.
* Improved relationships with all stakeholders including staff, customers and suppliers.
* Strategic sourcing benefits - maximize buying power through better managed contracts.
* Business Intelligence through proper analysis of information about contracts and contracted activities.
* Visibility across all contracts for authorized people.
* Proactive, rather than reactive, contract management through notification of stakeholders about impending trigger points.
* Validation of payments, deliverables, commitments and compliance terms established in the actual contract.
* Compliance with negotiated terms and conditions, including rates, discounts and rebates.

Contract management software delivers extensive benefits in each of these areas out of the box, but when coupled with business process improvement aimed at maximizing the functionality of the contract management process, organizations can extend these strategic benefits significantly further.

Contract management software suppliers include: Accruent, Advanced Software Concepts, ARM Group, Blueridge Software, Capterra, CMSI, CobbleStone Systems, Covigna, Determine, diCarta, Ecteon, Exari, FieldCentrix, I-many, Ketera, Omniware, Open Windows Contracts, Procure, Salesforce.com, SAP and UpsideSoft.

General Features
* End-to-end Contract Management Solution
* Extremely flexible with user defined business rules and customized workflow
* Ability to manage various types of contracts
* Corporate repository of clauses, templates, and management indicators
* Completely Web-based
* Ensures contract and RFx (i.e. RFP, RFI, RFQ, etc.) visibility, monitoring, & reporting for all stakeholders
* Personal 'Dashboard'
* User-role-based functional view and navigation control

Advanced Workflow Management
* Template-based (static), business rules-based (dynamic) and organizational hierarchy-based workflow determination capabilities are supported
* Workflow determination and management can be applied in most modules of the system

Procurement Management
* Template-based RFx creation
* Dynamic workflow to manage RFx creation
* Structured RFx management
* Vendor access and participation

Request Processing
* Highly configurable, ‘wizard-like’ interface to capture user requests for:
o new RFx documents and associated procurement activities
o new contracts
o change orders
o supplements
o renewals
o terminations

Contract Creation
* Template-based contract creation
* Dynamic workflow to manage contract creation
* Related documents can be scanned and attached
* Online negotiation
* Contract calculations
* PDF support
* Rich text editor

Contract Management
* Notification and alerts for required tasks and events
* Performance monitoring
* Compliance monitoring
* Renewal, amendments and change management

Financial & Budget Monitoring and Management
* Manage contract commitments
* Automate payments
* Manage holdbacks
* Integration with UpsideBilling

Integration with Other Systems
* Interface with any ERP/Financial system
* Fills in the gap between ERP and CRM systems
* Integrates other systems used in business processes

Management Information
* Business Intelligence Support
* Management Reporting
* Ad Hoc Reporting

Reports can also be provided in Crystal Web format and include the data dictionary, so that external reporting with Crystal Reports can be performed.

IT Knowledges

Fahmi Rizwansyah says:

, Development, Networking, Exchange, AS/400, DataCenter, Security, SQL Server, Database, Lotus Domino, Storage, SAP, Oracle, Outlook, Servers, DataManagement, Tech support, CIO, Exchange 2003, Windows Server 2003, Desktops, SQL, RPG, Hardware, Management, OS, Active Directory, Routers, Linux, Windows XP, Mobile, SQL Server 2005, DNS, Wireless, Lotus Notes, VPN, DB2, VoIP, CRM, Outlook 2003, iSeries, DHCP, Switches, Backup & recovery, RPGLE, Firewalls, Microsoft Excel, Cisco, Virtualization, Networking services, Visual Basic, Exchange 2007, Network security, Application development, Cabling, Microsoft Access, SQL Server 2000, Career development, Intrusion management, Windows, Incident response, Forensics, Software, SAP careers, Exchange security, Instant Messaging, Windows 2000 Server,

Microsoft Office, Hubs, DB2 Universal Database, Certifications, Outlook 2007, CLP, Help Desk, Encryption, ABAP, Printing, Basis, Backup, Network protocols, Web development, Disaster Recovery, Bandwidth, Availability, Application security, Data analysis, secure coding, Oracle 10g, CL, Network connectivity, Outlook Web Access, Compliance, Java, Policies, Network management software, OWA, Network monitoring, LotusScript, COBOL, Crystal Reports, IT careers, i5, Domain Controller, TCP, VB.NET, Single sign-on, Biometrics, Channel, AS/400 printing, VB, Software testing, Viruses, VMware, Identity & Access Management, provisioning, SAP development, OS/400, Security tokens, Exchange Server, Digital certificates, Risk management, ERP, Security Program Management, Desktop management applications, Oracle 9i, Installation, SAN, Spyware, Outlook calendar, Outlook error messages, Web security, SBS 2003, PC/Windows Connectivity, BlackBerry, Trojans, Lotus Notes 6.x, Project management, Hacking, Performance management, Security management, Vista, VBA, Group Policy, RPG/400, worms, SMTP, Email, configuration, R/3, User Permissions, Current threats, backdoors, Exchange 2000, human factors, Printers, Access, Software Quality Assurance, Browsers, Ethernet, Database programming, Network design, Access control, IT architecture, Patch management, SharePoint, Platform Security, SSL/TLS, patching, vulnerability management, Mainframe, PEN testing, Platform Issues, Remote management, VBScript, RPG ILE, Programming Languages, SQL Server errors, filtering, ASP.NET, Domino Designer, Stored Procedures, Oracle development, Spam, Network testing, Database Management Systems, Visual Basic 6, Migration, MySAP, C, XML, IBM, Lifecycle development, RAID, Networking Equipment, FTP, Web services, SBS, Storage products and equipment, Training, Ping, E-mail applications, JavaScript, Remote access, Excel 2003, LAN, Microsoft Word, SQL Query, Administration, Network, Excel macros, Exchange error messages, 
Vendors, Third-party services, MySQL, Windows Vista, Interoperability, Calendar, Business/IT alignment, Web site design & management, NetWeaver, CCNA, SQL Server database, V5R4, Oracle error messages, Systems management software, E-business, Vendor support, Oracle 8i, Lotus Notes 7.x, Network applications management, Networking certifications, NIC, Performance/Tuning, Router configuration, IP addressing, Unix, Microsoft Systems Management Server, IPv4, Public folders, .PST files, SAP certifications, Exchange migration, Query, Data warehousing applications, POP3, SQL Database, AS/400 Query, AS/400 backup, Spool files, Windows 2000 desktop, Lotus development, AS/400 errors, LDAP, Software testing tools, SAP HR, SAP BI, Domain management, Implementation, PHP, Storage management, Visual Basic for Applications, Wireless networking, Excel 2007, VMware ESX, Visual Basic .NET, PL/SQL, SAP BW, Access 2007, IFS, VB 6, Exchange 5.5, ISA Server, EDI, 390, Windows Security, IIS, T-SQL, Software Quality, GAL, VLAN, Outlook Express, Windows client administration and maintenance, Upgrades / implementations, Protocol analysis, Web development tools, RPGILE, Physical files, Windows Server 2008, GPO, Symantec, standards, Oracle administration, ODBC, Call Centers, Antivirus, V5R3, IT jobs, Hard drives, ASP, Virtual Machines, C#, SQL Server backup, Lotus Sametime, 3Com, Distribution Lists, SQL Server migration, Access 2003, Data center operations, Data center design, Security products, Wireless routers, HTML, Data mining/analysis, NAS, BlackBerry email, AS/400 administration, AS/400 security, Synchronization, Architecture/Design, Outlook contacts, Bind, SQL Server upgrades, CL/400, Lotus error messages, NetBIOS, Domain, Enterprise Desktop, Lotus Notes 8.x, Exchange administration, PDF, Tools, mySAP Financials, Cisco Routers, Global Address List, Oracle Forms, Careers, SQL Server performance, tips and tricks, Network administration, Auditing, iNotes, Scripting, Outsourcing, Outlook 
meetings, Exchange user settings, SAP FI, Subfile, Financials, SIP, Disk drives, Registry, Laws, Regulations, DB2/400, Email forwarding, Lotus Notes Database, Oracle Reports, IPv6, PTFs, J2EE, Network performance, Budgeting, Lotus Domino Server 6.x, SAP CRM, SAP APO, ED, SSIS, Password, Information risk management, TR, NFS, mySAP Human Resources, FI, CO, Microsoft Outlook, TCP/IP, CICS, SAP Transaction Codes, WAN, DDS, Cisco certifications, Domino 6.5, Workflow, Billing Support Systems, Business Information Warehouse, Lotus, Planning, Address book, Billing and customer care, WINS, Shared folders, Backup Exec, RAID 5, SAP MM, Delphi, Application software, Dell, Operating system platforms, Fault isolation, SQL Server stored procedures, SharePoint 2007, Lotus Notes 6.5, AS/400 development, SQL Server 2008, SQL Server tables, Veritas, Lotus email, mySAP Accounting, SQL Server administration, Service and support, Tape Backups, ECC6, MCSE, AS/400 FTP, Database connectivity, IP address, SAP FICO, Cisco switches, ROI & cost justification, VB.NET 2005, Avaya, z/OS, Benchmarking, CLLE, Internet Explorer, Oracle import/export, AS/400 performance, Subnets, Arrays, MPLS, Frame Relay, Windows Mobile, CSV, Mapped drives, BES, Non-Delivery Reports, Office 2003, iSCSI, BlackBerry Enterprise Server, Lotus Notes Calendar, Power management, mySAP Enterprise, zSeries, General Directories, Vulnerability Assessment & Audit, Hewlett-Packard, Wireless Access Points, EMC, Wi-Fi, Software development, SQL Server Reporting Services, SQL statements, Lotus Domino Server 7.x, Vista compatibility, Red Hat Enterprise Linux, mySAP Application Link Enabling, SQL Server security, Network Interface Cards, Exchange Server ActiveSync, McAfee, ActiveX, PIX, SELECT statement, Ubuntu Linux, iSeries development, Routing and switching, BRMS, Oracle SQL, Lotus Agents, ComboBox, Visual Basic 2005, .NET, Computer Associates, Nortel, Mobile synchronization, Strategic Enterprise Management, Intel, Server 
management, Oracle Application Server, Juniper Networks, Domino Server, mySAP Supplier Relationship Management, AS/400 Client Access, AS/400 careers, V5R2, AS/400 user profiles, ISP, CL programming, Word 2007, Telecom, RPG IV, mySAP CRM, Integration/Connectivity, Novell NDS, JSP, Web, SCSI, User access, Exchange 2003 SP2, Access Database

Cheers, frizzy.


Virtualization...continue

Fahmi Rizwansyah says:

Following are some (possibly overlapping) representative reasons for and benefits of virtualization.


* Virtual machines can be used to consolidate the workloads of several under-utilized servers to fewer machines, perhaps a single machine (server consolidation). Related benefits (perceived or real, but often cited by vendors) are savings on hardware, environmental costs, management, and administration of the server infrastructure.
* The need to run legacy applications is served well by virtual machines. A legacy application might simply not run on newer hardware and/or operating systems. Even if it does, it may under-utilize the server, so as above, it makes sense to consolidate several applications. This may be difficult without virtualization, as such applications are usually not written to co-exist within a single execution environment (consider applications with hard-coded System V IPC keys, as a trivial example).
* Virtual machines can be used to provide secure, isolated sandboxes for running untrusted applications. You could even create such an execution environment dynamically - on the fly - as you download something from the Internet and run it. You can think of creative schemes, such as those involving address obfuscation. Virtualization is an important concept in building secure computing platforms.
* Virtual machines can be used to create operating systems, or execution environments with resource limits, and given the right schedulers, resource guarantees. Partitioning usually goes hand-in-hand with quality of service in the creation of QoS-enabled operating systems.
* Virtual machines can provide the illusion of hardware, or of a hardware configuration, that you do not have (such as SCSI devices or multiple processors). Virtualization can also be used to simulate networks of independent computers.
* Virtual machines can be used to run multiple operating systems simultaneously: different versions, or even entirely different systems, which can be on hot standby. Some such systems may be hard or impossible to run on newer real hardware.
* Virtual machines allow for powerful debugging and performance monitoring. You can put such tools in the virtual machine monitor, for example. Operating systems can be debugged without losing productivity, or setting up more complicated debugging scenarios.
* Virtual machines can isolate what they run, so they provide fault and error containment. You can inject faults proactively into software to study its subsequent behavior.
* Virtual machines make software easier to migrate, thus aiding application and system mobility.
* You can treat application suites as appliances by "packaging" and running each in a virtual machine.
* Virtual machines are great tools for research and academic experiments. Since they provide isolation, they are safer to work with. They encapsulate the entire state of a running system: you can save the state, examine it, modify it, reload it, and so on. The state also provides an abstraction of the workload being run.
* Virtualization can enable existing operating systems to run on shared memory multiprocessors.
* Virtual machines can be used to create arbitrary test scenarios, and can lead to some very imaginative, effective quality assurance.
* Virtualization can be used to retrofit new features in existing operating systems without "too much" work.
* Virtualization can make tasks such as system migration, backup, and recovery easier and more manageable.
* Virtualization can be an effective means of providing binary compatibility.
* Virtualization on commodity hardware has been popular in co-located hosting. Many of the above benefits make such hosting secure, cost-effective, and appealing in general.
* Virtualization is fun.
* Plenty of other reasons ...

Variations
Generically speaking, in order to virtualize, you would use a layer of software that provides the illusion of a "real" machine to multiple instances of "virtual machines". This layer is traditionally called the Virtual Machine Monitor (VMM).

There are many (often intertwined) high-level ways to think about a virtualization system's architecture. Consider some scenarios:
A VMM could itself run directly on the real hardware - without requiring a "host" operating system. In this case, the VMM is the (minimal) OS.

A VMM could be hosted, and would run entirely as an application on top of a host operating system. It would use the host OS API to do everything. Furthermore, depending on whether the host and the virtual machine's architectures are identical or not, instruction set emulation may be involved.
From the point of view of how (and where) instructions get executed, there is a spectrum: all instructions that execute on a virtual machine can be handled in software; most instructions (maybe even some privileged instructions) can execute directly on the real processor, with certain instructions handled in software; or all privileged instructions can be handled in software.
A different approach, with rather different goals, is that of complete machine simulation. SimOS and Simics, as discussed later, are examples of this approach.
Although some architectures have been designed explicitly with virtualization in mind, a typical hardware platform and a typical operating system are not very conducive to virtualization.

As mentioned above, many architectures have privileged and non-privileged instructions. Assume the programs you want to run on the various virtual machines are all native to the architecture (in other words, no emulation of the instruction set is needed), so each virtual machine can run in non-privileged mode. One would imagine that non-privileged instructions can be directly executed (without involving the VMM), while privileged instructions, because they are being executed in non-privileged mode, would cause a trap; they can then be "caught" by the VMM, which takes appropriate action (simulating them in software, say). Problems arise from the fact that there may be instructions that are non-privileged but whose behavior depends on the processor mode: these instructions are sensitive, but they do not cause traps.
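The trap-and-emulate idea above, and the way sensitive-but-unprivileged instructions defeat it, can be sketched in a few lines. This is a toy model with a made-up three-instruction ISA, not any real architecture:

```python
PRIVILEGED = {"SET_TIMER"}        # traps when executed in user mode
SENSITIVE_UNPRIV = {"READ_MODE"}  # mode-dependent, but does NOT trap

class Trap(Exception):
    pass

class CPU:
    def __init__(self):
        self.mode = "user"        # guest code runs de-privileged

    def execute(self, instr):
        if instr in PRIVILEGED and self.mode == "user":
            raise Trap(instr)     # hardware catches the privileged instruction
        if instr == "READ_MODE":
            return self.mode      # silently leaks "user"; the guest expects "kernel"
        return "ok"

class VMM:
    """Runs guest instructions, emulating any that trap."""
    def __init__(self, cpu):
        self.cpu = cpu

    def run(self, instr):
        try:
            return self.cpu.execute(instr)
        except Trap as t:
            return f"emulated {t}"  # VMM simulates the instruction in software

vmm = VMM(CPU())
print(vmm.run("ADD"))        # executes directly on the processor
print(vmm.run("SET_TIMER"))  # trapped and emulated by the VMM
print(vmm.run("READ_MODE"))  # sensitive but non-trapping: the VMM never sees it
```

The last call is the crux of the problem: because READ_MODE never traps, the VMM gets no chance to intervene, and the guest silently observes the wrong processor mode.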

Cheers, frizzy.

Virtualization

Fahmi Rizwansyah says:

Virtualization Management
Virtualization technologies can deliver sea-changing benefits to your organization. But, as Thomas Bittman, Gartner analyst, adroitly noted, "Virtualization without good management is more dangerous than not using virtualization in the first place."
As an organization's computing environment gets more virtualized, it also gets more abstract. Increasing abstraction can increase complexity, making it harder for IT staff to control their world and undermining the benefits of virtualization.


Integrating physical and virtual management enables you to realize the full promise of virtualization while minimizing its risks. An integrated approach is critical because all IT infrastructures—even those with a significant amount of virtualization—include both virtual and physical components. Even if you have a management system that effectively handles virtualized systems, if it doesn't manage physical systems you will still have to manage many separate "islands"—and you will consume much more time and resources than you would with a management system that can handle all your assets. By using comprehensive virtualization management technology, you keep complexity at a minimum and streamline operations. A common virtualization management environment reduces training, ensures uniform policy application and simplifies maintenance.

Effective physical and virtual management can optimize the benefits of using virtualization technologies. This includes monitoring and managing hardware and software in a distributed environment. By allowing operations staff to monitor both the software running on physical machines and the physical machines themselves, it lets them know what's happening in their environment. It also lets them respond appropriately, running tasks and taking other actions to fix problems that occur.

Another unavoidable concern for people who manage virtualized, distributed environments is installing software and managing how that software is configured. While it's possible to perform these tasks by hand, end-to-end virtualization management technology can automate and accelerate this process.

Tools that work in both the physical and virtual worlds are highly effective and attractive. Yet think about an environment that has dozens or even hundreds of VMs installed. How are these machines built, changed and depreciated? And how are other VM-specific management functions performed? Addressing these questions requires a comprehensive toolset that includes managing virtualized hardware. Among other benefits, it can help operations staff choose workloads for virtualization, create the VMs that will run those workloads, and transfer the applications to their new homes.

Server Virtualization
For most IT people, the word "virtualization" conjures up thoughts of running multiple operating systems on a single physical machine. This is hardware virtualization, and while it's not the only important kind of virtualization, it is unquestionably the most visible today.
The core idea of hardware virtualization is simple: Use software to create a virtual machine that emulates a physical computer. This creates a separate OS environment that is logically isolated from the host server. By providing multiple VMs at once, this approach allows running several operating systems simultaneously on a single physical machine.

Rather than paying for many under-utilized server machines, each dedicated to a specific workload, server virtualization allows consolidating those workloads onto a smaller number of more fully-used machines. This implies fewer people to manage those computers, less space to house them, and fewer kilowatt hours of power to run them, all of which saves money.
Server virtualization also makes restoring failed systems easier. VMs are stored as files, and so restoring a failed system can be as simple as copying its file onto a new machine. Since VMs can have different hardware configurations from the physical machine on which they're running, this approach also allows restoring a failed system onto any available machine. There's no requirement to use a physically identical system.

Application Virtualization
In a physical environment, every application depends on its OS for a range of services, including memory allocation, device drivers, and much more. Incompatibilities between an application and its operating system can be addressed by either server virtualization or presentation virtualization. But for incompatibilities between two applications installed on the same instance of an OS, you need application virtualization.
Applications installed on the same device commonly share configuration elements, yet this sharing can be problematic. For example, one application might require a specific version of a dynamic link library (DLL) to function, while another application on that system might require a different version of the same DLL. Installing both applications creates a situation where one of them overwrites the version required by the other, causing that application to malfunction or crash. To avoid this, organizations often perform extensive compatibility testing before installing a new application, an approach that's workable but quite time-consuming and expensive.

Application virtualization solves this problem by creating application-specific copies of all shared resources. The configurations an application might share with other applications on its system—registry entries, specific DLLs, and more—are instead packaged with it and execute in the machine's cache, creating a virtual application. When a virtual application is deployed, it uses its own copy of these shared resources.
Application virtualization makes deployment significantly easier. Since applications no longer compete for DLL versions or other shared aspects of their environment, there's little need to test new applications for conflicts with existing applications before they're rolled out. And these virtual applications can run alongside ordinary, installed applications—so not everything needs to be virtualized, although doing so avoids many problems and decreases the time end-users spend with the helpdesk trying to resolve them. An effective application virtualization solution also enables you to manage both virtual applications and installed applications from a common interface.
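The DLL-conflict scenario above can be illustrated with a small sketch. The class, file, and version names are hypothetical; real products implement this with far more machinery, but the lookup order is the essential idea:

```python
# Shared, globally installed resources on the machine (hypothetical names).
SYSTEM_DLLS = {"report.dll": "v1"}

class VirtualApp:
    """Each virtual application carries private copies of shared resources."""
    def __init__(self, name, private_dlls=None):
        self.name = name
        self.private_dlls = private_dlls or {}

    def load_dll(self, dll):
        # Look in the app's own package first, then fall back to the system copy.
        if dll in self.private_dlls:
            return self.private_dlls[dll]
        return SYSTEM_DLLS[dll]

legacy = VirtualApp("LegacyApp", private_dlls={"report.dll": "v1"})
modern = VirtualApp("ModernApp", private_dlls={"report.dll": "v2"})

# Both apps run side by side, each seeing the DLL version it requires,
# and neither installation overwrites the other's copy.
print(legacy.load_dll("report.dll"))  # v1
print(modern.load_dll("report.dll"))  # v2
```

Because each application resolves shared resources from its own package first, the overwrite conflict described above simply cannot arise.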

Storage Virtualization
Generally speaking, storage virtualization refers to providing a logical, abstracted view of physical storage devices. It provides a way for many users or applications to access storage without being concerned with where or how that storage is physically located or managed. It enables the physical storage in an environment to be shared across multiple application servers, and physical devices behind the virtualization layer to be viewed and managed as if they were one large storage pool with no physical boundaries.

Virtualizing storage networks enables two key additional capabilities:
* The ability to mask or hide volumes from servers that are not authorized to access those volumes, providing an additional level of security.
* The ability to change and grow volumes on the fly to meet the needs of individual servers.

Essentially, anything other than a locally attached disk drive might be viewed in this light. Typically, storage virtualization applies to larger SAN (storage area network) arrays, but it is just as accurately applied to the logical partitioning of a local desktop hard drive, redundant array of independent disks (RAID), volume management, virtual memory, file systems and virtual tape. A very simple example is folder redirection in Windows, which lets the information in a folder be stored on any network-accessible drive. Much more powerful (and more complex) approaches include SANs. Large enterprises have long benefited from SAN technologies, in which storage is uncoupled from servers and attached directly to the network. By sharing storage on the network, SANs enable highly scalable and flexible storage resource allocation, high efficiency backup solutions, and better storage utilization.
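The pooling, masking, and on-the-fly growth described above can be sketched as follows. All device names, sizes, and method names are illustrative assumptions, not a real SAN API:

```python
class StoragePool:
    """Virtualization layer presenting several physical devices as one pool."""
    def __init__(self, devices):
        self.free = dict(devices)   # device name -> free capacity in GB
        self.volumes = {}           # volume name -> list of (device, GB) extents
        self.masking = {}           # volume name -> set of authorized servers

    def _allocate(self, size_gb):
        # Carve extents from whichever devices have space; callers never
        # see physical boundaries, only the logical volume.
        extents, need = [], size_gb
        for dev, free in self.free.items():
            take = min(need, free)
            if take:
                self.free[dev] -= take
                extents.append((dev, take))
                need -= take
            if need == 0:
                break
        if need:
            raise ValueError("pool exhausted")
        return extents

    def create_volume(self, name, size_gb, allowed_servers):
        self.volumes[name] = self._allocate(size_gb)
        self.masking[name] = set(allowed_servers)

    def grow_volume(self, name, extra_gb):
        # Grow on the fly: append extents without the server noticing.
        self.volumes[name] += self._allocate(extra_gb)

    def size(self, name):
        return sum(gb for _, gb in self.volumes[name])

    def visible_to(self, name, server):
        return server in self.masking[name]

pool = StoragePool({"disk_a": 100, "disk_b": 100})
pool.create_volume("vol1", 150, allowed_servers={"web01"})  # spans both disks
pool.grow_volume("vol1", 20)
print(pool.size("vol1"))                # 170
print(pool.visible_to("vol1", "db01"))  # False: masked from unauthorized servers
```

The two key capabilities from the bullet list appear directly: `visible_to` implements masking, and `grow_volume` changes a volume's size without the consuming server caring which physical disks are involved.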

Sources: http://www.microsoft.com/virtualization/products.mspx
Cheers, frizzy2008.

Load Balancing

Fahmi Rizwansyah says:

From Wikipedia, the free encyclopedia

In computer networking, load balancing is a technique to spread work between two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, and minimize response time. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch).
It is commonly used to mediate internal communications in computer clusters, especially high-availability clusters.


For Internet services
One of the most common applications of load balancing is to provide a single Internet service from multiple servers, sometimes known as a server farm. Commonly load-balanced systems include popular web sites, large Internet Relay Chat networks, high-bandwidth File Transfer Protocol sites, NNTP servers and DNS servers.
For Internet services, the load balancer is usually a software program which is listening on the port where external clients connect to access services. The load balancer forwards requests to one of the "backend" servers, which usually replies to the load balancer. This allows the load balancer to reply to the client without the client ever knowing about the internal separation of functions. It also prevents clients from contacting backend servers directly, which may have security benefits by hiding the structure of the internal network and preventing attacks on the kernel's network stack or unrelated services running on other ports.
Some load balancers provide a mechanism for doing something special in the event that all backend servers are unavailable. This might include forwarding to a backup load balancer, or displaying a message regarding the outage.
An alternate method of load balancing, which does not necessarily require a dedicated software or hardware node, is called round robin DNS. In this technique, multiple IP addresses are associated with a single domain name (e.g. www.example.org); clients themselves are expected to choose which server to connect to. Unlike the use of a dedicated load balancer, this technique is not "transparent" to clients, because it exposes the existence of multiple backend servers. The technique has other advantages and disadvantages, depending on the degree of control over the DNS server and the granularity of load balancing desired.
A variety of scheduling algorithms are used by load balancers to determine which backend server to send a request to. Simple algorithms include random choice or round robin. More sophisticated load balancers may take into account additional factors, such as a server's reported load, recent response times, up/down status (determined by a monitoring poll of some kind), number of active connections, geographic location, capabilities, or how much traffic it has recently been assigned. High-performance systems may use multiple layers of load balancing.
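Two of the simple strategies mentioned above, round robin and least connections, can be sketched in a few lines. The backend addresses are placeholders, and a real balancer would also decrement counts when connections close:

```python
import itertools

BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round robin: cycle through the backends in order, ignoring their load.
_rr = itertools.cycle(BACKENDS)
def round_robin():
    return next(_rr)

# Least connections: pick the backend with the fewest active connections.
active = {b: 0 for b in BACKENDS}
def least_connections():
    backend = min(active, key=active.get)
    active[backend] += 1   # the balancer tracks connections it has assigned
    return backend

print([round_robin() for _ in range(4)])  # wraps around after the third backend
print(least_connections())                # all idle, so the first backend wins
```

A sophisticated balancer would fold in the other factors the paragraph mentions (health checks, response times, geography) as weights in the same selection step.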
In addition to using dedicated hardware load balancers, software-only solutions are available, including open source options. Examples of the latter include the Apache web server's mod_proxy_balancer extension and the Pound reverse proxy and load balancer.


Persistence
An important issue when operating a load-balanced service is how to handle information that must be kept across the multiple requests in a user's session. If this information is stored locally on one back end server, then subsequent requests going to different back end servers would not be able to find it. This might be cached information that can be recomputed, in which case load-balancing a request to a different back end server just introduces a performance issue.
One solution to the session data issue is to send all requests in a user session consistently to the same back end server. This is known as "persistence" or "stickiness". A large downside to this technique is its lack of automatic failover: if a backend server goes down, its per-session information becomes inaccessible, and sessions depending on it are lost.
Assignment to a particular server might be based on a username, client IP address, or random assignment. Due to DHCP, Network Address Translation, and web proxies, the client's IP address may change across requests, and so this method can be somewhat unreliable. Random assignments must be remembered by the load balancer, which creates a storage burden. If the load balancer is replaced or fails, this information can be lost, and assignments may need to be deleted after a timeout period or during periods of high load, to avoid exceeding the space available for the assignment table. The random assignment method also requires that clients maintain some state, which can be a problem, for example when a web browser has disabled storage of cookies. Sophisticated load balancers use multiple persistence techniques to avoid some of the shortcomings of any one method.
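One way to get persistence without any assignment table at all is to hash the client's IP address onto the backend pool, so the mapping is deterministic rather than stored. A minimal sketch (addresses are illustrative), with the caveat noted above that a client's IP can change:

```python
import hashlib

def sticky_backend(client_ip, backends):
    """Map a client IP to a backend deterministically (IP-hash persistence).

    No assignment table is needed: the same IP always hashes to the same
    server. The trade-off is that if the pool size changes, most clients
    are remapped to a different server.
    """
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

pool = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
# The same client lands on the same server on every request.
assert sticky_backend("203.0.113.7", pool) == sticky_backend("203.0.113.7", pool)
```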
Another solution is to keep the per-session data in a database. Generally this is bad for performance since it increases the load on the database: the database is best used to store information less transient than per-session data. (Interestingly, to prevent a database from becoming a single point of failure, and to improve scalability, the database is often replicated across multiple machines, and load balancing is used to spread the query load across those replicas.)
Fortunately there are more efficient approaches. In the very common case where the client is a web browser, per-session data can be stored in the browser itself. One technique is to use a browser cookie, suitably time-stamped and encrypted. Another is URL rewriting. Storing session data on the client is generally the preferred solution: then the load balancer is free to pick any backend server to handle a request.
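A rough sketch of the cookie technique, using Python's standard library: the session data is timestamped and signed with an HMAC so any backend can verify it (the text mentions encryption; for brevity this sketch only signs for integrity, and the secret key is a placeholder).

```python
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # illustrative; in practice, never hard-code this

def make_session_cookie(data, secret=SECRET):
    """Serialize session data with a timestamp and an HMAC signature."""
    payload = json.dumps({"data": data, "ts": time.time()})
    sig = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return f"{sig}:{payload}"

def read_session_cookie(cookie, secret=SECRET, max_age=3600):
    """Verify the signature and timestamp; return the data, or None if invalid."""
    sig, _, payload = cookie.partition(":")
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                      # tampered with
    body = json.loads(payload)
    if time.time() - body["ts"] > max_age:
        return None                      # stale cookie
    return body["data"]

cookie = make_session_cookie({"user": "alice", "cart": [42]})
session = read_session_cookie(cookie)    # any backend server can do this
```

Because every backend holds the same secret, whichever server receives the request can validate the cookie, which is what frees the load balancer to route arbitrarily.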

Load balancer features
Hardware and software load balancers can come with a variety of special features.

* Asymmetric load: A ratio can be manually assigned to cause some backend servers to get a greater share of the workload than others. This is sometimes used as a crude way to account for some servers being faster than others.
* Priority activation: When the number of available servers drops below a certain number, or load gets too high, standby servers can be brought online.
* SSL Offload and Acceleration: SSL processing can place a heavy burden on a web server's resources, especially the CPU, and end users may see slow responses (at the very least, the servers spend many cycles on work they were not designed for). To address this, a load balancer capable of SSL offloading in specialized hardware may be used. When the load balancer terminates the SSL connections, the burden on the web servers is reduced and performance does not degrade for end users.
* Distributed Denial of Service (DDoS) attack protection: load balancers can provide features such as SYN cookies and delayed-binding (the back-end servers don't see the client until it finishes its TCP handshake) to mitigate SYN flood attacks and generally offload work from the servers to a more efficient platform.
* HTTP compression: reduces the amount of data to be transferred for HTTP objects by utilizing gzip compression, which is available in all modern web browsers.
* TCP offload: different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a different TCP connection. This feature utilizes HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end servers.
* TCP buffering: the load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the server to move on to other tasks.
* HTTP caching: the load balancer can store static content so that some requests can be handled without contacting the web servers.
* Content Filtering: some load balancers can arbitrarily modify traffic on the way through.
* HTTP security: some load balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so end users can't manipulate them.
* Priority queuing: also known as rate shaping, the ability to give different priority to different traffic.
* Content aware switching: most load balancers can send requests to different servers based on the URL being requested.
* Client authentication: authenticate users against a variety of authentication sources before allowing them access to a website.
* Programmatic traffic manipulation: at least one load balancer allows the use of a scripting language to allow custom load balancing methods, arbitrary traffic manipulations, and more.
* Firewall: direct connections to backend servers are prevented, for security reasons.

In telecommunications
Load balancing can be useful when dealing with redundant communications links. For example, a company may have multiple Internet connections ensuring network access even if one of the connections should fail.
A failover arrangement would mean that one link is designated for normal use, while the second link is used only if the first one fails.
With load balancing, both links can be in use all the time. A device or program decides which of the available links to send packets along, being careful not to send packets along any link if it has failed. The ability to use multiple links simultaneously increases the available bandwidth.
Major telecommunications companies have multiple routes through their networks or to external networks. They use more sophisticated load balancing to shift traffic from one path to another to avoid network congestion on any particular link, and sometimes to minimize the cost of transit across external networks or improve network reliability.

Relationship with failover
Load balancing is often used to implement failover — the continuation of a service after the failure of one or more of its components. The components are monitored continually (e.g., web servers may be monitored by fetching known pages), and when one becomes non-responsive, the load balancer is informed and no longer sends traffic to it. When a component comes back online, the load balancer begins to route traffic to it again. For this to work, there must be at least one component in excess of the service's capacity. This is much less expensive and more flexible than failover approaches where a single "live" component is paired with a single "backup" component that takes over in the event of a failure. In a RAID disk controller, using RAID1 (mirroring) is analogous to the "live/backup" approach to failover, while RAID5 is analogous to load-balancing failover.
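The health-check half of this arrangement can be sketched simply: probe a known page on each server and drop non-responders from the pool. The `/health` path and the probe interval policy are assumptions for the example, not a specific product's behavior.

```python
import urllib.request

def probe(server, path="/health", timeout=2):
    """Fetch a known page from a backend; any error or non-200 means 'down'."""
    try:
        with urllib.request.urlopen(f"http://{server}{path}", timeout=timeout) as r:
            return r.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout, etc. all count as down.
        return False

def healthy_pool(servers):
    """Return only the servers that pass the probe; traffic goes to these."""
    return [s for s in servers if probe(s)]
```

A real monitor would run these probes on a timer and re-admit a server only after several consecutive successes, to avoid flapping.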

Network Load Balancing Services (NLBS)
NLBS is a proprietary Microsoft implementation of clustering and load balancing that is intended to provide high availability, high reliability, and high scalability. NLBS is intended for applications with relatively small data sets that rarely change (web pages, for example) and that do not hold long-running in-memory state. These types of applications are called stateless applications, and typically include Web, File Transfer Protocol (FTP), and virtual private networking (VPN) servers. Every client request to a stateless application is a separate transaction, so it is possible to distribute the requests among multiple servers to balance the load. One attractive feature of NLBS is that all servers in a cluster monitor each other with a heartbeat signal, so there is no single point of failure.

Configuration Tips:
* The Network Load Balancing service requires all machines to have the correct local time. Ensure the Windows Time Service is properly configured on all hosts to keep clocks synchronized. Unsynchronized clocks will cause a network login screen to pop up that does not accept valid login credentials.
* The server console must not have any network card dialogue boxes open while you are configuring the "Network Load Balancing Manager" from your client machine.
* You have to manually add each load balancing server individually to the load balancing cluster after you've created a cluster host.
* To allow communication between servers in the same NLB cluster, each server requires the following registry entry: a DWORD key named "UnicastInterHostCommSupport" and set to 1, for each network interface card's GUID (HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\WLBS\Parameters\Interface\{GUID})
* NLBS may conflict with some Cisco routers, which are not able to resolve the IP address of the server and must be configured with a static ARP entry.

Cheers, frizzy2008.

Cloud Computing

Fahmi Rizwansyah says:

compiled from various sources

Cloud computing provides a cost-effective architecture that has enabled new business models, including Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The financial crisis might spell good news for cloud providers up and down the stack. According to recent articles, IDC predicts that the current economic crisis in the U.S. will contribute to cloud computing growth over the next five years and that spending on IT cloud services will reach $42 billion by 2012. Frank Gens, senior vice president and chief analyst at IDC, believes, "The disruptive vectors of the market will be among the highest growth sectors in 2009 as their advantages are magnified in a down economy, and suppliers who slow down their transformation will limit long-term viability and miss near-term growth."


John Horrigan at Pew Research offered this look at cloud adoption in the consumer space, which had been driving the growth of the big public platforms long before the economic downturn. As IT organizations are pressured to find yet more efficiency, it will be interesting to see how quickly they gain enough confidence in providers to follow consumers to the cloud. In a snap poll of attendees conducted this week at Gartner's Data Center Conference, the results appear promising.

Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities.

Cloud computing is at an early stage, with a motley crew of providers large and small delivering a slew of cloud-based services, from full-blown applications to storage services to spam filtering. Yes, utility-style infrastructure providers are part of the mix, but so are SaaS (software as a service) providers such as Salesforce.com. Today, for the most part, IT must plug into cloud-based services individually, but cloud computing aggregators and integrators are already emerging.

InfoWorld talked to dozens of vendors, analysts, and IT customers to tease out the various components of cloud computing. Based on those discussions, here's a rough breakdown of what cloud computing is all about:

1. SaaS
This type of cloud computing delivers a single application through the browser to thousands of customers using a multitenant architecture. On the customer side, it means no upfront investment in servers or software licensing; on the provider side, with just one app to maintain, costs are low compared to conventional hosting. Salesforce.com is by far the best-known example among enterprise applications, but SaaS is also common for HR apps and has even worked its way up the food chain to ERP, with players such as Workday. And who could have predicted the sudden rise of SaaS "desktop" applications, such as Google Apps and Zoho Office?

2. Utility computing
The idea is not new, but this form of cloud computing is getting new life from Amazon.com, Sun, IBM, and others who now offer storage and virtual servers that IT can access on demand. Early enterprise adopters mainly use utility computing for supplemental, non-mission-critical needs, but one day, they may replace parts of the datacenter. Other providers offer solutions that help IT create virtual datacenters from commodity servers, such as 3Tera's AppLogic and Cohesive Flexible Technologies' Elastic Server on Demand. Liquid Computing's LiquidQ offers similar capabilities, enabling IT to stitch together memory, I/O, storage, and computational capacity as a virtualized resource pool available over the network.

3. Web services in the cloud
Closely related to SaaS, Web service providers offer APIs that enable developers to exploit functionality over the Internet, rather than delivering full-blown applications. They range from providers offering discrete business services -- such as Strike Iron and Xignite -- to the full range of APIs offered by Google Maps, ADP payroll processing, the U.S. Postal Service, Bloomberg, and even conventional credit card processing services.

4. Platform as a service
Another SaaS variation, this form of cloud computing delivers development environments as a service. You build your own applications that run on the provider's infrastructure and are delivered to your users via the Internet from the provider's servers. Like Legos, these services are constrained by the vendor's design and capabilities, so you don't get complete freedom, but you do get predictability and pre-integration. Prime examples include Salesforce.com's Force.com, Coghead and the new Google App Engine. For extremely lightweight development, cloud-based mashup platforms abound, such as Yahoo Pipes or Dapper.net.

5. MSP (managed service providers)
One of the oldest forms of cloud computing, a managed service is basically an application exposed to IT rather than to end-users, such as a virus scanning service for e-mail or an application monitoring service (which Mercury, among others, provides). Managed security services delivered by SecureWorks, IBM, and Verizon fall into this category, as do such cloud-based anti-spam services as Postini, recently acquired by Google. Other offerings include desktop management services, such as those offered by CenterBeam or Everdream.

6. Service commerce platforms
A hybrid of SaaS and MSP, this cloud computing service offers a service hub that users interact with. They're most common in trading environments, such as expense management systems that allow users to order travel or secretarial services from a common platform that then coordinates the service delivery and pricing within the specifications set by the user. Think of it as an automated service bureau. Well-known examples include Rearden Commerce and Ariba.

7. Internet integration
The integration of cloud-based services is in its early days. OpSource, which mainly concerns itself with serving SaaS providers, recently introduced the OpSource Services Bus, which employs in-the-cloud integration technology from a little startup called Boomi. SaaS provider Workday recently acquired another player in this space, CapeClear, an ESB (enterprise service bus) provider that was edging toward b-to-b integration. Way ahead of its time, Grand Central -- which wanted to be a universal "bus in the cloud" to connect SaaS providers and provide integrated solutions to customers -- flamed out in 2005.

Other cloud computing resources:
  1. http://en.community.dell.com/blogs/cloudcomputing/default.aspx
  2. http://www.microsoft.com/presspass/press/2008/oct08/10-27PDCDay1PR.mspx
  3. http://www.microsoft.com/azure/default.mspx
  4. http://en.wikipedia.org/wiki/Cloud_computing
Further resources can be found by searching for the keyword "Cloud Computing".

This is not just for show, friends; it is the future trend of computing, and we must be able and ready to adopt it.
Cheers, frizzy2008.