Microsoft Windows 2000 Overview

 

Microsoft Corporation

1998

Summary: With the Microsoft® Windows® 2000 operating system, Microsoft is building a model for full-scale Internet and intranet computing that is much simpler to use and manage than anything available today. Within corporations and on the Internet, Information Systems groups will be able to scale their service offerings, increase robustness, and ensure availability using "building blocks" of servers without worrying about how to manage individual machines.

Contents

Introduction
Windows 9x and Windows NT: Integrating the Network and the Operating System
Making It Easier to Use and Manage the Network
Windows DNA: The Foundation for Next-Generation Applications
For More Information

Introduction

The computing industry has long hailed client/server computing as a promising alternative to mainframes. The idea was that inexpensive servers would replace the centralized model of mainframe computing, particularly in building line-of-business solutions. Mainframes were, in a word, expensive—to buy, to install, and to maintain. A single machine meant a single point of failure, requiring that users always share resources, even if they were doing unrelated tasks; it also meant that to add more users (to "scale") you eventually had to replace your mainframe with an even bigger (and more expensive) machine.

As client/server computing gained momentum, however, it introduced a new set of costs. Instead of working together as a single unit, servers were actually deployed like individual, small-scale mainframes; as a result, the more servers there were, the harder it was to manage a network. Writing applications that insulated users and administrators from dealing with multiple servers was difficult, in particular because the operating system and network services were not completely integrated. Users looking for a specific piece of information had to know on which machine it was physically stored. In reality, the user and administrator experience under the client/server model became more complicated than it had been under the centralized mainframe model.

Even so, the transition to client/server produced many benefits—in particular, efficiency gains and increased flexibility in the use of resources. For example, client/server broke the management load into smaller pieces. Users had their own desktop machines for independent tasks like writing letters that didn't burden shared computing resources the way they did on the mainframe. Yet users could access shared resources, like printers, file servers, electronic mail, and databases, through servers on the network.

Windows 9x and Windows NT: Integrating the Network and the Operating System

The client/server model drove the first steps toward integrating the network into the operating system. It motivated the designers of the Microsoft Windows® and Microsoft Windows NT operating systems to solve many important problems associated with networked PCs. For example:

  • Managing security on servers is logically centralized, which greatly simplifies administration. Many administrative functions can be executed once for an entire group or "domain" of servers, making it unnecessary to manage all aspects of the network environment on a per-server basis.
  • The operating system simplifies the naming of resources for users, administrators, and programmers. All applications can use the same naming conventions, defined by the operating system, for users, groups, distribution lists, domains, printers, workstations, and servers. Users and administrators benefit directly from this simplification.
  • Applications can fully integrate with operating system security services. To control access to their services and resources, applications can take advantage of Windows NT security services. This frees the user from having to log on more than once and frees administrators from adding users to multiple application-specific security databases. Programmers, in turn, do not need to create a custom database of users and perform authentication and access validation on their own.
  • Accessing low-level network protocols in programs is now much easier. Much of the complexity of communications protocols is hidden from sight: Users don't need to know whether they're running on TCP/IP, IPX/SPX, or some other protocol. To access the network, applications and services can simply rely on well-defined system services such as Distributed COM (DCOM) or Remote Procedure Calls (RPC), without concern for protocol specifics (see the sketch below).
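
The protocol transparency in the last bullet is easy to see in a short script. The following is a minimal Python sketch, assuming a hypothetical file share named \\FILESRV\public that the user has permission to read:

```python
# A minimal sketch of protocol-transparent access to a shared resource.
# The server and share names (\\FILESRV\public) are hypothetical. The
# operating system's redirector picks and drives the underlying transport
# (TCP/IP, IPX/SPX, and so on), so the code never has to mention it.

from pathlib import Path

doc = Path(r"\\FILESRV\public\readme.txt")   # a UNC name, no protocol details
print(doc.read_text()[:200])                 # the OS locates and fetches it
```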

Windows has made tremendous progress toward becoming a great platform for distributed computing. However, computing on a network of PCs is not yet where it could be: as simple in concept as computing on a single large machine, but more economical in its execution. Distributed computing has promised to make servers work together seamlessly as a unit, yielding better fault-tolerance and easier, less-expensive scalability than can be achieved with a physically centralized computing environment. Businesses need the best of both the centralized and decentralized computing worlds. They need the benefits of the client/server model at a drastically reduced cost.

Establishing the ideal distributed computing environment requires completely integrating key networking services into the operating system. This will solve a number of difficult problems that users and administrators currently face in dealing with networked PCs:

  • Finding information is too hard. As information becomes more dispersed, and as the control and management of information and resources becomes more decentralized, users are having a harder time finding what they're looking for. People spend too much time trying to understand and navigate the physical structure of the network, which has little to do with the information they are trying to find. What's more, they must sift through an overwhelming volume of information to find the exact pieces they're seeking.
  • There are too many directories and security systems to manage. Because the operating systems do not yet offer server applications a generalized, shared "schema" (see Endnote 1), many applications invent their own directory-like services or create application-specific add-on directories outside the one provided by the system. This leads to management headaches. For example, the administrator bears the burden of a system in which each application (database, file system, mail system, and security system) handles replication of information independently.
  • The cost of managing a large distributed network grows at least as fast as the network. The task of managing an ever-growing number of clients and servers is becoming more complex and costly. It's unrealistic to expect that the same model used for managing smaller networks that use one or two servers will work for large corporate networks or for a massive, public network like the Internet. The way networks of workstations and servers are managed must fundamentally change.
  • Servers and networks of servers are still not robust enough. Individual servers may behave like miniature mainframes, but they are more susceptible to certain kinds of failure. Networked servers can and must achieve a very high level of availability at a significantly lower cost than mainframes.
  • Large-scale public and enterprise networks need more flexible and comprehensive security options. Intranet, extranet (between businesses), and Internet network scenarios each have different security requirements. Security in a very large company needs to scale well to a large number of "domains." While many intranets are not as restrictive internally as public networks, they do need protection from outside access, and they require strong resistance to attacks. Between businesses and on the Internet, security needs to be very restrictive and it needs to scale well to a large number of users.
  • Incremental scaling is still too hard. When a server becomes overloaded, it is not always obvious how to apply additional servers to share the load effectively. Moving specific functions (for example, the database on a Web server) to another machine will not divide the processing load optimally or evenly. Instead of placing the responsibility of load-balancing on administrators, the ideal distributed environment should "auto-magically" readjust computing load when a server is added to the network.

Solving these problems requires a rock-solid foundation for building and running distributed services and applications. The Windows NT operating system is that foundation. Windows 2000 includes a number of enhanced features that will yield immediate benefits for end users and administrators:

  • Directory Services
  • Distributed Security and File System
  • Centralized Management Infrastructure
  • Power Management and Plug and Play
  • Support for scalable hardware and Very Large Memory

Complementary technologies include the following:

  • Clustering
  • Message Queuing
  • Component and Transaction Services

Making It Easier to Use and Manage the Network

In a distributed environment, Windows 2000 will be more reliable, more scalable, and substantially easier to deploy, manage, and use than any distributed system created to date.

  • The Directory Services in Windows 2000 make it easy to find information. Administrators, users, and applications can find information about people, printers, files, and other shared resources in a single place—the Active Directory—even if the resources reside on many different physical servers. Objects are assigned to logical workgroups or other organizational units rather than to individual servers. Therefore, users won't have to change how they find and name objects like files when administrators move them to different physical servers. This "location independence" is fundamental to making distributed computing simple to use and easy to manage.

    As with other system services, Windows 2000 defines standard interfaces to the Active Directory (the Active Directory Service Interfaces, or ADSI) so that other directories can integrate with it. In fact, the Active Directory can present information from multiple sources as a single system directory object. Some of those sources can even be dynamic—for example, the printer status or the contents of the print queue.

    The native Windows NT protocol for directory access is the industry-standard Lightweight Directory Access Protocol (LDAP), which allows for extensive interoperability with directory services from other vendors (a query sketch follows Figure 1).

    Figure 1. The Windows 2000 Directory Services provide a logically centralized location for finding shared resources. This insulates users and administrators from having to navigate the physical structure of the network. Instead, the Directory gives them a unified view of the network that doesn't change as the physical network structure evolves.
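
Because the directory's native access protocol is LDAP, ordinary LDAP client code can query it. The following is a minimal sketch, assuming the third-party Python ldap3 package, a hypothetical domain controller named dc1.arcadiabay.example, and placeholder credentials; the object class and attribute names are meant to be illustrative of printer objects published in the directory:

```python
# A minimal sketch: search the directory over LDAP for color printers,
# wherever they physically reside. Host name, credentials, and naming
# context are hypothetical placeholders.

from ldap3 import Server, Connection, ALL

server = Server("dc1.arcadiabay.example", get_info=ALL)
conn = Connection(server, user="jdoe@arcadiabay.example",
                  password="...",          # placeholder credential
                  auto_bind=True)

conn.search(
    search_base="dc=arcadiabay,dc=example",
    search_filter="(&(objectClass=printQueue)(printColor=TRUE))",
    attributes=["printerName", "serverName", "location"],
)
for entry in conn.entries:
    print(entry.printerName, entry.serverName, entry.location)
```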

  • The Directory Services remove the need for numerous application-specific directories, making it easier to manage the network. Even the most demanding applications and services can take advantage of the Directory because it is extensible. Applications can add or change information in existing object "schema" or add new object classes. As applications and services evolve to make use of the Directory, it will naturally become the backbone for network management, especially in large enterprises.

    The Directory Services in Windows 2000 combine the best features of the Domain Name System (DNS) and X.500. For example, since the Directory uses DNS as the global backbone namespace, it uses DNS to look up LDAP services. It also integrates DNS with directory storage and replication. Though the Directory uses the X.500 data model, the implementation is more lightweight—Windows 2000 uses the LDAP protocol instead of DAP, and it uses a combination of Kerberos and public key security that is tightly integrated with Windows NT and Microsoft BackOffice® family services, including file and print.

  • The Microsoft Management Console greatly simplifies management of network services and applications. Administrators will be able to manage all services, applications, users, and resources in an enterprise network through the Microsoft Management Console (MMC), a comprehensive tool that ships with Windows 2000. MMC operates over the Directory. Applications and services can expose information and operations to the Windows NT management framework as Directory objects. Special-purpose management tools can integrate with the Microsoft Management Console user interface through extensions called "snap-ins."

    Figure 2. The Microsoft Management Console allows administrators to manage all system services and applications from a single, customizable, and comprehensive tool.

    The management console serves as the host for scripts written in any language that can operate over any combination of objects in the Directory, regardless of what vendor, service, or application created those objects, and regardless of where those objects physically reside. In addition, administrators can create, save, and exchange any number of console configurations, which enables very concrete delegation of responsibilities and tasks. The goal is to let administrators create a single, customized view of all management activities.

    Figure 3. The Microsoft Management Console snap-ins allow the administrator to create a single, custom view for all management activities.

  • The Zero Administration Initiative for Windows allows for completely centralized management of all workstations in an enterprise. Instead of forcing the administrator to manage all enterprise workstations remotely, one at a time, and in real time, Windows 2000 and Windows 98 take a different approach. All configuration and state information for users, applications, and machines is centralized so administrators can more easily access and manipulate it. Once user and machine group profiles are established, administrators don't need to upgrade the operating system manually or install applications onto individual machines. Instead, a policy dictates which users require which application software, and which computers require which operating system software. The system then updates application and system software automatically—and keeps it up to date, even as policy evolves.

    Storing all workstation configuration and state information on a central server means that users can effectively "roam" from one physical workstation to the next. When they log on to any machine with the same user name and password, they see the same desktop configuration, set of applications, and group of files. Administrators can also replace a user's client machine without backup and restore operations—the system simply regenerates the correct software installations based on the user's profile and the requirements of the hardware. This functionality is called IntelliMirror™ management technology. In general, Windows 2000 keeps each user's local data in sync with the network server. If the network goes down while the user is editing a document, the user can still work on a local copy. When the user's machine connects back to the network, the operating system reconciles any changes made to the local data.

    Installing and upgrading software will become much easier as applications take advantage of installer technology that will be an integral part of the operating system itself. Using third-party authoring tools, application vendors define a "package" that gives the Windows 2000 installation service (MSI) the names of files, shortcuts, lists of features and related components, and any settings that need to be written to the user's registry. Developers can create setup packages to run from CD-ROMs, an internal network, or the Internet. If it is ever necessary to uninstall the application, the system refers to the application's package to determine which files to remove from the user's hard drive and how to reset the registry. This is more reliable than using log files, which can get corrupted or deleted. In addition, it ensures that applications will not leave unneeded files on the hard drive. (A conceptual sketch of this policy-driven model follows.)
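
The policy-and-package model just described can be reduced to a small conceptual sketch. This is not the actual policy store or MSI package format; the group names and application names below are made up, and the point is only the reconciliation step the system performs automatically:

```python
# Conceptual sketch: policy maps user groups to required applications, and
# each machine is reconciled against that policy rather than administered by
# hand. Group and application names are hypothetical.

POLICY = {
    "Accounting":  {"office", "ledger-client"},
    "Engineering": {"office", "compiler", "cad"},
}

def reconcile(user_groups, installed):
    """Return (to_install, to_remove) so the machine matches current policy."""
    required = set().union(*(POLICY.get(g, set()) for g in user_groups))
    return required - installed, installed - required

to_add, to_drop = reconcile({"Engineering"}, installed={"office", "ledger-client"})
print("install:", to_add)   # {'compiler', 'cad'}
print("remove:", to_drop)   # {'ledger-client'}
```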

  • New system and network services will greatly increase resource availability and system robustness. All Windows 2000 services are designed to exploit distributed computing to enhance the system's resistance to failure. No single failure in the network will cause the core Windows NT distributed services to fail.

  • Clustering for Windows NT, which debuted as part of Windows NT 4.0, Enterprise Edition, provides fail-over support for paired servers (see Endnote 2). Should one server fail or require downtime for routine maintenance, the other takes over. When all is well, the clustered machines are free to process network requests independently, improving performance by load-balancing requests for server resources between the physical machines.
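
A rough sketch of the fail-over idea follows, with hypothetical node names and workloads; the real cluster service handles quorum, resource dependencies, and much more, so this only illustrates the heartbeat-and-takeover pattern:

```python
# Conceptual sketch of two-node fail-over: each node serves its own resource
# group, and if its partner stops answering heartbeats, the survivor takes
# the partner's group over as well. Node names and workloads are hypothetical.

def heartbeat_ok(node):
    """Placeholder for a real liveness probe (for example, a network ping)."""
    return node["alive"]

def monitor(pair):
    for node, partner in ((pair[0], pair[1]), (pair[1], pair[0])):
        if heartbeat_ok(node) and not heartbeat_ok(partner):
            node["serves"] |= partner["serves"]      # take over the workload
            partner["serves"] = set()
            print(f"{node['name']} took over for {partner['name']}")

nodes = [
    {"name": "NODE-A", "alive": True,  "serves": {"file share"}},
    {"name": "NODE-B", "alive": False, "serves": {"SQL database"}},
]
monitor(nodes)
print(nodes[0]["serves"])   # {'file share', 'SQL database'}
```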

  • The security protocols and services in Windows NT are safer, more flexible, and more scalable than ever. The updated security system in Windows 2000 offers stronger resistance to attacks. State-of-the-art public key (RSA) and secret key (Kerberos-based) technologies provide simple, integrated, and comprehensive security services. Secret key security allows for secure management of network resources and gives applications easy access to information stored in the Active Directory. Public key security is particularly important for very large enterprises, public networks, and interenterprise security. Although they address different security needs, public and secret key security share a single infrastructure within the Windows 2000 distributed security services. The security services easily scale from a small local area network, to a large corporate intranet, to a "virtual private network" across the Internet (see Figure 4), to the Internet at large.

    Figure 4. Using the Point-to-Point Tunneling Protocol (PPTP), businesses can connect private networks in different geographic locations across the public Internet instead of through expensive leased lines. Remote users can also access the corporate network through the Internet.

    The Windows 2000 security services incorporate X.509 v3 certificates, which are used during strong authentication to identify users trying to gain access to the directory. Security for X.509 v3 certificate holders is integrated with access checking for Windows 2000. The system maps an identifying property of each certificate to a property on a principal (an object used for security checking). Thus, when the system determines access rights for the principal (for example, during an access control list [ACL] check), it can use that principal to determine access rights to the directory.

    It is now possible for a component running on a server, such as a Web server, to impersonate user identities when accessing other servers, such as a SQL database, on their behalf. And Windows 2000 makes it possible for any application to use the system's public key services, for example, to encrypt an object so that only a specific recipient can read it, or to "sign" an object or message so that the recipient can securely verify its origin and integrity.
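
As an illustration of the signing idea just described, the sketch below uses the third-party Python cryptography package as a stand-in (the system's own public key services are not shown): it generates a key pair, signs a message, and lets a recipient verify the message's origin and integrity.

```python
# A hedged, minimal sketch of sign-and-verify with public key cryptography.
# This stands in for the operating system's public key services, which are
# not shown here; the message content is made up.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Purchase order #1047: 12 widgets"

# The sender signs with the private key...
signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# ...and any recipient holding the public key can verify origin and integrity.
try:
    key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature verified")
except InvalidSignature:
    print("message was altered or signed by someone else")
```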

    Figure 5. In Windows NT 4.0, trust relationships between "Arcadia Bay" and the domains "North America" and "Europe" must be established in all directions. In Windows 2000, trust relationships can be transitive through a parent domain (in this case "ArcadiaBay"), reducing the number of relationships the administrator needs to maintain.

    The addition of transitive trust relationships between individual domains makes a huge difference in the ability of Windows NT to scale to larger enterprises. Smaller "domains" don't need explicit trust relationships with all other domains in the larger organization. Instead, they can inherit trust relationships established by a parent domain at the root of the organization—thus greatly simplifying ongoing management of directory and security services.
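
A conceptual sketch of why transitive trust reduces administrative work, using the domain names from Figure 5 (spaces dropped for simplicity): each child domain records only its trust link to its parent, and trust between any two domains in the tree follows from walking those links.

```python
# Conceptual sketch: each domain stores one trust link (to its parent), and
# two domains trust each other if their parent chains meet in the same tree.

PARENT = {"NorthAmerica": "ArcadiaBay", "Europe": "ArcadiaBay", "ArcadiaBay": None}

def chain_to_root(domain):
    chain = []
    while domain is not None:
        chain.append(domain)
        domain = PARENT[domain]
    return chain

def trusts(a, b):
    """True if a chain of parent-child trust links connects the two domains."""
    return bool(set(chain_to_root(a)) & set(chain_to_root(b)))

print(trusts("NorthAmerica", "Europe"))   # True, transitively via "ArcadiaBay"
```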

  • Power Management, Plug and Play, and Instrumentation make hardware easier to manage. Microsoft has found that installing and configuring peripheral devices generates more support calls than any other issue across all types of customer sites, from OEM vendors and software houses to Fortune 500 banks and individual homes. When users obtain a new piece of hardware or upgrade their system, they often don't know which drivers to install to run which peripherals. They may choose the wrong driver, in which case the hardware won't work, or they may configure the right driver incorrectly, in which case the hardware still won't work.

Windows 2000 Plug and Play support will do the following:

  • Automatically and dynamically recognize hardware.
  • Allocate and reallocate hardware resources (interrupt requests).
  • Load the appropriate drivers for hardware it automatically detects.
  • Notify applications of device events (for example, the device arrives or goes away).
  • Provide interfaces so device drivers can interact with the Plug and Play system.
  • Coordinate power management for devices that support the Advanced Configuration and Power Interface (ACPI).

With Windows 2000, administrators can set power management policies—dictating that each user's desktop machine will fall into a sleep state when idle, or that PCs will remain on overnight for periodic maintenance such as application inventory, backup, software upgrades, or virus scans. Both the Plug and Play and Power Management initiatives give the operating system more control over resource allocation. Next-generation devices will control more behavior through software and less through ROM or BIOS instructions. This will create more consistency across all hardware configurations and make upgrading functionality less expensive.

In addition to Plug and Play and Power Management, Windows 2000 will make devices easier to manage through a structured set of instrumentation services that allow hardware devices to communicate status information to applications and to management software such as enterprise consoles and diagnostic utilities. For example, an instrumented disk drive could alert a management application that its spindle is wearing out, allowing administrators to replace the drive before it actually fails.
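
The instrumentation idea can be sketched as a simple publish-and-subscribe arrangement. The class, property names, and threshold below are hypothetical and are not the actual Windows 2000 instrumentation interfaces:

```python
# Conceptual sketch: an instrumented device publishes health properties, and
# management software subscribes and raises an alert when a property crosses
# a threshold. Names and numbers are hypothetical.

class InstrumentedDrive:
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def report(self, **status):
        for callback in self.subscribers:
            callback(self.name, status)

def management_console(device, status):
    if status.get("spindle_wear_percent", 0) > 80:
        print(f"ALERT: {device} is wearing out; schedule a replacement")

drive = InstrumentedDrive("PhysicalDrive0 on FILESRV")
drive.subscribe(management_console)
drive.report(spindle_wear_percent=87, temperature_c=41)
```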

  • Windows 2000 includes an improved storage infrastructure and in-the-box storage components to simplify storage management. For example, the new Disk Administrator enables scalable volume management using an improved partition layout. With Plug and Play, all partition and RAID configuration changes can be made online without rebooting, and can also be managed remotely. Drive letter restrictions are removed by new volume mount points, which allow you to mount any volume's namespace into a directory on another volume. Removable Media Services provides an arbitration layer for applications that want to access tape and other robotic media libraries, as well as track and manage offline media. With Remote Storage (also known as Hierarchical Storage Management), servers never "run out of space": Remote Storage maintains a free-space threshold on a disk by automatically migrating file data that hasn't been accessed recently to tape media, while keeping the directory entry and property information for a file online so that the file is automatically recalled when accessed (a conceptual sketch appears after the next paragraph). Numerous infrastructure improvements have been made to help third-party storage applications scale to large volumes.

    The Microsoft Indexing Service (formerly Microsoft Index Server) has been enhanced to use the new features of NTFS. Native properties, which can be associated with any file on a volume, are indexed and searchable. The NTFS change journal is used to detect file additions, deletions, and modifications, even when the service is not running; this eliminates costly file rescans and improves performance. NTFS sparse streams are used for index storage, which allows index optimizations to happen within the existing allocated disk space; index merges no longer require extra free disk space to run. Finally, in Windows 2000, programmers and script writers can use the native indexing and search features through Microsoft ActiveX® Data Objects (ADO) and OLE DB interfaces, which allows independent software vendors (ISVs) to integrate full-text and property searching into their applications.
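
The Remote Storage policy mentioned earlier in this section can be sketched conceptually: when free space falls below the administrator's threshold, the least recently accessed file data is moved to tape, leaving only a small online stub so the file is recalled automatically when next opened. The file names, sizes, and stub size below are hypothetical:

```python
# Conceptual sketch of threshold-driven migration (not the actual Remote
# Storage implementation). Oldest data goes to tape first, and each migrated
# file keeps a small stub online so it can be recalled transparently.

def enforce_threshold(files, free_bytes, threshold_bytes, stub_bytes=4096):
    """Return (names migrated to tape, resulting free space on the volume)."""
    migrated = []
    for f in sorted(files, key=lambda f: f["last_access"]):
        if free_bytes >= threshold_bytes:
            break
        free_bytes += f["size"] - stub_bytes      # data to tape, stub stays
        migrated.append(f["name"])
    return migrated, free_bytes

files = [
    {"name": "q1_report.doc", "size": 40_000_000, "last_access": "1998-01-05"},
    {"name": "budget.xls",    "size": 25_000_000, "last_access": "1998-06-01"},
]
print(enforce_threshold(files, free_bytes=10_000_000, threshold_bytes=60_000_000))
```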

  • Windows NT Server supports large memory configurations that boost application performance. As microprocessors grow increasingly fast and SMP designs grow increasingly capable, memory size must scale to keep pace. Version 4.0 of Windows NT Server, Enterprise Edition introduced a system configuration option that allows applications to access up to 3 GB of addressable memory. Windows 2000 allows for even greater levels of application performance in certain workloads by enabling application developers to access memory beyond the traditional 32-bit boundaries. Existing 32-bit applications can run unchanged on Windows NT Server, Enterprise Edition–based systems that have large physical memories. However, large-scale statistical analysis programs, mechanical engineering or fluid dynamics simulations, and I/O-intensive applications such as Relational Database Management Systems (RDBMSs) can access the Windows NT Enterprise Memory Architecture through a small set of APIs in order to optimize performance.
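
The windowing idea behind large memory support can be illustrated with a rough analogue: rather than addressing an entire data set at once, a 32-bit application maps a window of it into its address space when needed. The sketch below uses memory-mapped file windows in Python; it is only a conceptual stand-in, not the Windows NT Enterprise Memory Architecture APIs, and the data file path is hypothetical.

```python
# Conceptual sketch: map one "window" of a very large data file at a time
# instead of trying to address the whole thing. The requested window must lie
# within the file, and offsets stay aligned to the allocation granularity.

import mmap, os

WINDOW = mmap.ALLOCATIONGRANULARITY * 256        # size of one mapped window

def read_window(path, window_index):
    size = os.path.getsize(path)
    offset = window_index * WINDOW               # aligned by construction
    length = min(WINDOW, size - offset)
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ,
                       offset=offset) as view:
            return bytes(view[:64])              # touch a slice of that window

# Hypothetical multi-gigabyte data set:
# print(read_window(r"D:\data\observations.bin", window_index=3))
```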

With Windows 2000, Microsoft is building a model for full-scale Internet and intranet computing that is much simpler to use and manage than anything available today. Within corporations and on the Internet, Information Systems groups will be able to scale their service offerings, increase robustness, and ensure availability using "building blocks" of servers without worrying about how to manage individual machines. And while centralized planning can yield benefits down the road, it is not required; Windows 2000 allows for the real-world, "grass roots" growth of workgroups that can be merged into an enterprise network at a later time.

Windows DNA: The Foundation for Next-Generation Applications

The system services in Windows 2000 form the foundation of Microsoft's application architecture, Windows Distributed interNet Applications (DNA).

With the advent of the Internet and the Web, consumers and business partners are demanding applications that require an unprecedented distribution of data and logic. Customers want applications that are easy to install, that don't require local configuration, that can be used offline, and that work unchanged in a wide range of scenarios—large, small, corporate, and home. What's more, they want the benefits of the Web while continuing to take advantage of their existing hardware and software investments:

  • The Web gives businesses the ability to develop and deploy applications rapidly—to streamline their business activities and meet the needs of their customers. Given the blinding speed of "Internet time," no one wants to wait through a two-year product cycle for a solution.
  • Using Web technologies, IT managers can build a single network architecture that serves both their intranet and Internet needs.
  • The Web paradigm gives them logically centralized deployment and management that is physically decentralized—the benefit of a mainframe design with the cost savings of commodity hardware.
  • It is easy for users to navigate the Web and to use Web-based applications. The Web is familiar, approachable, and accessible.

Customer expectations, driven by the Web, are creating new demands on applications and developers. Applications have to be more scalable. They have to operate with existing applications, including those running on other platforms. They have to be available on a nonstop basis (and follow a solid course of action in case they do fail). What's more, they must not only be easy to install but inexpensive to upgrade and manage. To create these kinds of applications, developers must build software in components, register their applications in a centralized directory, use transactions for performance and fault-tolerance, handle data access and storage that is more sophisticated than simple files, and write code that functions well even when the network is down. That's a very tall order, especially if you are expected to design and build an application within a matter of months.

Building these kinds of applications has been, up to this point, much too hard. In the past, every development project required that teams set aside a considerable portion of their time—an average of 40 percent according to research conducted last year by Patricia Seybold—to write low-level code that required understanding the most minute details of network protocols, sockets, and remote procedure calls. Fortunately, with the advent of Windows 2000 and the component-based services described in this paper, developers will have at their disposal ready-made services that handle traditionally difficult programming tasks—distributed execution, authentication, automatic installation, and so on. Middleware services will provide easy access to services such as transactions, message queuing, threading, pooling, clustering, and data access and manipulation.

The services built into Windows 2000 will help Web-based computing transcend HTML and bring the traditional benefits of the Web to Windows-based applications as well. Just as the ease of developing with HTML, HTTP, and scripting languages has driven an explosion in applications and services, the services of Windows 2000 will make it remarkably easier to create applications.

For More Information

For the latest information on Windows NT Server, check out our World Wide Web site at www.microsoft.com/ntserver or the Windows NT Server Forum on MSN™, The Microsoft Network (GO WORD: MSNTS).

Endnotes

  1. An object "schema" is a definition of its properties and behavior. For example, the schema for a printer gives its description, location, name, driver, print queue, and the operations that can be performed on it, such as pause or resume.
  2. Later releases will allow for additional servers in a cluster and will add advanced features for automatic load-balancing across machines in a cluster.