Project Scope


Please use these chapters for referencing: Meyers, M. (2018). CompTIA network+ certification all-in-one exam guide: Exam N10-007 (7th ed.). New York, NY: McGraw Hill Education. Available in the courseroom via the VitalSource Bookshelf link.

• Chapter 1, “Network Models,” pages 2–41.
• Chapter 18, “Managing Risk,” pages 627–640.

CHAPTER 1

Network Models

The CompTIA Network+ certification exam expects you to know how to

• 1.2 Explain devices, applications, protocols and services at their appropriate OSI layers

• 1.3 Explain the concepts and characteristics of routing and switching

To achieve these goals, you must be able to

• Describe how models such as the OSI seven-layer model and the TCP/IP model help technicians understand and troubleshoot networks

• Explain the major functions of networks with the OSI seven-layer model

• Describe the major functions of networks with the TCP/IP model

The CompTIA Network+ certification challenges you to understand virtually every aspect of networking—not a small task. Networking professionals use one of two methods to conceptualize the many parts of a network: the Open Systems Interconnection (OSI) seven-layer model and the Transmission Control Protocol/Internet Protocol (TCP/IP) model.

These models provide two tools that make them essential for networking techs. First, the OSI and TCP/IP models provide powerful mental tools for diagnosing problems. Understanding the models enables a tech to determine quickly at what layer a problem can occur and helps him or her zero in on a solution without wasting a lot of time on false leads. Second, these models provide a common language techs use to describe specific network functions. Figure 1-1 shows product information for a Cisco-branded advanced networking device. Note the use of the terms “L3” and “layer 7.” These terms directly reference the OSI seven-layer model. Techs who understand the OSI model understand what those numbers mean, giving them a quick understanding of what the device provides to a network.

Figure 1-1 Using OSI terminology in device documentation

This chapter looks first at models in general and how models help conceptualize and troubleshoot networks. The chapter then explores both the OSI seven-layer model and the TCP/IP model to see how they help clarify network architecture for techs.

Cisco and Certifications

Cisco Systems, Inc. is famous for making many of the “boxes” that interconnect networks all over the world. It’s not too far of a stretch to say that Cisco helps power a huge portion of the Internet. These boxes are complicated to configure, requiring a high degree of technical knowledge.

To address this need, Cisco offers a series of certifications. The entry-level certification, for example, is the Cisco Certified Entry Networking Technician (CCENT). The next step is the Cisco Certified Network Associate (CCNA) Routing and Switching.

Go to Cisco’s certification Web site and compare the objectives for the two certifications with what you learned about CompTIA Network+ in the “Introduction” of this book. Ask yourself this question: could you study for CCENT or CCNA R&S and CompTIA Network+ simultaneously?

Historical/Conceptual

Working with Models

Networking is hard. It takes a lot of pieces, both hardware and software, all working incredibly quickly and in perfect harmony, to get anything done. Just making Google appear in your Web browser requires millions of hours in research, development, and manufacturing to create the many pieces to successfully connect your system to a server somewhere in Googleland and to enable them to communicate. Whenever we encounter highly complex technologies, we need to simplify the overall process by breaking it into discrete, simple, individual processes. We do this using a network model.

Biography of a Model

Figure 1-3 Simple model airplane

Network Models

Network models face similar challenges. What functions define all networks? What details can you omit without rendering the model inaccurate? Does the model retain its usefulness when describing a network that does not employ all the layers?

In the early days of networking, different manufacturers made unique types of networks that functioned well. Part of the reason they worked was that every network manufacturer made everything. Back then, a single manufacturer provided everything for a customer when the customer purchased a network solution: all the hardware and all the software in one complete and expensive package. Although these networks worked fine as stand-alone networks, the proprietary nature of the hardware and software made it difficult—to put it mildly—to connect networks of multiple manufacturers. To interconnect networks and therefore improve the networking industry, someone needed to create a guide, a model, that described the functions of a network. Using this model, the people who made hardware and software could work together to make networks that worked together well.

Two models tend to stand out: the OSI model and the TCP/IP model. The OSI model is covered on the CompTIA Network+ exam. The TCP/IP model is not on the exam, but it is so common and important that you should know it as well. Let's look at both.

NOTE The International Organization for Standardization (ISO) created the OSI seven-layer model. ISO may look like a misspelled acronym, but it’s actually a word, derived from the Greek word isos, which means “equal.” The International Organization for Standardization sets standards that promote equality among network designers and manufacturers, thus ISO.

The best way to learn the OSI and TCP/IP models is to see them in action. For this reason, I’ll introduce you to a small network that needs to copy a file from one computer to another. This example goes through each of the OSI and TCP/IP layers needed to copy that file, and I explain each step and why it is necessary. By the end of the chapter, you should have a definite handle on using either of these models as a tool to conceptualize networks. You’ll continue to build on this knowledge throughout the book and turn your OSI and TCP/IP model knowledge into a powerful troubleshooting tool.

The OSI Seven-Layer Model in Action

Each layer in the OSI seven-layer model defines an important function in computer networking, and the protocols that operate at that layer offer solutions to those functions. Protocols are sets of clearly defined rules, regulations, standards, and procedures that enable hardware and software developers to make devices and applications that function properly at a particular layer. The OSI seven-layer model encourages modular design in networking, meaning that each layer has as little to do with the operation of other layers as possible. Think of it as an automobile assembly line. The guy painting the car doesn't care about the gal putting doors on the car—he expects the assembly line process to make sure the cars he paints have doors. Each layer on the model trusts that the other layers on the model do their jobs.

The OSI seven layers are

• Layer 7 Application

• Layer 6 Presentation

• Layer 5 Session

• Layer 4 Transport

• Layer 3 Network

• Layer 2 Data Link

• Layer 1 Physical

The OSI seven layers are not laws of physics—anybody who wants to design a network can do it any way he or she wants. Although many protocols fit neatly into one of the seven layers, others do not.
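To make the seven layers more concrete, here is a small Python sketch that maps each layer to a few commonly cited example protocols and devices. The specific examples are illustrative assumptions, not an official or exhaustive assignment (and, as noted above, some protocols don't fit neatly into a single layer).

```python
# Illustrative mapping of OSI layers to commonly cited protocols and devices.
# The examples are chosen for illustration, not an official assignment.
OSI_LAYERS = {
    7: ("Application",  ["HTTP", "FTP", "DNS"]),
    6: ("Presentation", ["TLS encryption", "data formats such as JPEG"]),
    5: ("Session",      ["session setup/teardown, e.g., NetBIOS"]),
    4: ("Transport",    ["TCP", "UDP"]),
    3: ("Network",      ["IP", "routers"]),
    2: ("Data Link",    ["Ethernet frames", "MAC addresses", "switches"]),
    1: ("Physical",     ["UTP cabling", "fiber", "radio waves", "hubs"]),
}

def describe(layer: int) -> str:
    name, examples = OSI_LAYERS[layer]
    return f"Layer {layer} ({name}): e.g., {', '.join(examples)}"

if __name__ == "__main__":
    for n in range(7, 0, -1):   # print from Layer 7 down to Layer 1
        print(describe(n))
```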

Welcome to MHTechEd!

Mike's High-Tech Educational Supply Store and Post Office, or MHTechEd for short, has a small network of PCs running Windows, a situation typical of many small businesses today. Windows runs just fine on a PC unconnected to a network, but it also comes with all the network software it needs to connect to a network. All the computers in the MHTechEd network are connected by special network cabling.

As in most offices, virtually everyone at MHTechEd has his or her own PC. Figure 1-4 shows two workers, Janelle and Dana, who handle all the administrative functions at MHTechEd. Because of the kinds of work they do, these two often need to exchange data between their two PCs. At the moment, Janelle has just completed a new employee handbook in Microsoft Word, and she wants Dana to check it for accuracy. Janelle could transfer a copy of the file to Dana's computer by the tried-and-true Sneakernet method—saving the file on a flash drive and walking it over to her—but thanks to the wonders of computer networking, she doesn't even have to turn around in her chair. Let's watch in detail each piece of the process that gives Dana direct access to Janelle's computer, so she can copy the Word document from Janelle's system to her own.

Let's Get Physical—Network Hardware and Layers 1–2

Clearly the network needs a physical channel through which it can move bits of data between systems. Most networks use a cable like the one shown in Figure 1-5. This cable, known in the networking industry as unshielded twisted pair (UTP), usually contains four pairs of wires that can transmit and receive data.

Figure 1-5 UTP cabling

Another key piece of hardware the network uses is a special box-like device that handles the flow of data from each computer to every other computer (Figure 1-6). This box is often tucked away in a closet or an equipment room. (The technology of the central box has changed over time. For now, let’s just call it the “central box.” I’ll get to variations in a bit.) Each system on the network has its own cable that runs to the central box. Think of the box as being like one of those old-time telephone switchboards, where operators created connections between persons who called in wanting to reach other telephone users.

Figure 1-6 Typical central box

Layer 1 of the OSI model defines the method of moving data between computers, so the cabling and central box are part of the Physical layer (Layer 1). Anything that moves data from one system to another, such as copper cabling, fiber optics, even radio waves, is part of the OSI Physical layer. Layer 1 doesn’t care what data goes through; it just moves the data from one system to another system. Figure 1-7 shows the MHTechEd network in the OSI seven-layer model thus far. Note that each system has the full range of layers, so data from Janelle’s computer can flow to Dana’s computer. (I’ll cover what a “hub” is shortly.)

Into the Central Box

When a system sends a frame out on the network, the frame goes into the central box. What happens next depends on the technology of the central box.

In the early days of networking, the central box was called a hub. A hub was a dumb device, essentially just a repeater. When it received a frame, the hub made an exact copy of that frame, sending a copy of the original frame out of all connected ports except the port on which the message originated.

The interesting part of this process was when the copy of the frame came into all the other systems. I like to visualize a frame sliding onto the receiving NIC’s “frame assembly table,” where the electronics of the NIC inspected it. (This doesn’t exist; use your imagination!) Here’s where the magic took place: only the NIC to which the frame was addressed would process that frame—the other NICs simply dropped it when they saw that it was not addressed to their MAC address. This is important to appreciate: with a hub, every frame sent on a network was received by every NIC, but only the NIC with the matching MAC address would process that frame (Figure 1-19).

Figure 1-19 Incoming frame!

Later networks replaced the hub with a smarter device called a switch. Switches, as you’ll see in much more detail as we go deeper into networking, filter traffic by MAC address. Rather than sending all incoming frames to all network devices connected to it, a switch sends the frame only to the interface associated with the destination MAC address.
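The behavioral difference between a hub and a switch can be sketched in a few lines of Python. This is purely a conceptual model: the port numbers, frame fields, and learning logic are simplified assumptions, not vendor code.

```python
# Conceptual sketch: hub vs. switch forwarding (simplified; not vendor code).
BROADCAST = "FF-FF-FF-FF-FF-FF"

def hub_forward(frame, ports, in_port):
    # A hub repeats the frame out every port except the one it came in on.
    return [p for p in ports if p != in_port]

class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}          # learned MAC address -> port

    def forward(self, frame, in_port):
        # Learn the sender's location, then filter by destination MAC.
        self.mac_table[frame["src"]] = in_port
        dst = frame["dst"]
        if dst == BROADCAST or dst not in self.mac_table:
            return [p for p in self.ports if p != in_port]   # flood
        return [self.mac_table[dst]]                          # one port only

sw = Switch(ports=[1, 2, 3, 4])
sw.forward({"src": "00-11-22-33-44-55", "dst": BROADCAST}, in_port=1)
print(sw.forward({"src": "AA-BB-CC-DD-EE-FF", "dst": "00-11-22-33-44-55"},
                 in_port=2))    # -> [1], only the matching port gets the frame
```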

FCS in Depth

All FCSs are only 4 bytes long, yet the wired frame carries at most 1500 bytes of data. How can 4 bytes tell you if all 1500 bytes in the data are correct? That's the magic of the math of the CRC. Without going into the grinding details, think of the CRC as just the remainder of a division problem. (Remember learning remainders from division back in elementary school?) The NIC sending the frame does a little math to make the CRC. Using binary arithmetic, it works a division problem on the data using a divisor called a key. The result of this division is the CRC. When the frame gets to the receiving NIC, it divides the data by the same key. If the receiving NIC's answer is the same as the CRC, it knows the data is good; if the answers don't match, the frame is dropped.
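A minimal sketch of that check, using the CRC-32 function in Python's standard library to stand in for the math the NIC does in hardware: the sender computes a 4-byte value over the data and appends it as the FCS, and the receiver recomputes it and compares.

```python
import zlib

def build_frame(payload: bytes) -> bytes:
    # Sender: compute the 4-byte CRC-32 over the data and append it (the FCS).
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs

def check_frame(frame: bytes) -> bool:
    # Receiver: recompute the CRC over the data and compare with the FCS.
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

frame = build_frame(b"employee handbook draft")
print(check_frame(frame))                     # True: data arrived intact
corrupted = b"x" + frame[1:]                  # flip the first byte in transit
print(check_frame(corrupted))                 # False: frame would be dropped
```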

Getting the Data on the Line

The process of getting data onto the wire and then picking that data off the wire is amazingly complicated. For instance, what happens to keep two NICs from speaking at the same time? Because all the data sent by one NIC is read by every other NIC on the network, only one system could speak at a time in early wired networks. Networks use frames to restrict the amount of data a NIC can send at once, giving all NICs a chance to send data over the network in a reasonable span of time. Dealing with this and many other issues requires sophisticated electronics, but the NICs handle these issues completely on their own without our help. Thankfully, the folks who design NICs worry about all these details, so we don't have to!

Getting to Know You

Using the MAC address is a great way to move data around, but this process raises an important question. How does a sending NIC know the MAC address of the NIC to which it's sending the data? In most cases, the sending system already knows the destination MAC address because the NICs had probably communicated earlier, and each system stores that data. If it doesn't already know the MAC address, a NIC may send a broadcast onto the network to ask for it. The MAC address of FF-FF-FF-FF-FF-FF is the Layer 2 broadcast address—if a NIC sends a frame using the broadcast address, every single NIC on the network will process that frame. That broadcast frame's data will contain a request for a system's MAC address. Without knowing the MAC address to begin with, the requesting computer will use an IP address to pick the target computer out of the crowd. The system with the MAC address your system is seeking will read the request in the broadcast frame and respond with its MAC address. (See “IP—Playing on Layer 3, the Network Layer” later in this chapter for more on IP addresses and packets.)
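Conceptually, the exchange looks like the sketch below; this is the idea behind ARP, which comes up again with IP later. The addresses and data structures are simplified assumptions for illustration.

```python
# Conceptual sketch of MAC discovery via broadcast (the idea behind ARP).
BROADCAST = "FF-FF-FF-FF-FF-FF"

hosts = {   # IP address -> MAC address of each NIC on the local network
    "192.168.1.10": "00-11-22-33-44-55",   # Janelle
    "192.168.1.11": "AA-BB-CC-DD-EE-FF",   # Dana
}

arp_cache = {}   # what the sender already learned from earlier conversations

def resolve_mac(target_ip: str) -> str:
    if target_ip in arp_cache:               # already known, no broadcast needed
        return arp_cache[target_ip]
    # Otherwise broadcast: every NIC processes the frame, but only the host
    # that owns target_ip answers with its MAC address.
    for ip, mac in hosts.items():
        if ip == target_ip:
            arp_cache[target_ip] = mac
            return mac
    raise LookupError(f"no host answered for {target_ip}")

print(resolve_mac("192.168.1.11"))   # learned via broadcast, then cached
```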

The Complete Frame Movement

Now that you've seen all the pieces used to send and receive frames, let's put these pieces together and see how a frame gets from one system to another. The basic send/receive process is as follows.

First, the sending system’s operating system hands some data to its NIC. The NIC builds a frame to transport that data to the receiving NIC (Figure 1-20).

Figure 1-20 Building the frame

NIC and Layers

Most networking materials that describe the OSI seven-layer model put NICs squarely into the Data Link layer of the model. It's at the MAC sublayer, after all, that data gets encapsulated into a frame, destination and source MAC addresses get added to that frame, and error checking occurs. What bothers most students with placing NICs solely in the Data Link layer is the obvious other duty of the NIC—putting the ones and zeroes on the network cable for wired networks and in the air for wireless networks. How much more physical can you get?

Many teachers will finesse this issue by defining the Physical layer in its logical sense—that it defines the rules for the ones and zeroes—and then ignore the fact that the data sent on the cable has to come from something. The first question when you hear a statement like that—at least to me—is, “What component does the sending?” It's the NIC, of course, the only device capable of sending and receiving the physical signal.

Network cards, therefore, operate at both Layer 2 and Layer 1 of the OSI seven-layer model. If cornered to answer one or the other, however, go with the more common answer, Layer 2.

Beyond the Single Wire—Network Software and Layers 3–7

Getting data from one system to another in a simple network (defined as one in which all the computers connect to one switch) takes relatively little effort on the part of the NICs. But one problem with simple networks is that computers need to broadcast to get MAC addresses. It works for small networks, but what happens when the network gets big, like the size of the entire Internet? Can you imagine millions of computers all broadcasting? No data could get through.

Equally important, data flows over the Internet using many technologies, not just Ethernet. These technologies don't know what to do with Ethernet MAC addresses. When networks get large, you can't use the MAC addresses anymore.

Large networks need a logical addressing method, like a postal code or telephone numbering scheme, that ignores the hardware and enables you to break up the entire large network into smaller networks called subnets. Figure 1-26 shows two ways to set up a network. On the left, all the computers connect to a single switch. On the right, however, the LAN is separated into two five-computer subnets.

Figure 1-26 LLC and MAC, the two parts of the Data Link layer

To move past the physical MAC addresses and start using logical addressing requires some special software called a network protocol. Network protocols exist in every operating system. A network protocol not only has to create unique identifiers for each system, but also must create a set of communication rules for issues like how to handle data chopped up into multiple packets and how to ensure those packets get from one subnet to another. Let's take a moment to learn a bit about the most famous network protocol—TCP/IP—and its unique universal addressing system.
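As a preview of how that logical addressing works, Python's standard ipaddress module can show whether two systems sit in the same subnet; the addresses and mask below are illustrative assumptions, not values from the figure.

```python
import ipaddress

# Two small subnets, loosely modeled on the figure; addresses are illustrative.
subnet_a = ipaddress.ip_network("192.168.1.0/24")
subnet_b = ipaddress.ip_network("192.168.2.0/24")

janelle = ipaddress.ip_address("192.168.1.10")
dana    = ipaddress.ip_address("192.168.2.20")

print(janelle in subnet_a)   # True:  same subnet, deliver directly
print(dana in subnet_a)      # False: different subnet, hand off to a router
print(dana in subnet_b)      # True
```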

EXAM TIP MAC addresses are also known as physical addresses.

It's important to appreciate that the TCP/IP model doesn't have a standards body to define the layers. Because of this, there are a surprising number of variations on the TCP/IP model.

A great example of this lack of standardization is the Link layer. Without a standardizing body, we can’t even agree on the name. While “Link layer” is extremely common, the term “Network Interface layer” is equally popular. A good tech knows both of these terms and understands that they are interchangeable. Notice also that, unlike the OSI model, the TCP/IP model does not identify each layer with a number.

The version I use is concise, having only four layers, and many important companies, like Cisco and Microsoft, use it as well. The TCP/IP model gives each protocol in the TCP/IP protocol suite a clear home in one of the four layers.

The clarity of the TCP/IP model shows the flaws in the OSI model. The OSI model couldn’t perfectly describe all the TCP/IP protocols.

The TCP/IP model fixes this ambiguity, at least for TCP/IP. Because of its tight protocol-to-layer integration, the TCP/IP model is a descriptive model, whereas the OSI seven-layer model is a prescriptive model.

The Link Layer

The TCP/IP model lumps together the OSI model's Layer 1 and Layer 2 into a single layer called the Link layer (or Network Interface layer), as seen in Figure 1-41. It's not that the Physical and Data Link layers are unimportant to TCP/IP, but the TCP/IP protocol suite really begins at Layer 3 of the OSI model. In essence, TCP/IP techs count on other techs to handle the physical connections in their networks. All of the pieces that you learned in the OSI model (cabling, physical addresses, NICs, and switches) sit squarely in the Link layer.
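The correspondence between the two models can be summarized in a small lookup table. The sketch below assumes the common four-layer naming (Application, Transport, Internet, Link/Network Interface); as noted above, the layer names vary between sources.

```python
# The four-layer TCP/IP model (common naming), mapped to OSI layer numbers.
TCP_IP_MODEL = {
    "Application": [7, 6, 5],   # OSI Application, Presentation, Session
    "Transport":   [4],         # OSI Transport
    "Internet":    [3],         # OSI Network
    "Link":        [2, 1],      # OSI Data Link + Physical ("Network Interface")
}

for tcp_layer, osi_layers in TCP_IP_MODEL.items():
    print(f"{tcp_layer:<12} covers OSI layer(s) {osi_layers}")
```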

CHAPTER 18

Managing Risk

The CompTIA Network+ certification exam expects you to know how to

• 1.4 Given a scenario, configure the appropriate IP addressing components

• 3.1 Given a scenario, use appropriate documentation and diagrams to manage the network

• 3.2 Compare and contrast business continuity and disaster recovery concepts

• 3.3 Explain common scanning, monitoring and patching processes and summarize their expected outputs

• 3.5 Identify policies and best practices

• 4.6 Explain common mitigation techniques and their purposes

• 5.2 Given a scenario, use the appropriate tool

To achieve these goals, you must be able to

• Describe the industry standards for risk management

• Discuss contingency planning

• Examine safety standards and actions

Companies need to manage risk, to minimize the dangers posed by internal and external threats. They need policies in place for expected dangers and also procedures established for things that will happen eventually. This is contingency planning. Finally, every company needs proper safety policies. Let’s look at all three facets of managing risk.

Test Specific

Risk Management

IT risk management is the process of how organizations deal with the bad things (let's call them attacks) that take place on their networks. The entire field of IT security is based on the premise that somewhere, at some time, something will attack some part of your network. The attack may take as many forms as your paranoia allows: intentional, unintentional, earthquake, accident, war, meteor impact … whatever.

What do we do about all these attacks? You can't afford to build up a defense for every possible attack—nor should you need to, for a number of reasons. First, different attacks have different probabilities of taking place. The probability of a meteor taking out your server room is very low. There is, however, a pretty good chance that some clueless user will eventually load malware on their company-issued laptop. Second, different attacks/potential problems have different impacts. If a meteor hits your server room, you're going to have a big, expensive problem. If a user forgets his password, it's not a big deal and is easily dealt with.
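That weighing of likelihood against impact is often captured as a simple risk score so you can decide where to spend your defensive budget first. The numeric scales below are illustrative assumptions, not a CompTIA formula.

```python
# Illustrative risk scoring: likelihood x impact, on made-up 1-5 scales.
threats = {
    # threat:                  (likelihood 1-5, impact 1-5)
    "meteor hits server room":    (1, 5),
    "user installs malware":      (4, 3),
    "user forgets password":      (5, 1),
}

for threat, (likelihood, impact) in sorted(
        threats.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{threat:<28} risk score = {likelihood * impact}")
# Higher scores get defenses first; low-likelihood, low-impact items may not.
```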

The CompTIA Network+ certification covers a number of issues that roughly fit under the idea of risk management. Let’s run through each of these individually.

NOTE One of the scariest attacks is a data breach. A data breach is any form of attack where secured data is taken or destroyed. The many corporate database hacks we’ve seen over the last few years—databases containing information about user passwords, credit card information, and other personal identification—are infamous examples of data breaches.

Security Policies

A security policy is a written document that defines how an organization will protect its IT infrastructure. There are hundreds of different security policies, but for the scope of the CompTIA Network+ certification exam we need to identify only a few of the most common ones. These policies include internal and external ones that affect just about every organization.

NOTE The CompTIA Network+ exam is, in my opinion, way too light in its coverage of security policies. The CompTIA Security+ exam does a much better job, but even it is a bit slim. Check out the Wikipedia entry for “security policy” to discover the many types of security policies in use today.

Acceptable Use Policy

The acceptable use policy (AUP) defines what is and what is not acceptable to do on an organization's computers. It's arguably the most famous of all security policies as this is one document that pretty much everyone who works for any organization is required to read, and in many cases sign, before they can start work. The following are some provisions contained in a typical acceptable use policy:

• Ownership Equipment and any proprietary information stored on the organization’s computers are the property of the organization.

• Network Access Users will access only information they are authorized to access.

• Privacy/Consent to Monitoring Anything users do on the organization’s computers is not private. The organization will monitor what is being done on computers at any time.

• Illegal Use No one may use an organization’s computers for anything that breaks a law. (This is usually broken down into many subheadings, such as introducing malware, hacking, scanning, spamming, and so forth.)

NOTE Many organizations require employees to sign an acceptable use policy, especially if it includes a consent to monitoring clause.

Network Access Policies

Companies need a policy that defines who can do what on the company's network. The network access policy defines who may access the network, how they may access the network, and what they can access. Network access policies may be embedded into policies such as VPN policy, password policy, encryption policy, and many others, but they need to be in place. Let's look at a couple specifically called out on the CompTIA Network+ exam objectives.

• Privileged user agreement policy A privileged user has access to resources just short of those available to administrators. Anyone granted one of those accounts should know the policies on what he or she can access without escalating a permission request. (This sort of policy also reflects on standard employee management of role separation, where users might have privileged access, but only to content that fits in their role in the company.)

• Password policy Password policies revolve around strength of password and rotation frequency (how often users have to change their passwords, password reuse, and so on). See “Training” later in this chapter for details.

• Data loss prevention policy Data loss prevention (DLP) can mean a lot of things, from redundant hardware and backups, to access levels to data. A DLP policy takes into consideration many of these factors and helps minimize the risk of loss or theft of essential company data.

• Remote access policy A remote access policy (like the VPN policy mentioned a moment ago) enforces rules on how and when and from what device users can access company resources from remote locations. A typical restriction might be no access from an open wireless portal, for example.

Policies reinforce an organization’s IT security. Policies help define what equipment is used, how data is organized, and what actions people take to ensure the security of an organization. Policies tell an organization how to handle almost any situation that might arise (such as disaster recovery, covered later in this chapter).

Externally Imposed Policies

Government laws and regulations impose policies on organizations. There are rules restricting what a company employee can bring with him or her to a conference in another country, for example. There are security policies that provide international export controls that restrict what technology—including hardware and software—can be exported.

The licensing restrictions on most commercial software allow users to travel with that software to other countries. Microsoft sells worldwide, for example, so visiting Beijing in the spring with the Microsoft Office 365 suite installed on your laptop is no big deal. Commercial encryption software, on the other hand, generally falls into the forbidden-for-foreign-travel list.

Data affected by laws, such as health information spelled out in the Health Insurance Portability and Accountability Act of 1996 (HIPAA), should not be stored on devices traveling to other countries. Often such data requires special export licenses.

Most organizations devote resources to comply with externally imposed policies. Just about every research university in the United States, for example, has export control officers who review all actions that risk crossing federal laws and regulations. It's a huge subject that the CompTIA Network+ exam only touches lightly.

Adherence to Policies

Given the importance of policies, it's also imperative for an organization to adhere to its policies strictly. This can often be a challenge. As technologies change, organizations must review and update policies to reflect those changes.

Initiating the Change

The first part of many change processes is a request from a part of the organization. Let's say you're in charge of IT network support for a massive art department. There are over 150 graphic artists, each manning a powerful macOS workstation. The artists have discovered a new graphics program that they claim will dramatically improve their ability to do what they do. After a quick read of the program's features on its Web site, you're also convinced that this is a good idea. It's now your job to make this happen.

Create a change request. Depending on the organization, this can be a highly official document or, for a smaller organization, nothing more than a detailed e-mail message. Whatever the case, you need to document the reason for this change. A good change request will include the following (a minimal structured sketch follows the list):

• Type of change Software and hardware changes are obviously part of this category, but this could also encompass issues like backup methods, work hours, network access, workflow changes, and so forth.

• Configuration procedures What is it going to take to make this happen? Who will help? How long will it take?

• Rollback process If this change in some way makes such a negative impact that going back to how things were before the change is needed, what will it take to roll back to the previous configuration?

• Potential impact How will this change impact the organization? Will it save time? Save money? Increase efficiency? Will it affect the perception of the organization?

• Notification What steps will be taken to notify the organization about this change?
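Here is a minimal sketch of those fields captured as a structured record, assuming a small shop that tracks change requests programmatically rather than in a formal ticketing system; the field names are illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    # Fields mirror the items in the list above; names are illustrative.
    change_type: str            # software, hardware, workflow, ...
    configuration_steps: list   # what it takes, who helps, how long
    rollback_process: str       # how to return to the previous configuration
    potential_impact: str       # time, money, efficiency, perception
    notification_plan: str      # how affected staff learn about the change
    approved: bool = False

request = ChangeRequest(
    change_type="software",
    configuration_steps=["purchase licenses", "install on 150 workstations"],
    rollback_process="uninstall and restore the previous graphics program",
    potential_impact="faster artwork turnaround for the art department",
    notification_plan="e-mail all artists a week before the maintenance window",
)
print(request.approved)   # False until the change management team signs off
```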

Dealing with the Change Management Team

With your change request in hand, it's time to get the change approved. In most organizations, change management teams meet at fixed intervals, so there's usually a deadline for you to be ready at a certain time. From here, most organizations will rely heavily on a well-written change request form to get the details. The approval process usually covers the issues listed in the change request, plus management approval and funding.

Making the Change Happen

Once your change is approved, the real work starts. Equipment, software, tools, and so forth must be purchased. Configuration teams need to be trained. The change committee must provide an adequate maintenance window: the time it will take to implement and thoroughly test the coming changes. As part of that process, the committee must authorize downtime for systems, departments, and so on. Your job is to provide notification of the change to those people who will be affected, if possible providing alternative workplaces or equipment.

Documenting the Change

The ongoing and last step of the change is change management documentation. All changes must be clearly documented, including but not limited to:

• Network configurations, such as server settings, router configurations, and so on

• Additions to the network, such as additional servers, switches, and so on

• Physical location changes, such as moved workstations, relocated switches, and so on

Patching and Updates

It's often argued whether applying patches and updates to existing systems fits under change management or regular maintenance. In general, all but the most major patches and updates are really more of a maintenance issue than a change management issue. But, given the similarity of patching to change management, it seems that here is as good a place as any to discuss patching.

EXAM TIP CompTIA calls regularly updating operating systems and applications to avoid security threats patch management.

When we talk about patching and updates, we aren't just talking about the handy tools provided to us by Microsoft Windows or Ubuntu Linux. Almost every piece of software and firmware on almost every type of equipment you own is subject to patching and updating: printers, routers, wireless access points, desktops, programmable logic controllers (PLCs) … everything needs a patch or update now and then.

What Do We Update?

In general, specific types of updates routinely take place. Let's cover each of these individually, starting with the easiest and most famous, operating system (OS) updates.

OS updates are easily the most common type of update. Individuals install automatic updates on their OSs with impunity, but when you’re updating a large number of systems, especially critical nodes like servers, it’s never a good idea to apply all OS updates without a little bit of due diligence beforehand. Most operating systems provide some method of network server-based patching, giving administrators the opportunity to test first and then distribute patches when they desire.

All systems use device drivers, and they are another part of the system we often need to patch. In general, we only apply driver updates to fix an incompatibility, incorporate new features, or repair a bug. Since device drivers are only present in systems with full-blown operating systems, all OS-updating tools will include device drivers in their updates. Many patches will include feature changes and updates, as well as security vulnerability patches.

Feature changes/updates are just what they sound like: adding new functionality to the system. Remember back in the old days when a touchscreen phone only understood a single touch? Then some phone operating system came out to provide multi-touch. Competitors responded with patches to their own phone OSs that added the multi-touch feature.

All software of any complexity has flaws. Hardware changes, exposing flaws in the software that supports that hardware; newer applications create unexpected interactions; security standards change over time. All of these factors mean that responsible companies patch their products after they release them. How they approach the patching depends on scope: major vs. minor updates require different actions.

When a major vulnerability to an OS or other system is discovered, vendors tend to respond quickly by creating a fix in the form of a vulnerability patch. If the vulnerability is significant, that patch is usually made available as soon as it is complete. Sometimes, these high-priority security patches are even pushed to the end user right away.

Less significant vulnerabilities get patched as part of a regular patch cycle. You may have noticed that on the second Wednesday of each month, Microsoft-based computers reboot. Since October of 2003, Microsoft has sent out patches that have been in development and are ready for deployment on the second Tuesday of the month. This has become known as Patch Tuesday. These patches are released for a wide variety of Microsoft products, including operating systems, productivity applications, utilities, and more.
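Since “the second Tuesday of the month” is just calendar arithmetic, it is easy to compute. A quick sketch using Python's standard library:

```python
import calendar
from datetime import date

def patch_tuesday(year: int, month: int) -> date:
    # Find the first Tuesday of the month, then add one week.
    first_weekday, _ = calendar.monthrange(year, month)   # Monday == 0
    offset = (calendar.TUESDAY - first_weekday) % 7
    return date(year, month, 1 + offset + 7)

print(patch_tuesday(2003, 10))   # 2003-10-14, when the monthly cycle began
print(patch_tuesday(2018, 6))    # second Tuesday of any other month
```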

Firmware updates are far less common than software updates and usually aren’t as automated (although a few motherboard makers might challenge this statement). In general, firmware patching is a manual process and is done in response to a known problem or issue. Keep in mind that firmware updates are inherently risky, because in many cases it’s difficult to recover from a bad patch.

Training

End users are probably the primary source of security problems for any organization. We must increase end user awareness and training so they know what to look for and how to act to avoid or reduce attacks. Training users is a critical piece of managing risk. While a formal course is preferred, it's up to the IT department to do what it can to make sure users have an understanding of the following:

• Security policies Users need to read, understand, and, when necessary, sign all pertinent security policies.

• Passwords Make sure users understand basic password skills, such as sufficient length and complexity, refreshing passwords regularly, and password control. Traditional best practices for complexity, for example, use a minimum length of 8 characters—longer is better—with a combination of upper- and lowercase letters, numbers, and nonalphanumeric symbols, like !, $, &, and so on. Management should insist on new passwords every month and should not allow users to reuse a password for a year or more.
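A minimal sketch of checking those traditional complexity rules (at least 8 characters, mixed case, a digit, and a symbol); the exact rule set here is an assumption drawn from the guidance above, and real password policies vary.

```python
import string

def meets_policy(password: str, min_length: int = 8) -> bool:
    # Traditional complexity rules from the guidance above; adjust to policy.
    return (
        len(password) >= min_length
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_policy("password"))     # False: no uppercase, digit, or symbol
print(meets_policy("C0ffee!Run"))   # True: meets all four rules
```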

Standard Business Documents

Dealing with third-party vendors is an ongoing part of any organization. When you are dealing with third parties, you must have some form of agreement that defines the relationship between you and the third party. The CompTIA Network+ exam expects you to know about five specific business documents: a service level agreement, a memorandum of understanding, a multi-source agreement, a statement of work, and a nondisclosure agreement. Let's review each of these documents.

Service Level Agreement

A service level agreement (SLA) is a document between a customer and a service provider that defines the scope, quality, and terms of the service to be provided. In CompTIA terminology, SLA requirements are a common part of business continuity and disaster recovery (both covered a little later in this chapter).

SLAs are common in IT, given the large number of services provided. Some of the more common SLAs in IT are provided by ISPs to customers. A typical SLA from an ISP contains the following:

• Definition of the service provided Defines the minimum and/or maximum bandwidth and describes any recompense for degraded services or downtime.

• Equipment Defines what equipment, if any, the ISP provides. It also specifies the type of connections to be provided.

• Technical support Defines the level of technical support that will be given, such as phone support, Web support, and in-person support. This also defines costs for that support.
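One term that often appears under the definition of the service provided is an availability percentage, and converting that percentage into allowed downtime is simple arithmetic. The 99.9 and 99.99 percent figures below are illustrative assumptions, not values from the text.

```python
# Converting an SLA availability percentage into allowed downtime per month.
def allowed_downtime_minutes(availability_pct: float, days: int = 30) -> float:
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

print(round(allowed_downtime_minutes(99.9), 1))    # ~43.2 minutes per month
print(round(allowed_downtime_minutes(99.99), 1))   # ~4.3 minutes per month
```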

Memorandum of Understanding

A memorandum of understanding (MOU) is a document that defines an agreement between two parties in situations where a legal contract wouldn't be appropriate. An MOU defines the duties the parties commit to perform for each other and a time frame for the MOU. An MOU is common between companies that have only occasional business relations with each other. For example, all of the hospitals in a city might generate an MOU to take on each other's patients in case of a disaster such as a fire or tornado. This MOU would define costs, contacts, logistics, and so forth.

Multi-source Agreement

Manufacturers of various network hardware agree to a multi-source agreement (MSA), a document that details the interoperability of their components. For example, two companies might agree that their gigabit interface converters (GBICs) will work in Cisco and Juniper switches.

Statement of Work

A statement of work (SOW) is in essence a legal contract between a vendor and a customer. An SOW defines the services and products the vendor agrees to supply and the time frames in which to supply them. A typical SOW might be between an IT security company and a customer. An SOW tends to be a detailed document, clearly explaining what the vendor needs to do. Time frames must also be very detailed, with milestones through the completion of the work.

Nondisclosure Agreement

Any company with substantial intellectual property will require new employees—and occasionally even potential candidates—to sign a nondisclosure agreement (NDA). An NDA is a legal document that prohibits the signer from disclosing any company secrets learned as part of his or her job.

Security Preparedness

Preparing for incidents is the cornerstone of managing risk. If you decide to take the next logical CompTIA certification, the CompTIA Security+, you'll find an incredibly detailed discussion of how the IT security industry spends inordinate amounts of time and energy creating a secure IT environment. But for the CompTIA Network+ certification, there are two issues that come up: vulnerability scanning and penetration testing.