The core network is facilitated by network servers (also called service nodes). Service nodes are composed of (1) a variety of data sources (such as cached read-only, updateable, and shared data sources) to store data such as subscriber profiles and (2) service logic to perform functions such as computing data items, retrieving data items from data sources, and so on.

Service nodes can be of different types, with each type assigned specific functions. The major service node types in the circuit-switched domain include the Home Location Register (HLR), the Visitor Location Register (VLR), the Mobile Switching Center (MSC), and the Gateway Mobile Switching Center (GMSC) [4].

All subscribers are permanently assigned to a fixed HLR located in the home network. The HLR stores permanent subscriber profile data and relevant temporary data, such as the current subscriber location (a pointer to the VLR), for all subscribers assigned to it. Each network area is assigned a VLR. The VLR stores temporary data for subscribers currently roaming in its assigned area; this subscriber data is received from the subscriber's HLR. Each VLR is associated with an MSC. The MSC acts as an interface between the radio access network and the core network. It also handles circuit-switched services for subscribers currently roaming in its area. The GMSC is in charge of routing the call to the actual location of the mobile station. Specifically, the GMSC acts as an interface between the fixed PSTN and the cellular network. The radio access network comprises a transmitter, a receiver, and a speech transcoder, collectively called the base station (BS) [5].

Service nodes are geographically distributed and serve the subscriber through the collaborative functioning of various network components. Such collaborative functioning is possible due to inter-component network relationships (called dependencies). A dependency means that a network component must rely on other network components to perform a function. For example, there is a dependency between service nodes to serve subscribers. Such a dependency is realized through signaling messages containing data items. Service nodes typically request other service nodes to perform specific operations by sending them signaling messages containing data items with predetermined values. On receiving a signaling message, a service node determines the operations to perform based on the values of the data items it contains. Further, dependencies may exist between data items, so that received data items may be used to derive other data items. Several application layer protocols are used for signaling messages; examples include the Mobile Application Part (MAP), ISDN User Part (ISUP), and Transaction Capabilities Application Part (TCAP) protocols.

Typically in a cellular network, to provide a specific service a preset group of signaling messages is exchanged between a preset group of service node types. The preset group of signaling messages indicates the operations to be performed at the various service nodes and is called a signal flow. In the following, we use the call delivery service [6] to illustrate a signal flow and show how the various geographically distributed service nodes function together.
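To make the idea of a signal flow concrete before that discussion, the sketch below represents call delivery as an ordered list of (sender, receiver, protocol, message) tuples. It is an illustrative sketch only: the message names follow common GSM MAP/ISUP usage and are assumptions, not quotations from this chapter.

```python
# Illustrative sketch of a signal flow: message names follow common GSM usage
# for call delivery and are assumptions, not taken from the chapter.
CALL_DELIVERY_FLOW = [
    ("PSTN", "GMSC", "ISUP", "Initial Address Message (IAM)"),
    ("GMSC", "HLR",  "MAP",  "Send Routing Information"),
    ("HLR",  "VLR",  "MAP",  "Provide Roaming Number"),
    ("VLR",  "HLR",  "MAP",  "Provide Roaming Number ack (returns the MSRN)"),
    ("HLR",  "GMSC", "MAP",  "Send Routing Information ack (returns the MSRN)"),
    ("GMSC", "MSC",  "ISUP", "IAM routed on the MSRN to the serving MSC"),
]

for sender, receiver, protocol, message in CALL_DELIVERY_FLOW:
    print(f"{sender:>4} -> {receiver:<4} [{protocol:4}] {message}")
```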

URL: https://www.sciencedirect.com/science/article/pii/B9780124166899000113

Intelligent Control with Neural Networks

D. POPOVIC, in Soft Computing and Intelligent Systems, 2000

4.3 Hopfield Networks

The Hopfield network is a typical recurrent, fully interconnected network in which every processing unit is connected to all other units (Figure 9).

FIGURE 9. Hopfield network architecture.

It is represented by a vector

S = (s_1, s_2, …, s_n)

that describes the instantaneous state of the network. The components of the state vector are binary variables and can have either the value 0 or the value 1. Furthermore, the weights of the interconnections between the nodes, called the connection strengths, are elements of the symmetric matrix

W_ij = W_ji

Each node is excited by the resulting input signal

X_j = Σ_i S_i W_ij

where S_i is the binary output value of processing unit i. If the net input satisfies

X_j ≥ 0,

the neuron of processing unit j fires and its output takes the value 1; otherwise it is inhibited and its output is 0.
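The update rule above translates directly into a few lines of code. The following is a minimal sketch (not from the chapter), assuming 0/1 unit states, a symmetric zero-diagonal weight matrix, and NumPy for the matrix arithmetic; units are updated asynchronously in random order.

```python
# Minimal sketch of the asynchronous Hopfield update rule described in the text:
# unit j fires (output 1) when its net input X_j = sum_i S_i * W_ij >= 0, else 0.
import numpy as np

def hopfield_step(state, W, j):
    x_j = state @ W[:, j]             # net input X_j to processing unit j
    state[j] = 1 if x_j >= 0 else 0   # threshold at zero, as in the text
    return state

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W = (W + W.T) / 2                     # enforce the symmetry W_ij = W_ji
np.fill_diagonal(W, 0.0)              # no self-connections

S = np.array([1, 0, 1, 0])            # instantaneous state vector
for j in rng.permutation(4):          # asynchronous updates in random order
    S = hopfield_step(S, W, j)
print(S)
```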

URL: https://www.sciencedirect.com/science/article/pii/B9780126464900500214

TCP/IP

William Buchanan BSc (Hons), CEng, PhD, in Computer Busses, 2000

23.2 TCP/IP gateways and hosts

TCP/IP hosts are nodes which communicate over interconnected networks using TCP/IP communications. A TCP/IP gateway node connects one type of network to another. It contains hardware to provide the physical link between the different networks and the hardware and software to convert frames from one network to the other. Typically, it converts a Token Ring MAC layer to an equivalent Ethernet MAC layer, and vice versa.

A router connects networks of the same type through a point-to-point link. The main operational difference between a gateway, a router, and a bridge is that, for a Token Ring and Ethernet network, the bridge uses the 48-bit MAC address to route frames, whereas the gateway and router use the IP network address. As an analogy to the public telephone system, the MAC address would be equivalent to a randomly assigned telephone number, whereas the IP address would contain the information on where the telephone is logically located, such as which country, area code, and so on.

Figure 23.2 shows how a gateway (or router) routes information. It reads the frame from a computer on network A, examines the IP address contained in the frame, and decides whether the frame should be routed out of network A to network B. If so, it relays the frame to network B.

Figure 23.2. Internet gateway layers
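As a rough sketch of that routing decision (not from the chapter), the logic amounts to a prefix test on the destination IP address; the network A prefix below is an assumed example.

```python
# Hedged sketch of the gateway's routing decision: relay a frame from network A
# to network B only when the destination IP address lies outside network A.
import ipaddress

NETWORK_A = ipaddress.ip_network("192.168.1.0/24")   # assumed address range for network A

def relay_to_network_b(dest_ip: str) -> bool:
    return ipaddress.ip_address(dest_ip) not in NETWORK_A

print(relay_to_network_b("192.168.1.42"))   # False: destination is on network A
print(relay_to_network_b("10.0.0.7"))       # True: frame is relayed to network B
```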

URL: https://www.sciencedirect.com/science/article/pii/B9780340740767500237

Introduction

In Next Generation SSH2 Implementation, 2009

Why Is There a Need To Use SSH?

In the beginning there were mainframe computers. These large computers allowed programmers to input large mathematical formulas that would take hours or days to solve by hand. These computers could take the same formula and data and solve it in seconds or minutes. As these computers became more flexible and could handle not only mathematical data but also text and numerical information, people began to use them to manage more and more business and research data. Computers became more than just a tool for colleges and government organizations, as they started to be able to manage business data. As they became smaller and more powerful, tools to input and store data came into being and costs became more reasonable.

Increasingly, these customers were businesses. These computers stored massive amounts of data, and people could access these machines in a controlled environment. This network topology was called the Centralized Data Model: all the data was stored on one central computer, and access was through "dumb" terminals. The terminals themselves had no processing power or storage of their own. This protected the data from loss, damage, theft, and spying. In this model encryption was not necessary, as the data was never exposed to the outside world. People could see only what the administrators allowed through the "green screen," or dumb terminal.

As computers became more powerful and a need to share data across diverse and distant locations became more prevalent, wide area connections were established. At first these connections were done over analog phone lines using modem (Modulator/Demodulator) technology. There were two types of modems, synchronous and asynchronous. Synchronous modems used a special timing bit in the stream to keep the communications channel operating smoothly. In asynchronous modems, instead of a constant timing bit, the technology used a start and stop bit for each part of the transmission, ensuring each piece of data was received consistently. These analog connections were point to point and it was not easy for people to “listen in” on these connections.

As communications technology progressed, a shared, or interconnected, network of networks developed, and as more and more "private" data was transmitted over these open links, encrypted transmission became necessary. In addition to wide area transmission, personal computers also brought about internal networks, or Local Area Networks (LANs). These internal networks allowed computers to transmit and receive data from other computers and servers within the building. The data traffic of these devices became subject to eavesdropping by other individuals inside the network. This eavesdropping, also known as packet capturing, allowed internal people to view data they might not otherwise have had the privilege of viewing. These two scenarios increased the need for data encryption.

Are You Owned?

Data Loss, an Inside Job

Survey after survey shows that data loss and data exposure are most likely done by people inside the organization. Check out some of the statistics:

61% of respondents think data leakage is an insider's job. 23% believe those leaks are malicious.

McAfee and Datamonitor's Data Loss Survey, 2007 (requires registration)

85% of organizations surveyed reported that they have had a data breach event.

Scott and Scott LLP and Ponemon Institute LLC, May 15th, 2007

One third of companies surveyed said a major security breach could put them out of business.

McAfee and Datamonitor's Data Loss Survey, 2007 (requires registration)

More than 90% of the breaches were in digital form.

2006 Annual Study: The Cost of Data Breach. Ponemon Institute, LLC, 2007

These statistics can be found at: http://www.absolute.com/resources/computer-theft-statistics-details.asp

For each type of remote connection, there are options on how to secure it. In this book we will focus on remote login/control from a client to a server. In the early days, we had two options. The first was remote login, or RLOGIN (TCP port 513); it allowed us to open a session on a UNIX server and issue commands. The second option was telnet (TCP port 23); both of these protocols use a clear text channel to send and receive information. Any user with a packet capture program like Wireshark™ will be able to see the entire session, including usernames and passwords. As networks became more vulnerable to these types of attacks and data leakage, we needed to protect the sessions. For this connectivity issue, SSH is the answer.

SSH employs strong, industry-recognized encryption methods to protect your data from exposure. It makes no difference whether you are using SSH across your local area network or over the Internet from a remote location; your data will be secured in these encrypted channels. This software replaces telnet and rlogin as your connectivity method and offers protection for your data. Continued use of rlogin and telnet could be considered a violation of your organization's security policy and, in some cases, a violation of law; Sarbanes-Oxley, for example, mandates that all communications containing financial data must be encrypted. If you are using telnet to create a remote session to a UNIX computer that contains your financial application, you are not in compliance with Sarbanes-Oxley.
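As a hedged illustration of that replacement, the sketch below opens an encrypted SSH session and runs a single command using the third-party paramiko library; the host name and credentials are placeholders, and production code would verify host keys rather than auto-accept them.

```python
# Minimal sketch: an encrypted SSH session in place of clear-text telnet/rlogin.
# Requires the third-party "paramiko" package; host and credentials are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # illustration only; verify keys in practice
client.connect("unix-server.example.com", username="operator", password="change-me")

stdin, stdout, stderr = client.exec_command("uname -a")        # runs over the encrypted channel
print(stdout.read().decode())
client.close()
```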

URL: https://www.sciencedirect.com/science/article/pii/B9781597492836000015

Microsoft Windows Server 2008

Aaron Tiensivu, in Securing Windows Server 2008, 2008

Not Your Father's TCP/IP Stack

Before TCP/IP there was the 1822 Protocol, which was developed in 1969 for the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense. In 1973 Vint Cerf and Bob Kahn took the existing protocols and rebuilt them into what we now know as TCP/IP version 4. These protocols became the foundation of what we now use on the Internet.

In TCP/IP version 4 (IPv4) we had a 32-bit address space that would support approximately 4.3 billion addresses. This was more than enough for the ARPA network (ARPANET) that they were working on. In the early days of the ARPANET, one computer was added every 20 days. In the 1970s and 1980s there was no question of running out of addresses on this private network, used by the government and university research facilities to share data. In 1990 the responsibility for the network was transferred to the National Science Foundation and it became the NSFNET. Although government and universities still shared the network, the National Science Foundation added six supercomputers and opened up its use to high-tech companies as well.

In 1995 the National Science Foundation offered the backbone of the network to a communications company, which would then have control of the NSFNET and thereby make it a for-profit network. The company purchased part of the backbone, and other companies were able to purchase pieces, thus creating the Internet as we know it today.

Once the Internet became a for-profit network of interconnected networks, it became apparent that the public address space of IPv4 was not sufficient to meet the demands of a global network. The use of standards like network address translation (NAT) bought time while a new standard was developed. This new standard became IPv6, the next generation of Internet Protocol addressing.

Notes from the Underground…

TCP/IP: The Next Generation

Windows Server 2008 includes the following new features:

A dual-layer IP architecture for IPv6

Support for a strong host model and for scaling on multiprocessor computers

New packet filtering and security for APIs

Support for a kernel-mode programming interface, called Winsock Kernel, which was designed to replace the transport driver interface (TDI) in Windows XP and Windows Server 2003

Routing compartments and new mechanisms for protocol stack overloads

For a full discussion of the changes to the TCP/IP implementation in Windows Server 2008 read Microsoft's TechNet article located at www.microsoft.com/technet/community/columns/cableguy/cg0905.mspx.

Introduction of IPv6 and Dual Stack

The dual-stack architecture of IPv6 allows for the running of IPv6 and IPv4 at the same time. If both endpoints are capable of communicating on IPv6, they will. If either of the endpoints cannot use IPv6, they will fall back to IPv4. This makes the transition to IPv6 smoother as both endpoints do not have to be native IPv6 all at once.
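A hedged sketch of that fallback behavior from an application's point of view: a dual-stack client asks the resolver for both address families and tries the returned addresses in order (IPv6 first where the platform prefers it), falling back to IPv4 when an IPv6 attempt fails. The host name below is a placeholder.

```python
# Minimal sketch of a dual-stack client: try each resolved address in order,
# which on a dual-stack host typically means IPv6 first, then IPv4.
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock                 # connected over IPv6 if available, else IPv4
        except OSError as err:
            last_error = err
    raise last_error if last_error else OSError("no addresses resolved")

# Example (placeholder host): conn = connect_dual_stack("www.example.com", 80)
```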

IPv6 Addressing Conventions

IPv6 uses a 128-bit address space. The first 64 bits are the network portion and the remaining 64 bits are the host portion of the address. The first 16 bits determine the address type.

The main types of addresses are unicast, multicast, and anycast.

Unicast addresses are used for endpoint-to-endpoint communications and can be site-local, link-local, or globally routable addresses. If communication between endpoints is internal and access to the Internet is not required, then site-local or link-local addresses can be used. If connection to the Internet is required, then globally routable addresses must be used.
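As a quick illustrative check of the 64-bit network / 64-bit host split (not part of the original text), Python's standard ipaddress module can confirm that an address falls within a given /64 prefix; the addresses below use the 2001:db8::/32 documentation range.

```python
# Sketch: the first 64 bits form the network prefix, the last 64 bits the interface ID.
import ipaddress

net  = ipaddress.IPv6Network("2001:db8:0:1::/64")
host = ipaddress.IPv6Address("2001:db8:0:1::1a2b")

print(host in net)        # True: the host's first 64 bits match the network prefix
print(net.prefixlen)      # 64
print(host.exploded)      # 2001:0db8:0000:0001:0000:0000:0000:1a2b (all 128 bits written out)
```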

IPv6 Assigned Unicast Routable Address Prefixes

Here is a list of IPv6 assigned Unicast routable address prefixes.

Prefix (Hex)            Description
2001::/16               IPv6 Internet
2002::/16               IPv6 to IPv4 transition mechanisms
2003::/16 - 3FFD::/16   Unassigned (for future use)
3FFE::/16               6Bone

As you can see, this is a tremendous address space. IPv6 also does away with broadcast addressing entirely. Broadcasts were something of a misnomer under IPv4 anyway: the destination address 255.255.255.255, the universal broadcast, was never truly a broadcast to all stations on all networks, because routers stop broadcast packets, keeping them on the network on which they originated. In IPv6, the role of broadcast is taken over by multicast (for example, the all-nodes multicast address FF02::1), while an anycast address is one assigned to multiple interfaces, with a packet sent to it delivered to the nearest ("any") one of them.

There are some special addresses reserved for specific uses on networks—for example, loopback, internal-only networks, and so on.

A few of the special addresses include the following:

::1/128 (or just ::1) Local loopback address, refers to the local computer.

::FFFF:0:0/96 Prefix used for IPv4 mapped addresses.

::/0 The default route; used to point to the default gateway for your system.

FE80::/64 Link-local addresses. Every IPv6 interface configures one of these automatically; seeing only a link-local address on an interface (with no global address) suggests that no router advertisements or DHCPv6 server were available (a short verification sketch follows this list).
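As noted, the sketch below (an illustration, not from the original) uses Python's standard ipaddress module to recognize several of these reserved addresses:

```python
# Quick checks of the reserved IPv6 addresses listed above.
import ipaddress

print(ipaddress.IPv6Address("::1").is_loopback)                # True: local loopback
print(ipaddress.IPv6Address("::ffff:192.0.2.1").ipv4_mapped)   # 192.0.2.1, an IPv4-mapped address
print(ipaddress.ip_network("::/0"))                            # ::/0, the default route
print(ipaddress.IPv6Address("fe80::1").is_link_local)          # True: FE80::/64 link-local
```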

IPv6 Auto-Configuration Options

Depending on how your IPv6 routers are set up, auto-configuration of an IPv6 client can happen in three ways: stateless, stateful, or both. In stateless mode, an IPv6 client configures its own IPv6 address by using IPv6 router advertisements. In stateful mode, an IPv6 client gets its addressing information from a Dynamic Host Configuration Protocol version 6 (DHCPv6) server when it receives router advertisement messages with no prefix options (and when certain other conditions are met); this also occurs if no IPv6 routers are available. The both option uses stateful and stateless together. The most common example of this is an IPv6 client using stateless auto-configuration to obtain an IPv6 address and using stateful auto-configuration to get DNS and other IP configuration information from a DHCPv6 server.
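As a hedged sketch of the stateless case (not from the text), the classic EUI-64 method builds the 64-bit interface identifier from the interface's MAC address and appends it to the /64 prefix learned from router advertisements; note that modern Windows hosts use randomized interface identifiers by default, so this is purely illustrative.

```python
# Illustrative EUI-64 step of stateless auto-configuration: derive the interface
# identifier from a MAC address and combine it with an advertised /64 prefix.
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    octets = bytes(int(part, 16) for part in mac.split(":"))
    eui64 = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:]  # flip U/L bit, insert FFFE
    interface_id = int.from_bytes(eui64, "big")
    network = int(ipaddress.IPv6Network(prefix).network_address)
    return ipaddress.IPv6Address(network | interface_id)

print(slaac_address("2001:db8:0:1::/64", "00:1a:2b:3c:4d:5e"))  # 2001:db8:0:1:21a:2bff:fe3c:4d5e
```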

In addition, addresses can be nontemporary (the equivalent of static IP addresses in IPv4) or temporary. Routers, gateways, and other devices may need these types of addresses and, just as with IPv4, you can allow a host to auto-configure or you can manually set up the IPv6 addressing.

IPv6 Transition Technologies

Because the transition to IPv6 won't happen overnight (or even anytime soon), there are numerous ways companies can transition to IPv6. Some options are provided in the list below. For more information, you can visit the Microsoft Web site and query for the title “IPv6 Transition Technologies.” The following list includes some options for transitioning to IPv6:

Dual IP Layer architecture Allows computers to communicate using both IPv6 and IPv4. This is required for ISATAP and Teredo hosts, and for 6to4 routers.

IPv6 over IPv4 tunneling Places IPv6 packet data inside an IPv4 header with an IP protocol value of 41. This tunneling technique is used with either ISATAP or 6to4.

Intra-Site Automatic Tunnel Addressing Protocol (ISATAP) Allows IPv6 hosts to use IPv6 over IPv4 tunneling to communicate on intranets.

6to4 Allows IPv6 hosts to communicate with the IPv6-based Internet. A 6to4 router with a public IPv4 address is required (a short address-construction sketch follows this list).

Teredo Allows IPv4/IPv6 hosts to communicate with the IPv6-based Internet even if they are behind a network address translator (NAT).
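As referenced above, here is a hedged sketch of how a 6to4 prefix embeds the router's public IPv4 address under the 2002::/16 prefix listed earlier; the IPv4 address shown is a documentation address, not a real deployment.

```python
# Build the 2002::/16-based 6to4 prefix from a (placeholder) public IPv4 address.
import ipaddress

v4 = ipaddress.IPv4Address("192.0.2.1")
prefix = ipaddress.IPv6Network(f"2002:{int(v4) >> 16:04x}:{int(v4) & 0xffff:04x}::/48")
print(prefix)                                                   # 2002:c000:201::/48

# The standard library can also recover the embedded IPv4 address from a 6to4 address:
print(ipaddress.IPv6Address("2002:c000:201::1").sixtofour)      # 192.0.2.1
```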

Configuring IPv6 Settings

The first time you log on to a Windows Server 2008 server, you will see the Server Manager screen shown in Figure 6.1.

Figure 6.1. Server Manager on Windows Server 2008

You can go directly to the Network Connections Control Panel Page (see Figure 6.2) using the Computer Information Section of the Server Summary.

Figure 6.2. The Network Connections Control Panel

In the Network Connections Control Panel you will find your Ethernet and Wireless network connections. Right-click on the connection you wish to work with and select Properties from the pop-up menu (see Figure 6.3).

Figure 6.3. Selecting a Connection

Once you have selected Properties, the screen shown in Figure 6.4 will appear.

Figure 6.4. Local Area Connection Properties

From the Networking tab you can set the options for IPv6 (see Figure 6.5).

Figure 6.5. IPv6 Properties

As with IPv4, you would typically allow host computers to obtain an IPv6 address automatically from the DHCP server. However, since this computer is a server, you may want to assign a nontemporary IP address to it. (Recall that nontemporary is the IPv6 equivalent of a static IP address in IPv4.) If you choose to use a nontemporary address, you could click the radio button next to the Use the following IPv6 address option and enter the specifics. Also remember that if you set a nontemporary IP address here, you should create a reservation for this address in the DHCP server so that this address does not get assigned to another computer on the network. Best practices typically include creating your DHCP server scope and reservations before activating the DHCP server, then activating the DHCP server and assigning nontemporary (and static) IP addresses. This helps avoid potential problems with IP address assignments.

URL: https://www.sciencedirect.com/science/article/pii/B9781597492805000067

Examining the ISA Server 2004 Feature Set

Dr. Thomas W. Shinder, Debra Littlejohn Shinder, in Dr. Tom Shinder's Configuring ISA Server 2004, 2005

New Features on the Block

With ISA Server 2004, Microsoft has introduced a multi-networking model that is appropriate for interconnected networks used by many corporations.

Now you can create network rules and control how different networks communicate with one another.

ISA Server 2004 includes several built-in network definitions, including: the Internal network (includes the addresses on the primary protected network), the External network (includes addresses that don't belong to any other network), the VPN clients network (includes the addresses assigned to VPN clients), and the Local host network (includes the IP addresses on the ISA Server).

ISA Server 2004's new multi-networking features make it easy for you to protect your network against internal and external security threats by limiting communication between clients, even within your own organization.

You can use ISA Server 2004 to define the routing relationship between networks, depending on the type of access and communication required between the networks.

ISA Server 2004 provides network templates that you can use to easily configure firewall policy governing the traffic between multiple networks.

ISA Server 2004's HTTP policy allows the firewall to perform deep HTTP stateful inspection (application layer filtering). You can configure the extent of the inspection on a per-rule basis.

You can configure ISA Server 2004's HTTP policy to block all connection attempts to Windows executable content, regardless of the file extension used on the resource.

ISA Server 2004's HTTP policy makes it easy for you to allow all file extensions, allow all except a specified group of extensions, or block all extensions except for a specified group.

With ISA Server 2004's HTTP policy, you can control HTTP access for all ISA Server 2004 client connections, regardless of client type.

ISA Server 2004's deep HTTP inspection also allows you to create “HTTP Signatures” that can be compared to the Request URL, Request headers, Request body, Response headers, and Response body.

You can control which HTTP methods are allowed through the firewall by setting access controls on user access to various methods.

ISA Server 2004's Secure Exchange Server Publishing Rules allow remote users to connect to the Exchange server by using the fully-functional Outlook MAPI client over the Internet.

You can configure ISA Server 2004's FTP policy to allow users to upload and download via FTP, or you can limit user FTP access to download only.

ISA Server 2004 includes a link translation feature, which allows you to create a dictionary of definitions for internal computer names that map to publicly-known names.

ISA Server 2004 leverages the Network Access Quarantine Control feature built into Windows Server 2003 to provide VPN quarantine, which allows you to quarantine VPN clients on a separate network until they meet a predefined set of security requirements.

ISA Server 2004 adds support for port redirection and the ability to publish FTP servers on alternate ports.

URL: https://www.sciencedirect.com/science/article/pii/B9781931836197500095

A Brief Introduction to Smart Grid Safety and Security

S. Khoussi, A. Mattas, in Handbook of System Safety and Security, 2017

11.1.2 The Traditional Power Grid

The power grid is one of the most complex engineered systems in the modern world. It is an interconnected network consisting of power plants, transmission lines, substations, distribution lines, and users. The whole idea of the power grid is to deliver power from the generation sources to the service locations [3] (businesses and consumers) [4,5]. This is accomplished today through the following steps [6] of energy conversion and delivery [7,8]:

Generation

In 2014 there were about 19,745 individual generators, with nameplate generation capacities of at least 1 megawatt (MW), at about 7677 operational power plants in the United States. A power plant may have one or more generators, and some generators may use more than one type of fuel. Most of these plants are centralized and built away from densely populated areas. These power plants contain electromechanical generators, driven by water or by heat engines powered by steam from the combustion of fossil fuels, including coal, petroleum, natural gas, and liquefied petroleum gas [9].

Transmission

To move the generated electric power over long distances with less loss, it is stepped up to higher voltages and carried over transmission lines to substations, since power plants are located in isolated, sparsely populated regions away from consumers [10,11].
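A small worked example (not in the original) of why stepping up the voltage cuts losses: for a fixed power delivered over a line of fixed resistance, the current falls as the voltage rises, and resistive loss falls with the square of the current. The voltage and resistance figures below are purely illustrative.

```python
# Delivering 100 MW over a line with an assumed resistance of 10 ohms,
# at a distribution-class voltage versus a transmission-class voltage.
def line_loss_mw(power_mw: float, voltage_kv: float, resistance_ohm: float = 10.0) -> float:
    current_ka = power_mw / voltage_kv          # I = P / V   (MW / kV = kA)
    return current_ka ** 2 * resistance_ohm     # loss = I^2 * R   (kA^2 * ohm = MW)

print(round(line_loss_mw(100, 69), 1))     # ~21.0 MW lost at 69 kV
print(round(line_loss_mw(100, 345), 1))    # ~0.8 MW lost at 345 kV
```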

Distribution

Upon arrival at a substation, usually near the users, the power must be stepped down from the transmission-level voltage to a distribution-level voltage. This step is called the distribution phase, and this portion of the grid is called the distribution grid [12].

Consumption

By now, the power has arrived at the service location. Therefore it needs to be stepped down again from the distribution voltage to the required service voltage(s).

Fig. 11.1 shows the different elements or components of the power delivery system in the traditional grid.

Figure 11.1. The different stages of delivering electricity in the conventional power grid [13].

URL: https://www.sciencedirect.com/science/article/pii/B9780128037737000115

Cloud Storage Basics

Caesar Wu, Rajkumar Buyya, in Cloud Data Centers and Cost Modeling, 2015

12.3.3.1 The idea of NAS

The initial idea of network attached storage (NAS) came from file sharing by many users via an interconnected network. During the early evolution of computers, file sharing was not an easy thing, especially when some users turned their computers off. It would be impossible for other people to access the files stored in their computers (see Figure 12.34).

Figure 12.34. Early sharing issues via a network.

This issue led to the idea of having a dedicated computer to host shared files so that every user can access the shared files throughout a file storage network. This was an early form of NAS architecture (see Figure 12.35).

Figure 12.35. Early form of NAS.

As we can see in Figure 12.35, NAS is essentially an IP-based file sharing environment attached to a LAN. Therefore, it needs protocols for users to communicate with file servers. For the UNIX operating system (OS), Sun Microsystems developed the Network File System (NFS) as a protocol to share files; for Windows, the protocol is the Common Internet File System (CIFS). Today, there are many software solutions that allow people operating across both the UNIX and Windows OSes to share files.

When more and more users need to access the file server to share files, improving I/O performance and running different applications for diversified configurations become the priority. As a result, a NAS device, distinct from a general-purpose server, was developed that is dedicated to NAS applications and clients. In contrast to a general-purpose server, it has its own OS, namely a Real-Time Operating System (RTOS), for file serving (see Figure 12.36).

Figure 12.36. General-purpose servers moved to NAS box or device.

This OS adopts an open standard protocol so that many vendors can support it. A NAS device is capable of optimizing the execution of file access and the connection of storage arrays. It has a "many-to-one" configuration that allows the NAS device to serve many clients simultaneously. It can also be configured as "one-to-many," which enables one client to access multiple NAS devices or shared file servers at the same time. In a UNIX environment, the NAS device can leverage NFS as a shared catalog for network users.

What is a system of interconnected networks across the globe?

The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide.

What is a collection of interconnected networks called?

The resulting system of interconnected networks is called an internetwork, or simply an internet.

What is a collection of interconnected computers and other devices that are able to communicate with each other and share hardware and software resources?

A computer network, also referred to as a data network, is a series of interconnected nodes that can transmit, receive and exchange data, voice and video traffic. Examples of nodes in a network include servers or modems. Computer networks commonly help endpoint users share resources and communicate.

Which of these is a collection of interconnected devices?

A network is a collection of computers and devices connected together via communications devices and transmission media.