1. Token Ring is a very old LAN protocol that has largely been replaced by Ethernet. If I remember correctly, Token Ring was developed years ago by IBM, after which the IEEE committee made it a standard (IEEE 802.5). Token Ring networks support speeds of up to 16Mbps and, because only the station holding the token may transmit, they provide a more orderly method of data transfer between nodes than shared Ethernet, which relies on collision detection.
source: http://searchnetworking.techtarget.com/loginMembersOnly/1,289498,sid7_gci1112642,00.html?NextURL=http%3A//searchnetworking.techtarget.com/expert/KnowledgebaseAnswer/0%2C289625%2Csid7_gci1112642%2C00.html?referrer=SEO_MO|www.google.com.ph_ER_1112642
2. The implications of passing logon procedures, user IDs, and passwords openly on the network are a great risk: anyone who can sniff the network can capture these credentials and use them to reach critical information. Appropriate access controls help protect information processed and stored in computer systems. The organisation's system security policy must clearly define what each user or group needs in order to access systems, applications and data, and file-access rights should be configured according to business requirements and the "need to know" principle.
3. The importance of computer networking for resource sharing and data sharing is widely recognized. Local networking for high performance and reliability is inevitable in the future. As the cost of logic and memory decreases, the cost of communications resources becomes increasingly significant, and these resources must be increasingly shared. This sharing heightens the possibility of deadlocks.
- there must be mutually exclusive access to resources in the network.
- there must be a condition where non-sharable resources are held.
- any request for certain files in the network may be granted, but a process cannot request another resource unless it releases the resource that it holds.
- before requesting resources, we first check whether they are available or not, to prevent a hold-and-wait state.
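The last two points above can be sketched in Python. This is a minimal illustration, assuming two hypothetical shared resources guarded by locks (the names `file_lock` and `printer_lock` are made up for the example):

```python
import threading

# Two shared network resources (hypothetical names), each guarded by a lock.
file_lock = threading.Lock()
printer_lock = threading.Lock()

def transfer_job():
    # Avoid hold-and-wait: take the first lock, then *try* the second
    # without blocking; if it is busy, release everything and retry,
    # so we never hold one resource while waiting on another.
    while True:
        file_lock.acquire()
        if printer_lock.acquire(blocking=False):
            break
        file_lock.release()
    try:
        return "job done with both resources"
    finally:
        printer_lock.release()
        file_lock.release()

print(transfer_job())   # job done with both resources
```

Because a process either gets both resources or gets neither, the hold-and-wait condition can never arise between these two locks.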
In an apparatus having a network including successive stages of cross-point switches which collectively interconnect a plurality of nodes external to said network, wherein at least one message is carried between one of the nodes and one of the cross-point switches over a route through said network, a method for preventing routing deadlocks from occurring in the network comprises the steps of: creating a graphical representation of the network; searching for the existence of cycles within the graphical representation; partitioning the graphical representation into a first subgraph and a second subgraph if cycles exist in the graphical representation; searching for the existence of edges directed from the first subgraph to the second subgraph; and removing the edges directed from the first subgraph to the second subgraph. Preferably the step of partitioning the network into a first subgraph and a second subgraph is performed such that the first subgraph and the second subgraph have an equal number of vertices, the number of directed edges from the first subgraph to the second subgraph is minimized so as to minimize the number of routes prohibited, and a set of partition constraints is satisfied. The method is recursively applied to the first subgraph and then the second subgraph, thereby removing all of the deadlock-prone cycles in the network while minimizing the number of routes prohibited due to removed edges.
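The cycle-searching step of the method can be sketched with an ordinary depth-first search over the route graph. The three-switch graphs below are made-up examples, not taken from the patent:

```python
def has_cycle(graph):
    """DFS-based cycle check on a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color.get(w, WHITE) == GRAY:
                return True           # back edge found => cycle
            if color.get(w, WHITE) == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in graph)

# A three-switch route graph with the cycle A -> B -> C -> A:
ring = {"A": ["B"], "B": ["C"], "C": ["A"]}
# The same graph after removing the edge C -> A, as the partitioning step would:
broken = {"A": ["B"], "B": ["C"], "C": []}
print(has_cycle(ring), has_cycle(broken))   # True False
```

Removing the edges directed from one subgraph to the other is what breaks such cycles, at the cost of prohibiting the routes that used those edges.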
The problem of detecting process deadlocks is common to transaction oriented computer systems which allow data sharing. Several good algorithms exist for detecting process deadlocks in a single location facility. However, the deadlock detection problem becomes more complex in a geographically distributed computer network due to the fact that all the information needed to detect a deadlock is not necessarily available in a single node, and communications may lead to synchronization problems in getting an accurate view of the network state.
Two algorithms are then presented for detecting deadlocks in a computer network which allows processes to wait for access to a portion of a database, or for a message from another process. The first algorithm is based on the premise that there is one control node in the network, and this node has primary responsibility for detecting process deadlocks. The second, and recommended, algorithm distributes the responsibility for detecting deadlocks among the nodes in which the involved processes and resources reside. Thus a failure of any single node has limited effect upon the other nodes in the network.
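The first (control-node) algorithm can be sketched as follows. The sites merge their local wait-for edges at the control node, which then looks for a cycle in the merged graph; the site and process names here are hypothetical:

```python
# Each site reports its local wait-for edges (process -> process it waits on)
# to the control node, which merges them and searches for a cycle.
site_a = [("P1", "P2")]   # at site A, P1 waits for P2
site_b = [("P2", "P3")]   # at site B, P2 waits for P3
site_c = [("P3", "P1")]   # at site C, P3 waits for P1 -> global deadlock

def control_node_detect(*site_reports):
    waits_for = {}
    for report in site_reports:
        for p, q in report:
            waits_for.setdefault(p, set()).add(q)

    # Follow wait-for chains; revisiting a process already on the
    # current path means the merged graph contains a cycle.
    def cycle_from(p, path):
        if p in path:
            return True
        return any(cycle_from(q, path | {p}) for q in waits_for.get(p, ()))

    return any(cycle_from(p, set()) for p in waits_for)

print(control_node_detect(site_a, site_b, site_c))   # True
```

This illustrates why the problem is harder in a distributed network: no single site's local graph contains a cycle, so the deadlock is only visible once the reports are combined.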
Routing algorithms used in wormhole-switched networks must all provide a solution to the deadlock problem. If the routing algorithm allows deadlock cycles to form, then it must provide a deadlock recovery mechanism. Because deadlocks are anomalies that occur while routing, the deadlock recovery mechanism should not allocate expensive hardware resources for the sake of handling such a rare event. Rather, it should dedicate only a minimal set of required resources to the recovery process, leaving most of the hardware resources for the task of routing normal packets. This paper proposes a new deadlock recovery mechanism to be used with the True Fully Adaptive Routing algorithm. The new mechanism takes advantage of the concept behind wormhole switching. The scheme is efficient in terms of hardware requirements, causes fewer deadlocks, and can compete with other, more expensive deadlock recovery schemes.
Source: http://www.freepatentsonline.com/6065063.html
http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA047025
http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/6712/17967/00830311.pdf?temp=x
4. I would prefer a switch. A switch (switching hub), in the context of networking, refers to a device which filters and forwards data packets across a network.
Unlike a standard hub which simply replicates what it receives on one port onto all the other ports, a switching hub keeps a record of the MAC addresses of the devices attached to it.
When the switch receives a data packet, it forwards the packet directly to the recipient device by looking up the MAC address.
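The learn-and-forward behaviour described above can be sketched as a small model. The port numbers and MAC addresses below are made up for illustration:

```python
# Minimal model of switch behaviour: learn the source MAC of each frame,
# then forward to the known port, or flood when the destination is unknown.
class Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}            # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # forward directly
        # Unknown destination: flood out every port except the ingress port,
        # which is exactly what a plain hub does for every frame.
        return [p for p in range(self.num_ports) if p != in_port]

sw = Switch(4)
print(sw.receive(0, "aa:aa", "bb:bb"))   # unknown dst -> flood: [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))   # aa:aa learned on port 0 -> [0]
print(sw.receive(0, "aa:aa", "bb:bb"))   # bb:bb now known on port 1 -> [1]
```

Once both addresses are learned, traffic between the two devices uses only their own ports, which is why the other ports keep their full throughput.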
A network switch can give each device the full throughput potential of the network's connection, making it a natural choice over a standard hub.
In other words, say you had a network of 5 PCs and a server, all connected with 10Mbps UTP cable: with a hub the 10Mbps of throughput would be shared between all the devices, whereas with a switch each device could utilise the full 10Mbps connection.
When using a switch instead of a hub, it is commonplace to create a faster connection between the switch and the server (the backbone).
For example, if you had 10 PCs connected to the switch with 10Mbps cable, it would improve performance to use a 100Mbps connection from the switch to the server.
Source: http://www.helpwithpcs.com/jargon/network-switch.htm
Wednesday, September 24, 2008