Mar 12, 2010

Case Study on Networking


Introduction to Networks
 
A computer network is a group of computers that are interconnected by electronic circuits or wireless transmissions of various designs and technologies for the purpose of exchanging data or communicating information between them or their users. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of types and categories and also presents the basic components of a network.

A computer network allows sharing of resources and information among devices connected to the network. The Advanced Research Projects Agency (ARPA) funded the design of the Advanced Research Projects Agency Network (ARPANET) for the United States Department of Defense. Development of the network began in 1969, based on designs developed during the 1960s, and ARPANET became the first operational computer network in the world.

History of Computer Networks

Before the advent of computer networks based upon some type of telecommunications system, communication between calculating machines and early computers was performed by human users who carried instructions between them. Much of the social behavior seen in today's Internet was demonstrably present in nineteenth-century networks, and arguably in even earlier networks using visual signals, as recounted in The Victorian Internet.

In September 1940, George Stibitz used a teletype machine to send instructions for a problem set from Dartmouth College in New Hampshire to his Complex Number Calculator in New York, and received results back by the same means. Linking output systems like teletypes to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Network", a precursor to the ARPANET.

In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at MIT, a research group supported by General Electric and Bell Labs used a DEC computer to route and manage telephone connections.

Throughout the 1960s, Leonard Kleinrock, Paul Baran and Donald Davies independently conceptualized and developed network systems that used datagrams, or packets, that could be routed in a network between computer systems.

In 1965, Thomas Merrill and Lawrence G. Roberts created the first wide area network (WAN). The first widely used PSTN switch that used true computer control was the Western Electric 1ESS, introduced in 1965.

In 1969, the University of California at Los Angeles, the Stanford Research Institute (SRI), the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANET, using 50 kbit/s circuits. Commercial services using X.25 were deployed in 1972 and were later used as an underlying infrastructure for expanding TCP/IP networks.

Computer networks, and the technologies needed to connect and communicate through and between them, continue to drive computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of users of networks from the researcher to the home user.

Today, computer networks are the core of modern communication. All modern aspects of the Public Switched Telephone Network (PSTN) are computer-controlled, and telephony increasingly runs over the Internet Protocol, although not necessarily the public Internet. The scope of communication has increased significantly in the past decade and this boom in communications would not have been possible without the progressively advancing computer network.

Purpose

Facilitating communications. Using a network, people can communicate efficiently and easily via e-mail, instant messaging, chat rooms, telephony, video telephone calls, and videoconferencing.

Sharing hardware. In a networked environment, each computer on a network can access and use hardware on the network. Suppose several personal computers on a network each require the use of a laser printer. If the personal computers and a laser printer are connected to the network, each user can access the laser printer as needed.

Sharing files, data, and information. In a network environment, any authorized user can access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.

Sharing software. Users connected to a network can access application programs on the network.

Network classification

The following list presents categories used for classifying networks.

Connection method

Computer networks can be classified according to the hardware and software technology that is used to interconnect the individual devices in the network, such as optical fiber, Ethernet, Wireless LAN, HomePNA, Power line communication or G.hn.

Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs, switches, bridges and/or routers. Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium. ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
 
Wired technologies

Twisted-pair wire is the most widely used medium for telecommunication. Twisted-pair wires are ordinary telephone wires which consist of two insulated copper wires twisted into pairs and are used for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 100 million bits per second.

Coaxial cable is widely used for cable television systems, office buildings, and other worksites for local area networks. The cables consist of copper or aluminum wire wrapped with an insulating layer, typically of a flexible material with a high dielectric constant, all of which is surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.

Fiber optic cable consists of one or more filaments of glass fiber wrapped in protective layers. It transmits light which can travel over extended distances without signal loss. Fiber-optic cables are not affected by electromagnetic radiation. Transmission speed may reach trillions of bits per second. The transmission speed of fiber optics is hundreds of times faster than for coaxial cables and thousands of times faster than for twisted-pair wire.

Wireless technologies

Terrestrial Microwave – Terrestrial microwave links use Earth-based transmitters and receivers whose equipment looks similar to satellite dishes. They use the low-gigahertz range, which limits all communications to line of sight, with relay stations spaced approximately 30 miles apart. Microwave antennas are usually placed on top of buildings, towers, hills, and mountain peaks.

Communications Satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically 22,000 miles above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.

Cellular and PCS Systems – These use several radio communications technologies. The systems divide the region covered into different geographic areas (cells). Each area has a low-power transmitter or radio relay antenna to relay calls from one area to the next.

Wireless LANs – A wireless local area network uses a high-frequency radio technology similar to digital cellular, as well as low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. An example of open-standards wireless radio-wave technology is IEEE 802.11b.

Bluetooth – A short-range wireless technology that operates at approximately 1 Mbit/s with a range of 10 to 100 meters. Bluetooth is an open wireless protocol for data exchange over short distances.

The Wireless Web – The wireless web refers to the use of the World Wide Web through equipment such as cellular phones, pagers, PDAs, and other portable communications devices. Wireless web services offer anytime/anywhere connectivity.

Scale

Networks are often classified as local area network (LAN), wide area network (WAN), metropolitan area network (MAN), personal area network (PAN), virtual private network (VPN), campus area network (CAN), storage area network (SAN), and others, depending on their scale, scope and purpose. Usage, trust level, and access right often differ between these types of network. For example, LANs tend to be designed for internal use by an organization's internal systems and employees in individual physical locations (such as a building), while WANs may connect physically separate parts of an organization and may include connections to third parties.

Functional relationship (network architecture)

Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., active networking, client-server and peer-to-peer (workgroup) architecture.

Network topology

Computer networks may be classified according to the network topology upon which the network is based, such as bus network, star network, ring network, mesh network, star-bus network, or tree (hierarchical) topology network. Network topology is the arrangement by which devices in the network relate logically to one another, independent of physical arrangement. Even if networked computers are physically placed in a linear arrangement, if they are connected to a hub the network has a star topology rather than a bus topology. In this regard the visual and operational characteristics of a network are distinct. Networks may also be classified based on the method used to convey the data; these include digital and analog networks.
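To make the physical/logical distinction concrete, here is a minimal sketch (my illustration, not from the original article) that models the same four PCs as adjacency lists: wired through a hub they form a logical star, while on a shared backbone they form a bus. All node names are hypothetical.

```python
# Hypothetical adjacency lists: the same four PCs in two logical topologies.
star = {  # every device talks only through the central hub
    "hub": ["pc1", "pc2", "pc3", "pc4"],
    "pc1": ["hub"], "pc2": ["hub"], "pc3": ["hub"], "pc4": ["hub"],
}

bus = {  # every device hangs off one shared backbone segment
    "backbone": ["pc1", "pc2", "pc3", "pc4"],
}

def count_links(topology):
    """Count the distinct links in an adjacency mapping."""
    seen = set()
    for node, neighbours in topology.items():
        for neighbour in neighbours:
            seen.add(frozenset((node, neighbour)))
    return len(seen)

print("pc2" in star["pc1"])           # False: pc1 reaches pc2 only via the hub
print("star links:", count_links(star))  # 4 point-to-point links to the hub
print("bus links:", count_links(bus))    # 4 taps on one shared medium
```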

Types of networks


Common types of computer networks may be identified by their scale.

Personal area network

A personal area network (PAN) is a computer network used for communication among computers and other information devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless connections between devices, and its reach typically extends to about 10 meters. A wired PAN is usually constructed with USB or FireWire, while a wireless PAN typically uses Bluetooth or infrared.

Local area network


A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as home, school, computer laboratory, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Current wired LANs are most likely to be based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing home wires (coaxial cables, phone lines and power lines).

For example, a library may have a wired or wireless LAN for users to interconnect local devices (e.g., printers and servers) and to connect to the Internet. On a wired LAN, PCs in the library are typically connected by category 5 (Cat5) cable running the IEEE 802.3 protocol through a system of interconnected devices that eventually connect to the Internet. The cables to the servers are typically Cat5e enhanced cable, which supports IEEE 802.3 at 1 Gbit/s. A wireless LAN may exist using a different IEEE protocol: 802.11b, 802.11g or possibly 802.11n. The staff computers can reach the color printer, the checkout records, the academic network, and the Internet. All user computers can reach the Internet and the card catalog, and each workgroup can reach its local printer; note that the printers are not accessible from outside their workgroup.

Figure: A typical library network, in a branching tree topology with controlled access to resources.

All interconnected devices must understand the network layer (layer 3), because they handle multiple subnets. The devices inside the library, which have only 10/100 Mbit/s Ethernet connections to the user devices and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they only have Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and to the academic network's access routers.
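The subnet logic described above is easy to demonstrate with Python's standard ipaddress module; the sketch below (addresses invented for illustration) shows why traffic between the user and staff workgroups must pass through a layer-3 device.

```python
import ipaddress

# Two hypothetical workgroup subnets inside the library.
staff_net = ipaddress.ip_network("10.0.1.0/24")
public_net = ipaddress.ip_network("10.0.2.0/24")

staff_pc = ipaddress.ip_address("10.0.1.25")
public_pc = ipaddress.ip_address("10.0.2.40")

# Hosts in the same subnet can be reached with layer-2 switching alone.
print(staff_pc in staff_net)    # True
# A host in a different subnet needs a router ("layer 3 switch") to forward.
print(public_pc in staff_net)   # False
print(public_pc in public_net)  # True
```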

The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and lack of need for leased telecommunication lines. Current Ethernet and other IEEE 802.3 LAN technologies operate at speeds up to 10 Gbit/s, and IEEE has projects investigating the standardization of 40 and 100 Gbit/s Ethernet.

Home area network

A home area network (HAN) or home network is a residential local area network. It is used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a CATV or Digital Subscriber Line (DSL) provider.

Campus area network

A campus area network (CAN) is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. It can be considered one form of a metropolitan area network, specific to an academic setting.

In the case of a university campus-based network, the network is likely to link a variety of campus buildings, including academic departments, the university library and student residence halls. A campus area network is larger than a local area network but, in most cases, smaller than a wide area network (WAN).

The main aim of a campus area network is to facilitate students' access to the Internet and university resources. It connects two or more LANs but is limited to a specific and contiguous geographical area such as a college campus, industrial complex, office building, or military base. A CAN may be considered a type of MAN (metropolitan area network), but is generally limited to a smaller area than a typical MAN; the term is most often used when discussing the implementation of networks for a contiguous area. (It should not be confused with a controller area network.) By contrast, a LAN connects network devices over a relatively short distance: a networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings.

Metropolitan area network

A metropolitan area network (MAN) is a network that connects two or more local area networks or campus area networks together but does not extend beyond the boundaries of the immediate town or city. Routers, switches and hubs are connected to create a metropolitan area network.

Wide area network

A wide area network (WAN) is a computer network that covers a broad geographic area, such as a city or country, or even spans intercontinental distances, using a communications channel that combines many types of media such as telephone lines, cables, and airwaves. A WAN often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

Global area network
 
A global area network (GAN) is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, and so on. The key challenge in mobile communications is handing off the user's communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless local area networks (WLANs).

Virtual private network

A virtual private network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.

A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.

A VPN allows computer users to appear to be operating from an IP address location other than the one which actually connects their computer to the Internet.
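As a toy illustration of the tunneling idea (my sketch, not a real VPN implementation), the original packet with private addresses simply becomes the payload of an outer packet whose addresses belong to the public network; all addresses below are invented documentation-range examples.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: object

def encapsulate(inner: Packet, tunnel_src: str, tunnel_dst: str) -> Packet:
    """Wrap a private packet inside an outer packet for transit."""
    return Packet(src=tunnel_src, dst=tunnel_dst, payload=inner)

def decapsulate(outer: Packet) -> Packet:
    """At the far end of the tunnel, recover the original packet."""
    return outer.payload

private = Packet("192.168.1.5", "192.168.2.9", "confidential data")
in_transit = encapsulate(private, "203.0.113.1", "198.51.100.7")
print(in_transit.dst)               # 198.51.100.7 -- public tunnel endpoint
print(decapsulate(in_transit).dst)  # 192.168.2.9  -- the private address
```

A real VPN would also encrypt and authenticate the inner packet before encapsulating it, which is where the security features mentioned above come in.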

Internetwork

An internetwork is the connection of two or more distinct computer networks via a common routing technology; the result is called an internetwork (often shortened to internet). Two or more networks are connected using devices that operate at the network layer (layer 3) of the OSI Basic Reference Model, such as routers. Any interconnection among or between public, private, commercial, industrial, or governmental networks may also be defined as an internetwork.

Internet

The Internet is a global system of interconnected governmental, academic, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the U.S. Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW). The 'Internet' is most commonly spelled with a capital 'I' as a proper noun, for historical reasons and to distinguish it from other generic internetworks.

Participants in the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite, along with an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

Intranets and extranets

Intranets and extranets are parts or extensions of a computer network, usually a local area network.

An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity. That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information.

An extranet is a network that is limited in scope to a single organization or entity but also has limited connections to the networks of one or more other organizations or entities that are usually, but not necessarily, trusted. For example, a company's customers may be given access to some part of its intranet, creating in this way an extranet, while at the same time the customers may not be considered 'trusted' from a security standpoint.

Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must have at least one connection with an external network.

Basic hardware components

All networks are made up of basic hardware building blocks that interconnect network nodes, such as network interface cards (NICs), bridges, hubs, switches, and routers. In addition, some method of connecting these building blocks is required, usually in the form of galvanic cable (most commonly Category 5 cable). Less common are microwave links (as in IEEE 802.11) or optical cable ("optical fiber"). An Ethernet card may also be required.

Network interface cards

A network card, network adapter, or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It provides physical access to a networking medium and often provides a low-level addressing system through the use of MAC addresses.
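As a quick way to see this low-level addressing in action, the standard-library sketch below reads this machine's 48-bit hardware (MAC) address; note that uuid.getnode() can return a random fallback value on hosts where no NIC address is readable.

```python
import uuid

node = uuid.getnode()  # the NIC's 48-bit hardware address as an integer
mac = ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))
print("This machine's MAC address:", mac)
```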

Repeaters

A repeater is an electronic device that receives a signal, cleans it of unnecessary noise, regenerates it, and retransmits it at a higher power level or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable runs longer than 100 meters.

Hubs

A network hub contains multiple ports. When a packet arrives at one port, it is copied unmodified to all ports of the hub for transmission. The destination address in the frame is not changed to a broadcast address.

Bridges

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast was received.

Bridges learn the association of ports and addresses by examining the source address of the frames they see on various ports. Once a frame arrives through a port, its source address is stored and the bridge assumes that the MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge forwards the frame to all ports other than the one on which the frame arrived.
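The learning behaviour just described is easy to simulate; the sketch below (my illustration, with made-up MAC addresses) records the source address of each frame and floods only when the destination is still unknown, which is exactly what distinguishes a bridge from a hub.

```python
class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # learned mapping: MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port        # learn where the sender lives
        if dst_mac in self.table:
            return {self.table[dst_mac]}     # known: forward to one port only
        return self.ports - {in_port}        # unknown: flood, like a hub

bridge = LearningBridge(ports=[1, 2, 3, 4])
print(bridge.receive(1, "aa:aa", "bb:bb"))  # {2, 3, 4}: bb:bb still unknown
print(bridge.receive(2, "bb:bb", "aa:aa"))  # {1}: aa:aa was learned on port 1
```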

 
Bridges come in three basic types:

Local bridges: Directly connect local area networks (LANs)

Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.

Wireless bridges: Can be used to join LANs or connect remote stations to LANs

Switches

A network switch is a device that forwards and filters OSI layer-2 frames (chunks of data communication) between ports (connected cables) based on the MAC addresses in the frames. A switch is distinct from a hub in that it only forwards frames to the ports involved in the communication, rather than to all connected ports. A switch breaks up the collision domain but represents itself as one broadcast domain. Switches make forwarding decisions on the basis of MAC addresses. A switch normally has numerous ports, facilitating a star topology for devices and the cascading of additional switches. Some switches are capable of routing based on layer 3 addressing or additional logical levels; these are called multi-layer switches. The term switch is used loosely in marketing to encompass devices including routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier).

Routers

A router is a networking device that forwards packets between networks, using information in protocol headers and forwarding tables to determine the next hop for each packet.
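The forwarding-table lookup can be sketched with the standard ipaddress module. The routes and next-hop names below are invented, and real routers use much faster structures (such as tries) for the same longest-prefix-match decision.

```python
import ipaddress

routes = [  # hypothetical forwarding table: (destination prefix, next hop)
    (ipaddress.ip_network("0.0.0.0/0"), "isp-gateway"),    # default route
    (ipaddress.ip_network("10.0.0.0/8"), "core-router"),
    (ipaddress.ip_network("10.1.2.0/24"), "branch-router"),
]

def next_hop(dst: str) -> str:
    """Choose the matching route with the longest (most specific) prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.77"))  # branch-router (the /24 beats the /8)
print(next_hop("10.9.9.9"))   # core-router
print(next_hop("8.8.8.8"))    # isp-gateway (only the default route matches)
```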

Dec 1, 2009

Liability of Defective Software


Liability when defective software causes injury
Increasingly, software is used in situations where failure may result in death or injury.
In these situations the software is often described as safety critical software. Where such software is used and where an accident occurs it is proper that the law should intervene in an attempt to afford some form of redress to the injured party or the relatives of a deceased person. Safety critical software is used in specialized situations such as flight control in the aviation industry and by the medical profession in carrying out diagnostic tasks.
Nowadays software has an impact on the average citizen's life, whether by choice or otherwise. For most individuals, however, as the plane leaves the airport, typical concerns usually centre on the exchange rate and not on the computer software controlling the flight. These concerns of course change when the plane falls from the sky without explanation. What can the individual do when faced with such occurrences? In such a dramatic scenario there is unlikely to be a contractual relationship between the individual affected by the defective software and the software developer. In this article I shall attempt to examine how liability may accordingly arise.
The legal concept of liability
The legal concept of liability has traditionally included as a base element the concept of culpa, or fault. Humans are marvelous at attributing blame in any given situation; the converse of this phenomenon is that they are equally good at passing the buck. When things go wrong and a computer is involved, more often than not the initial response is to blame the computer. Whilst solving the puzzle following a calamity is never straightforward, the first line of attack is often the technology used in the situation that has gone wrong.
An example of this pattern of behavior can be seen following the introduction of computerized stock-index arbitrage in the New York financial markets back in 1987. On 23rd January 1987 the Dow Jones Industrial Average rose 64 points, only to fall 114 points in a period of 70 minutes, causing widespread panic. Black Monday, as it became known, was indeed a black day for many investors, large and small alike, who sustained heavy financial losses. The response of the authorities in the face of the crisis was to suspend computerized trading immediately.
In considering this event, Stevens argues that all computerized program trading did was increase market efficiency and, perhaps more significantly, get the market to where it was going faster, without necessarily determining its direction. However, the decision to suspend computerized trading was taken without a full investigation of all the relevant facts.
“Every disaster needs a villain. In the securities markets of 1987, program trading played that role. Computerized stock-indexed arbitrage has been singled out as the source of a number of market ills” 1
Of course, in the situation outlined above the losses incurred would be economic in nature, which is not to say that such losses do not have real and human consequences for those who suffer them; but, as now appears to be the case in both Britain and America, there can be no recovery where the losses are purely economic unless there has been reliance in accordance with the Hedley Byrne principle2.
Turning from the purely financial implications of software failure, other failures have rightly generated considerable public concern. In particular, the report of the inquiry into the London Ambulance Service3 highlighted the human consequences when software failed to perform as it was expected to.
The situation becomes all the more problematic when it is remembered that nobody expects software to work the first time. Software by its very nature is extremely complex, consisting as it does of line upon line of code. It might be thought that the simple solution to this problem would be to check all software thoroughly. That of course begs the question as to what actually constitutes a thorough check. Even where software developers check each line of code or test the validity of every statement in the code, the reality is that such testing will not ensure that the code is error free. Kaner4 has identified at least 110 tests that could be carried out in respect of a piece of software, none of which would necessarily guarantee that the software would be error free. Indeed, the courts in England have explicitly accepted that there is no such thing as error-free software.
Furthermore, the hardware on which software runs can also be temperamental. It can be affected by temperature changes or power fluctuations, or failure could occur simply due to wear and tear. Any number of other factors can affect software, such as incompatibility with hardware, and when all the factors are taken together one could easily justify the assertion that establishing the precise cause of software failure is no easy task. This is significant given the necessity for the person claiming loss to prove the actual cause of that loss.
Principle, Pragmatism or Reliance
In situations where an individual is killed or injured as a consequence of the failure of a piece of software there is no reason why, in principle, recovery of damages should not be possible. It would however be foolhardy for the individual to assume that recovering damages is by any means straightforward.
In order for an individual to establish a case against a software developer at common law it would be necessary to show that the person making the claim was owed a duty of care by the software developer, that there was a breach of that duty, that the loss sustained was a direct result of the breach of duty and that the loss was of a kind for which recovery of damages would be allowed. In determining whether or not a duty of care exists between parties the starting point has traditionally been to consider the factual relationship between the parties and whether or not that relationship gives rise to a duty of care.
The neighbor principle as espoused by Lord Atkin in the seminal case of Donoghue v Stevenson6 requires the individual, in this case the software developer, to have in his contemplation those persons who may be affected by his acts or omissions. By way of example, it is obvious to assume that the developer of a program used to operate a diagnostic tool or therapeutic device is aware that the ultimate consumer (for lack of a better word) will be a member of the public, although the identity of that person may be unknown to the software developer.
The case of the ill-fated Therac–25, a machine controlled by computer software used to provide radiation therapy for cancer patients, highlights the problem. Prior to the development of radiotherapy treatment, radical invasive surgery was the only means of treating various cancers. Not only was this extremely traumatic for patients but often it was unsuccessful.
With the development of radiotherapy treatment the requirement for surgery has been greatly reduced. However, between 1985 and 1987 six patients were seriously injured or killed as a result of receiving excessive radiation doses attributable to the Therac-25 and its defective software. Commenting on the tragedy, Liversedge stated that:
“Although technology has progressed to the point where many tasks may be handled by our silicon-based friends, too much faith in the infallibility of software will always result in disaster.”
In considering the question of to whom a duty of care is owed, the law will have to develop with a degree of flexibility as new problems emerge. The question for the courts will be whether this is done on an incremental basis or by the application of principles. If the former approach is adopted, a further question for consideration is whether or not it is just and reasonable to impose a duty where none has existed before. However, in my view the absence of any direct precedent should not prevent the recovery of damages where there has been negligence. I do, however, acknowledge that the theoretical problems that can arise are far from straightforward.
The broad approach adumbrated in Donoghue is, according to Rowland8, appropriate in cases where there is a direct link between the damage and the negligently designed software, such as might cause intensive care equipment to fail. However, she argues that in other cases the manner in which damage results cannot provide the test for establishing a duty of care. She cites as an example the situation where passengers board a train with varying degrees of knowledge as to whether the signaling system is Y2K compliant. Whilst she does not directly answer the questions she poses, the problems highlighted are interesting when one looks at the extremes. For instance, should it make any difference to the outcome of claims by passengers injured in a train accident that one passenger travelled only on the basis that the computer-controlled signaling system was certified as safe, while the other passenger did not apply his mind to the question of safety at all? It is tempting to assume that liability would arise in the former scenario on the basis of reliance, but that then begs the question of whether liability arises in the latter scenario at all. If, as she implies, reliance is the key to establishing liability then it would not, as there has been no reliance. That result in the foregoing scenario would be harsh indeed, avoiding as it does the issue of the failure of the developer to produce a signaling system that functioned properly.
More often than not an individual may have little appreciation that his environment is being controlled by computer software. It could be argued that because of the specialist knowledge on the part of the computer programmer, it follows that he or she assumes responsibility for the individual ultimately affected by the software. The idea that reliance could give rise to a duty of care first came to prominence in the Hedley Byrne case. The basis of the concept is that a special relationship exists with someone providing expert information or an expert service, thereby creating a duty of care. In the context of computer programming the concept, while superficially attractive, ignores the artificiality of such a proposition, given that it is highly unlikely that the individual receiving, for example, radiotherapy treatment will have any idea of the role software plays in the treatment process. Furthermore, in attempting to establish a duty of care based on reliance, the House of Lords has been at pains to stress that the assumption of responsibility to undertake a specific task is not of itself evidence of the existence of a duty of care to a particular class of persons.9
Standard of Care
It might come as a shock to some, and no great surprise to others, that there is no accepted standard as to what constitutes good practice amongst software developers. That is not to say that there are no codes of practice and other guidelines, merely that no one code prevails over the others. The legal consequences of this situation can be illustrated by the following example. Two software houses given the task of producing a program for the same application do so, but the code produced by each house is different. One application fails while the other runs as expected. It is tempting to assume that the failed application was negligently designed simply because it did not work. However, such an assumption is not merited without further inquiry. In order to establish that the program was produced negligently it would be necessary to demonstrate that no reasonable man would have produced such a program. In the absence of a universal standard, proving such a failing could be something of a tall order.
An increasing emphasis on standards is of considerable importance, given that it is by this route that an assessment of whether or not a design is reasonable will become possible. Making such an assessment should not be an arbitrary judgment but one based on the objective application of principles to established facts. At present, in the absence of a uniform approach, one is faced with the specter of competing experts endeavoring to justify their preferred approaches. In dealing with what can be an inexact science, the problem that could emerge is that it may prove difficult to distinguish between experts who hold differing opinions, and the courts in both England and Scotland have made it clear that it is wrong to side with one expert purely on the basis of preference alone.
In America standards have been introduced for accrediting educational programs in computer science technology. The Computer Science Accreditation Commission (CSAC) established by the Computer Science Accreditation Board (CSAB) oversees these standards. Although such a move towards standardization has positive benefits, not least that such standards should reflect best practice, it would be naïve to assume that this will make litigation any easier. Perhaps only in those cases where it was clear that no regard whatsoever had been paid to any of the existing standards would there be a chance of establishing liability.  
It should also be borne in mind that in the age of the Internet software will undoubtedly travel and may be produced subject to different standards in many jurisdictions. In determining what standards could be regarded as the best standards the courts could be faced with a multiplicity of choices. That of course is good news for the expert and as one case suggests some of them are more than happy to participate in costly bun fights.
Causation
Even where it is possible to establish a duty of care and a breach of that duty the individual may not be home and dry. It is necessary to show that the damage sustained was actually caused by the breach of duty. That is not as straightforward as it might sound when one pauses to consider the complexities of any computer system.
The complexity of a computer system makes it necessary for an expert to be instructed to confirm that the computer program complained of was the source of the defect giving rise to the damage. A computer program may be incompatible with a particular operating system and therefore fail to work as expected. In these circumstances it would be difficult to establish liability on that basis alone, unless the programmer had given an assurance that compatibility would not be an issue. If ISO 9127, one of the main standards, were to become the accepted standard, then of course the computer programmer would be bound to provide information as to the appropriate uses of a particular program. In that event it may be easier to establish liability as a result of a failure to give appropriate advice, with the curious consequence that the question of whether or not the program was itself defective in design would be relegated to one of secondary importance.
A more difficult question arises in relation to the use of machines in areas such as medical electronics. Returning by way of example to the case of the ill-fated Therac-25: while it is clear that the machine caused harm in those cases where there were fatalities, it would be difficult to maintain that the machine caused death, as it is highly probable that the cancers, if left untreated, would have led to death in any event. Equally, where an ambulance was late and a young girl died from a severe asthma attack, it could not be said that the cause of death was the failure in the computer-controlled telephone system, even though if the system had worked the chances of survival would have been greatly increased.
Let the Developer Defend
As can be seen from the above, establishing liability at common law in the context of defectively designed software is no mean feat. With the passing of the Consumer Protection Act 1987, following the EC Directive (85/374/EEC), the concept of product liability has been part of UK law for over a decade. The effect of the Directive and the Act is to create liability without fault on the part of the producer of a defective product that causes death, personal injury, or any loss or damage to property, including land. Part of the rationale of the Directive is that, as between the consumer and the producer, it is the latter that is better able to bear the costs of accidents, as opposed to the individuals affected by the software. Both the Directive and the Act provide defenses to an allegation that a product is defective, so liability is not absolute. However, given that under the Directive and the Act an individual does not have to prove fault on the part of the producer, the onus of proof shifts from the consumer to the producer, requiring the producer to make out one of the available defenses.
In relation to computer programs and the application of the Directive and the Act, the immediate and much-debated question that arises is whether or not computer technology can be categorized as a product. Undoubtedly hardware will be covered by the Directive, providing a modicum of comfort to those working in close proximity to ‘killer robots’.
The difficulty arises in relation to software. The arguments against software being classified as a product are essentially threefold. Firstly, software is not moveable and therefore is not a product. Secondly, software is information as opposed to a product, although some obiter comments on the question of the status of software suggest that information forms an integral part of a product. Thirdly, software development is a service, and consequently the legislation does not apply.
Against that it can be argued that software should be treated like electricity, which itself is specifically covered by the Directive in Article 2 and the Act in Section 1(2), and that software is essentially compiled from energy that is material in the scientific sense. Ultimately it could be argued that placing an over legalistic definition on the word “product” ignores the reality that we now live in an information society where for social and economic purposes information is treated as a product and that the law should also recognize this. Furthermore, following the St Albans case it could be argued that the trend is now firmly towards categorizing software as a product and indeed the European Commission has expressed the view that software should in fact be categorized as a product.
Conclusion
How the courts deal with some of the problems highlighted above remains to be seen, as, at least within the UK, there has been little litigation in this area. If, as Rowland suggests, pockets of liability emerge covering specific areas of human activity, such as the computer industry, it is likely that this will only happen over a long period of time. Equally, relying on general principles, which has to a certain extent become unfashionable, gives no greater guarantee that the law will become settled more quickly.
Parliament could intervene to afford consumers greater rights and to clarify once and for all the status of software. However, it should be borne in mind that any potential expansion of liability on the part of producers of software may have adverse consequences in respect of insurance coverage, and may make obtaining comprehensive liability coverage more difficult. For smaller companies, obtaining such coverage may not be an option, forcing them out of the market. Whatever the future holds in this brave new world, perhaps the only thing that can be said with any certainty is that it will undoubtedly be exciting.

Nov 19, 2009

Case Study on Wireless Network

A wireless network means computers connected to each other without wires. A wireless network allows you to connect your computer to a network using radio waves instead of wires. As long as you are within range of a wireless access point, you can move your computer from place to place while maintaining access to networked resources; this makes networking extremely portable. Unlike its predecessor, wired Ethernet, wireless networking uses the air as the medium to transport data. As long as you have a wireless network card for your laptop and configure the laptop correctly, you are free to roam about the network with much the same functionality as conventional Ethernet. There are different types of wireless networks, such as WLAN, WAN and PAN.


Different Types of Wireless Network
Although we use the term wireless network loosely, there are in fact three different types of network:
  • wireless wide area networks (WANs), which the cellular carriers create;
  • wireless local area networks (WLANs), which you create; and
  • personal area networks (PANs), which create themselves.
Wireless Wide Area Networks 
Wireless wide area networks include the networks provided by cell phone carriers such as Grameenphone and CityCell, and the private networks of organizations such as HSBC. Unlike a WLAN, a WAN has virtually no limitation when it comes to distance or coverage. WANs are built by cell phone carriers, or by combining several WLANs together, as banks do.

WANs are present everywhere that a cellular network is available.


Wireless Local Area Networks 
Wireless LANs are networks set up to provide wireless connectivity within a finite coverage area. Typical coverage areas might be a hospital (for patient care systems), a university, an airport, or a gas plant. They usually have a well-known audience in mind, for example health care providers, students, or field maintenance staff. You would use a WLAN when a high data-transfer rate is the most important aspect of your solution and reach is restricted. For example, in a hospital setting, you would require a high data rate to send a patient's X-rays wirelessly to a doctor, provided he is on the hospital premises.


Wireless LANs work in an unlicensed part of the spectrum, so anyone can create their own wireless LAN, say in their home.

You have complete control over where coverage is provided. You can even reach a shared printer, scanner or external hard disk from your cell phone or PDA while the PCs are switched off.
In addition to creating your own private WLAN, some organizations such as Starbucks and A&W provide high-speed WLAN Internet access to the public at certain locations. These locations are called hotspots, and for a price you can browse the Internet at speeds about 20 times greater than you could get over your cell phone connection.
Wireless LANs have their own share of terminology, including:
  • 802.11 - the network technology used in wireless LANs. In fact, it is a family of technologies such as 802.11a, 802.11b, etc., differing in speed and other attributes.
  • Wi-Fi - a common name for the early 802.11b standard.

Different types of Wireless LAN technologies

Currently there are three options: 802.11b, 802.11a, and 802.11g. The table below compares these technologies.
802.11b
Speed: up to 11 megabits per second (Mbps)
Pros:
  • Costs the least.
  • Has the best signal range.
Cons:
  • Has the slowest transmission speed.
  • Allows for fewer simultaneous users.
  • Uses the 2.4 gigahertz (GHz) frequency (the same as many microwave ovens, cordless phones, and other appliances), which can cause interference.

802.11a
Speed: up to 54 Mbps
Pros:
  • Has the fastest transmission speed.
  • Allows for more simultaneous users.
  • Uses the 5 GHz frequency, which limits interference from other devices.
Cons:
  • Costs the most.
  • Has a shorter signal range, which is more easily obstructed by walls and other obstacles.
  • Is not compatible with 802.11b network adapters, routers, and access points.

802.11g
Speed: up to 54 Mbps
Pros:
  • Has a transmission speed comparable to 802.11a under optimal conditions.
  • Allows for more simultaneous users.
  • Has the best signal range and is not easily obstructed.
  • Is compatible with 802.11b network adapters, routers, and access points.
Cons:
  • Uses the 2.4 GHz frequency, so it has the same interference problems as 802.11b.
  • Costs more than 802.11b.


If you have more than one wireless network adapter in your computer, or if your adapter uses more than one standard, you can specify which adapter or standard to use for each network connection. For example, if you have a computer that you use for streaming media, such as videos or music, to other computers on your network, you should set it up to use an 802.11a connection if available, because you will get a faster data transfer rate when you watch videos or listen to music.


Wireless Personal Area Networks 
These are networks that provide wireless connectivity over distances of up to 10m or so. At first this seems ridiculously small, but this range allows a computer to be connected wirelessly to a nearby printer, or a cell phone's hands-free headset to be connected wirelessly to the cell phone. The most talked about (and most hyped) technology is called Bluetooth. 


Personal area networks are a bit different from WANs and WLANs in one important respect. In the WAN and WLAN cases, networks are set up first, and devices then use them. In the personal area network case, there is no independent pre-existing network: the participating devices establish an ad-hoc network when they are within range, and the network is dissolved when the devices pass out of range. If you have ever used infrared (IR) to exchange data between laptops, you have done something similar. This idea of wireless devices discovering each other is a very important one, and it appears in many guises in the evolving wireless world.


PAN technologies add value to other wireless technologies, although they wouldn't be the primary driver for a wireless business solution. For example, a wireless LAN in a hospital may allow a doctor to see a patient's chart on a handheld device. If the doctor's handheld was also Bluetooth enabled, he could walk to within range of the nearest Bluetooth enabled printer and print the chart.


Recommendations
Now that we know about wireless technology, its benefits, and its uses in different fields, as IT & network manager I would like to recommend implementing a wireless network in our organization, as it will be beneficial to the organization. As we all know, a wireless LAN has the same function as a LAN but on more satisfying ground. It will be very pleasing for our staff and students: they will be able to use the printer and the Internet on their laptops, cell phones, and PDAs anywhere on the campus.


Implementation
It is not as complicated as it looks compared to a wired LAN. All we need is an access point, through which everyone accesses the Internet, printer, scanner, databank, and so on, and a wireless network card for each PC; in laptops, cell phones, and PDAs it is built in.

All we need to do is configure this access point and the network is created. Whenever a PC is on, it will automatically detect the access point and the network is running; it really is that simple. We can also use this with Wi-Fi enabled projectors for presentations or lecturing. In addition, each PC can create its own network; if the PC is connected to the Internet, it can even share its Internet connection and other files on the network.

Managing and maintaining the Security of WLAN
Security need not be a major issue in a wireless network, because we can create different networks on the access point. For example, we can create three different networks on the access point: teacher, student, and guest. The teacher and student networks will be password protected, with some extra privileges on the teacher network, where staff will be able to use the network printer, scanner, and databank. The guest network will not be password protected, so that everyone on the campus can use the Internet.
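One way to picture the three-network idea is the sketch below, which prints a hostapd-style multi-SSID configuration from a Python description. The SSIDs and passphrases are made up, and the assumption that our access point supports multiple virtual networks (BSSes) would need to be checked against the actual hardware; treat this as an outline, not a tested configuration.

```python
# Hypothetical campus networks: teacher and student SSIDs are protected,
# the guest SSID is left open, mirroring the plan described above.
networks = [
    {"ssid": "Campus-Teachers", "passphrase": "teachers-secret"},
    {"ssid": "Campus-Students", "passphrase": "students-secret"},
    {"ssid": "Campus-Guest",    "passphrase": None},  # open guest network
]

def render(networks):
    """Emit a hostapd-style config with one BSS per campus network."""
    lines = ["interface=wlan0", "hw_mode=g", "channel=6"]
    for i, net in enumerate(networks):
        if i > 0:
            lines.append(f"bss=wlan0_{i}")  # additional virtual access point
        lines.append(f"ssid={net['ssid']}")
        if net["passphrase"]:               # WPA2-protect the private SSIDs
            lines += ["wpa=2", "wpa_key_mgmt=WPA-PSK",
                      f"wpa_passphrase={net['passphrase']}"]
    return "\n".join(lines)

print(render(networks))
```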

We can also use a WLAN manager to control our network security and to:
  • Identify rogue wireless devices.
  • Know who is using our WLAN.
  • Know what access points are connected to our WLAN.
  • Monitor our WLAN devices.
  • Monitor access point bandwidth utilization.
  • Configure our WLAN access points.
  • Enhance and enforce wireless LAN security.
  • Proactively manage network problems before they impact the network.
  • Identify network bottlenecks, reduce downtime, and improve network health and performance.
  • Troubleshoot network problems.
  • Capture and decode wireless traffic for testing and troubleshooting.
  • Upgrade firmware, schedule upgrades, and audit them.
  • Enforce WLAN policy.
  • Restrict websites.
  • Allocate bandwidth for each network or user on the network.
  • View detailed reports of IP activity on the network.


Advantages
  • 24-hour access to the Internet, even if a server PC is off.
  • No tangle of wires everywhere in the floor or ceiling.
  • It is easier to add or move workstations.
  • It is easier to provide connectivity in areas where it is difficult to lay cable.
  • Installation is fast and easy, and it can eliminate the need to pull cable through walls and ceilings.
  • Access to the network can be from anywhere within range of an access point.
  • Portable or semi-permanent buildings can be connected using a WLAN.
  • Although the initial investment required for WLAN hardware can be similar to the cost of wired LAN hardware, installation expenses can be significantly lower.
  • When a facility is located on more than one site (such as on two sides of a road), a directional antenna can be used to avoid digging trenches under roads to connect the sites.
  • In historic buildings where traditional cabling would compromise the façade, a WLAN can avoid the need to drill holes in walls.
  • Long-term cost benefits can be found in dynamic environments requiring frequent moves and changes.