Mar 12, 2010

CASE STUDY ON NETWORKING.

Introduction to Networks
 
A computer network is a group of computers that are interconnected by electronic circuits or wireless transmissions of various designs and technologies for the purpose of exchanging data or communicating information between them or their users. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of types and categories and also presents the basic components of a network.

A computer network allows sharing of resources and information among devices connected to the network. The Advanced Research Projects Agency (ARPA) funded the design of the Advanced Research Projects Agency Network (ARPANET) for the United States Department of Defense. It was the first operational computer network in the world. Development of the network began in 1969, based on designs developed during the 1960s. For a history see ARPANET, the first network.

History of Computer Networks

Before the advent of computer networks based on some type of telecommunications system, communication between calculating machines and early computers was carried out by human users who physically carried instructions between them. Much of the social behavior seen on today's Internet was demonstrably present in nineteenth-century networks, and arguably in even earlier networks using visual signals (see The Victorian Internet).

In September 1940 George Stibitz used a teletype machine to send instructions for a problem set from Dartmouth College in New Hampshire to his Complex Number Calculator in New York and received the results back by the same means. Linking output systems like teletypes to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Network", a precursor to the ARPANET.

In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at MIT, a research group supported by General Electric and Bell Labs used a DEC computer to route and manage telephone connections.

Throughout the 1960s, Leonard Kleinrock, Paul Baran and Donald Davies independently conceptualized and developed network systems which used datagrams or packets to carry information between computer systems.

In 1965 Thomas Merrill and Lawrence G. Roberts created the first wide area network (WAN). The first widely used PSTN switch with true computer control was introduced by Western Electric in 1965.

In 1969 the University of California at Los Angeles, the Stanford Research Institute (SRI), the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANET, using 50 kbit/s circuits. Commercial services using X.25 were deployed in 1972 and later used as an underlying infrastructure for expanding TCP/IP networks.

Computer networks, and the technologies needed to connect and communicate through and between them, continue to drive computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of users of networks from the researcher to the home user.

Today, computer networks are the core of modern communication. All modern aspects of the Public Switched Telephone Network (PSTN) are computer-controlled, and telephony increasingly runs over the Internet Protocol, although not necessarily the public Internet. The scope of communication has increased significantly in the past decade and this boom in communications would not have been possible without the progressively advancing computer network.

Purpose

Facilitating communications. Using a network, people can communicate efficiently and easily via e-mail, instant messaging, chat rooms, telephony, video telephone calls, and videoconferencing.

Sharing hardware. In a networked environment, each computer on a network can access and use hardware on the network. Suppose several personal computers on a network each require the use of a laser printer. If the personal computers and a laser printer are connected to a network, each user can then access the laser printer on the network, as they need it.

Sharing files, data, and information. In a network environment, any authorized user can access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.

Sharing software. Users connected to a network can access application programs on the network.

Network classification

The following list presents categories used for classifying networks.

Connection method

Computer networks can be classified according to the hardware and software technology that is used to interconnect the individual devices in the network, such as optical fiber, Ethernet, Wireless LAN, HomePNA, Power line communication or G.hn.

Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs, switches, bridges and/or routers. Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium. ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
 
Wired technologies

Twisted-pair wire is the most widely used medium for telecommunication. Twisted-pair wires are ordinary telephone wires which consist of two insulated copper wires twisted into pairs and are used for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 100 million bits per second.

Coaxial cable is widely used for cable television systems, office buildings, and other worksites for local area networks. The cables consist of copper or aluminum wire wrapped in an insulating layer, typically of a flexible material with a high dielectric constant, all of which is surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.

Fiber optic cable consists of one or more filaments of glass fiber wrapped in protective layers. It transmits light which can travel over extended distances without signal loss. Fiber-optic cables are not affected by electromagnetic radiation. Transmission speed may reach trillions of bits per second. The transmission speed of fiber optics is hundreds of times faster than for coaxial cables and thousands of times faster than for twisted-pair wire.
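
To put the quoted speeds in perspective, here is a small illustrative Python sketch. It simply uses the round figures mentioned in the paragraphs above and an arbitrary 700 MB file size, so the numbers are only order-of-magnitude estimates, not measurements.

    # Rough transfer-time comparison using the round figures quoted above.
    media_speeds_bps = {
        "twisted pair (100 Mbit/s)": 100e6,
        "coaxial cable (500 Mbit/s)": 500e6,
        "optical fiber (1 Tbit/s)": 1e12,
    }

    file_size_bits = 700 * 10**6 * 8   # an arbitrary 700 MB file (decimal megabytes)

    for medium, bps in media_speeds_bps.items():
        seconds = file_size_bits / bps
        print(f"{medium}: about {seconds:.3f} s")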

Wireless technologies

Terrestrial Microwave – Terrestrial microwave links use Earth-based transmitters and receivers. The equipment looks similar to satellite dishes. Terrestrial microwave uses the low-gigahertz range, which limits all communications to line of sight, so relay stations are spaced approximately 30 miles apart. Microwave antennas are usually placed on top of buildings, towers, hills, and mountain peaks.

Communications Satellites – The satellites use microwave radio as their telecommunications medium, which is not deflected by the Earth's atmosphere. The satellites are stationed in space, typically 22,000 miles above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.

Cellular and PCS Systems – These use several radio communications technologies. The systems are divided into different geographic areas, each served by a low-power transmitter or radio relay antenna that relays calls from one area to the next.

Wireless LANs – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. An example of an open-standards wireless radio-wave technology is IEEE 802.11b.

Bluetooth – A short-range wireless technology. Bluetooth is an open wireless protocol for data exchange over short distances; it operates at approximately 1 Mbit/s with a range of 10 to 100 meters.

The Wireless Web – The wireless web refers to the use of the World Wide Web through devices such as cellular phones, pagers, PDAs, and other portable communications equipment. Wireless web services offer anytime/anywhere connectivity.

Scale

Networks are often classified as local area network (LAN), wide area network (WAN), metropolitan area network (MAN), personal area network (PAN), virtual private network (VPN), campus area network (CAN), storage area network (SAN), and others, depending on their scale, scope and purpose. Usage, trust level, and access right often differ between these types of network. For example, LANs tend to be designed for internal use by an organization's internal systems and employees in individual physical locations (such as a building), while WANs may connect physically separate parts of an organization and may include connections to third parties.

Functional relationship (network architecture)

Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., active networking, client-server and peer-to-peer (workgroup) architecture.

Network topology

Computer networks may be classified according to the network topology upon which the network is based, such as bus network, star network, ring network, mesh network, star-bus network, or tree (hierarchical) topology network. Network topology is the arrangement of the devices in the network in their logical relations to one another, independent of physical placement. Even if networked computers are physically placed in a linear arrangement, if they are connected to a hub the network has a star topology rather than a bus topology. In this regard the visual and operational characteristics of a network are distinct. Networks may also be classified according to the method used to convey the data, for example as digital or analog networks.

Types of networks


Common types of computer networks may be identified by their scale.

Personal area network

A personal area network (PAN) is a computer network used for communication among computers and different information technology devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless connections between devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB or FireWire, while a wireless PAN typically uses Bluetooth or infrared.

Local area network


A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as home, school, computer laboratory, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Current wired LANs are most likely to be based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing home wires (coaxial cables, phone lines and power lines).
For example, a library may have a wired or wireless LAN for users to interconnect local devices (e.g., printers and servers) and to connect to the Internet. On a wired LAN, PCs in the library are typically connected by category 5 (Cat5) cable, running the IEEE 802.3 protocol through a system of interconnected devices that eventually connects to the Internet. The cables to the servers are typically Cat5e enhanced cable, which supports IEEE 802.3 at 1 Gbit/s. A wireless LAN may exist using a different IEEE protocol: 802.11b, 802.11g or possibly 802.11n. The staff computers can reach the color printer, the checkout records, the academic network, and the Internet. All user computers can reach the Internet and the card catalog. Each workgroup can reach its local printer; note that the printers are not accessible from outside their workgroup.

[Figure: Typical library network, in a branching tree topology with controlled access to resources]

All interconnected devices must understand the network layer (layer 3), because they handle multiple subnets. The devices inside the library, which have only 10/100 Mbit/s Ethernet connections to the user devices and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they have only Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and to the academic networks' customer access routers.
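
The role of layer 3 here can be made concrete with a small sketch using Python's standard ipaddress module. The subnets and host addresses below are hypothetical, standing in for the separate staff and user workgroups described above: hosts on the same subnet can exchange Ethernet frames directly, while traffic between subnets must pass through a router or layer-3 switch.

    import ipaddress

    # Hypothetical subnets standing in for the staff and user workgroups.
    staff_net = ipaddress.ip_network("10.10.1.0/24")
    user_net = ipaddress.ip_network("10.10.2.0/24")

    staff_pc = ipaddress.ip_address("10.10.1.20")
    user_pc = ipaddress.ip_address("10.10.2.31")
    printer = ipaddress.ip_address("10.10.1.50")

    def same_subnet(a, b, networks):
        """True if both addresses fall inside the same one of the given networks."""
        return any(a in net and b in net for net in networks)

    nets = [staff_net, user_net]
    print(same_subnet(staff_pc, printer, nets))  # True  -> layer-2 delivery is enough
    print(same_subnet(staff_pc, user_pc, nets))  # False -> traffic must cross a router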

The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and lack of need for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates of up to 10 Gbit/s, and the IEEE has projects investigating the standardization of 40 and 100 Gbit/s.

Home area network

A home area network (HAN) or home network is a residential local area network. It is used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a CATV or Digital Subscriber Line (DSL) provider.

Campus area network

A campus area network (CAN) is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. It can be considered one form of a metropolitan area network, specific to an academic setting.

In the case of a university campus-based campus area network, the network is likely to link a variety of campus buildings, including academic departments, the university library and student residence halls. A campus area network is larger than a local area network but, in most cases, smaller than a wide area network (WAN).
The main aim of a campus area network is to facilitate students' access to the Internet and university resources. This is a network that connects two or more LANs but is limited to a specific and contiguous geographical area such as a college campus, industrial complex, office building, or military base. A CAN may be considered a type of MAN (metropolitan area network), but is generally limited to a smaller area than a typical MAN. The term is most often used to discuss the implementation of networks for a contiguous area, and it should not be confused with a Controller Area Network. A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings.

Metropolitan area network

A metropolitan area network (MAN) is a network that connects two or more local area networks or campus area networks together but does not extend beyond the boundaries of the immediate town/city. Routers, switches and hubs are connected to create a metropolitan area network.
Wide area network

A wide area network (WAN) is a computer network that covers a large geographic area such as a city, country, or spans even intercontinental distances, using a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.
Global area network
 
A global area network (GAN) is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless local area networks (WLANs).
Virtual private network

A virtual private network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.

A VPN allows computer users to appear to be connecting from an IP address location other than the one which actually connects their computer to the Internet.

Internetwork

An internetwork is the connection of two or more distinct computer networks via a common routing technology. The result is called an internetwork (often shortened to internet). Two or more networks are connected using devices that operate at the Network Layer (Layer 3) of the OSI Basic Reference Model, such as routers. Any interconnection among or between public, private, commercial, industrial, or governmental networks may also be defined as an internetwork.

Internet

The Internet is a global system of interconnected governmental, academic, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the U.S. Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW). The 'Internet' is most commonly spelled with a capital 'I' as a proper noun, for historical reasons and to distinguish it from other generic internetworks.

Participants in the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite and an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

Intranets and extranets

Intranets and extranets are parts or extensions of a computer network, usually a local area network.

An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity. That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information.

An extranet is a network that is limited in scope to a single organization or entity and that also has limited connections to the networks of one or more other organizations or entities, which are usually, but not necessarily, trusted (e.g., a company's customers may be given access to some part of its intranet, thereby creating an extranet, while at the same time the customers may not be considered 'trusted' from a security standpoint).

Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must have at least one connection with an external network.

Basic hardware components

All networks are made up of basic hardware building blocks that interconnect network nodes, such as network interface cards (NICs), bridges, hubs, switches, and routers. In addition, some method of connecting these building blocks is required, usually in the form of galvanic cable (most commonly Category 5 cable). Less common are wireless links (as in IEEE 802.11) or optical cable ("optical fiber"). An Ethernet card may also be required.

Network interface cards

A network card, network adapter, or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It provides physical access to a networking medium and often provides a low-level addressing system through the use of MAC addresses.
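
As a small aside on the MAC addressing mentioned above, a MAC address is a 48-bit identifier conventionally written as six hexadecimal octets: the first three octets form the organizationally unique identifier (OUI) assigned to the manufacturer, and the last three are chosen by that manufacturer. The address in this Python sketch is made up purely for illustration.

    # Split a made-up MAC address into its vendor (OUI) and device-specific parts.
    mac = "00:1a:2b:3c:4d:5e"
    octets = mac.split(":")

    oui = ":".join(octets[:3])          # organizationally unique identifier (manufacturer)
    device_part = ":".join(octets[3:])  # assigned by the manufacturer to the individual card

    print("OUI:", oui)                  # 00:1a:2b
    print("Device part:", device_part)  # 3c:4d:5e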

Repeaters

A repeater is an electronic device that receives a signal, cleans it of unnecessary noise, regenerates it, and retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable runs longer than 100 meters.

Hubs

A network hub contains multiple ports. When a packet arrives at one port, it is copied unmodified to all ports of the hub for transmission. The destination address in the frame is not changed to a broadcast address.

Bridges

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast was received.

Bridges learn the association of ports and addresses by examining the source addresses of the frames they see on their various ports. Once a frame arrives through a port, its source address is stored and the bridge assumes that the MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge forwards the frame to all ports other than the one on which it arrived.
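
The learning behaviour just described can be sketched in a few lines of Python. This is a simplified model rather than a real bridge implementation: it keeps a table mapping MAC addresses to ports, learns from the source address of each frame, forwards frames for known destinations out of a single port, and floods frames for unknown or broadcast destinations to every other port.

    # Minimal sketch of a learning bridge's forwarding logic (not a real implementation).
    class LearningBridge:
        BROADCAST = "ff:ff:ff:ff:ff:ff"

        def __init__(self, ports):
            self.ports = ports   # e.g. [1, 2, 3]
            self.table = {}      # MAC address -> port it was last seen on

        def handle_frame(self, in_port, src_mac, dst_mac):
            # Learn: the source address is reachable via the port the frame arrived on.
            self.table[src_mac] = in_port

            if dst_mac != self.BROADCAST and dst_mac in self.table:
                out_ports = [self.table[dst_mac]]                     # known destination
            else:
                out_ports = [p for p in self.ports if p != in_port]   # flood

            return out_ports

    bridge = LearningBridge(ports=[1, 2, 3])
    print(bridge.handle_frame(1, "aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02"))  # unknown -> flood [2, 3]
    print(bridge.handle_frame(2, "bb:bb:bb:bb:bb:02", "aa:aa:aa:aa:aa:01"))  # learned -> [1]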

 
Bridges come in three basic types:

Local bridges: Directly connect local area networks (LANs)

Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.

Wireless bridges: Can be used to join LANs or connect remote stations to LANs

Switches

A network switch is a device that forwards and filters OSI layer 2 frames (chunks of data) between ports (connected cables) based on the MAC addresses in the frames. It is distinct from a hub in that it forwards frames only to the ports involved in the communication rather than to all connected ports. A switch breaks up the collision domain, but itself represents a single broadcast domain. Switches make forwarding decisions on the basis of MAC addresses. A switch normally has numerous ports, facilitating a star topology for devices and the cascading of additional switches. Some switches are capable of routing based on layer 3 addressing or additional logical levels; these are called multi-layer switches. The term switch is used loosely in marketing to encompass devices including routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier).

Routers

A router is a networking device that forwards packets between networks using information in protocol headers and forwarding tables to determine the best next router for each packet.
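
As a rough sketch of how such a forwarding-table lookup works, IP routers normally apply longest-prefix matching: the most specific prefix that covers the destination address wins. The prefixes and next-hop names below are hypothetical, and a real router would use far more efficient data structures (such as tries) than the linear scan shown here.

    import ipaddress

    # Hypothetical forwarding table: prefix -> next hop.
    forwarding_table = {
        ipaddress.ip_network("0.0.0.0/0"):    "default gateway",
        ipaddress.ip_network("10.0.0.0/8"):   "router A",
        ipaddress.ip_network("10.20.0.0/16"): "router B",
    }

    def next_hop(destination):
        """Return the next hop for the longest (most specific) matching prefix."""
        dest = ipaddress.ip_address(destination)
        matches = [net for net in forwarding_table if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return forwarding_table[best]

    print(next_hop("10.20.5.9"))   # router B (matches /16, /8 and /0; /16 wins)
    print(next_hop("10.9.9.9"))    # router A
    print(next_hop("192.0.2.1"))   # default gateway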

Dec 17, 2009

Case Study of Holographic Storage

Holographic Storage is a potential replacement technology in the area of high-capacity data storage currently dominated by magnetic and conventional optical data storage. Magnetic and optical data storage devices rely on individual bits being stored as distinct magnetic or optical changes on the surface of the recording medium. Holographic data storage overcomes this limitation by recording information throughout the volume of the medium and is capable of recording multiple images in the same area utilizing light at different angles.

Additionally, whereas magnetic and optical data storage records information a bit at a time in a linear fashion, holographic storage is capable of recording and reading millions of bits in parallel, enabling data transfer rates greater than those attained by optical storage.

Recording data


Holographic data storage captures information using an optical interference pattern within a thick, photosensitive optical material. Light from a single laser beam is divided into two separate beams, a reference beam and an object or signal beam; a spatial light modulator is used to encode the object beam with the data for storage. An optical interference pattern results from the crossing of the beams' paths, creating a chemical and/or physical change in the photosensitive medium; the resulting data is represented in an optical pattern of dark and light pixels. By adjusting the reference beam angle, wavelength, or media position, a multitude of holograms (theoretically, several thousand) can be stored in a single volume. The theoretical limit for the storage density of this technique is approximately tens of terabits (1 terabit = 1024 gigabits, 8 gigabits = 1 gigabyte) per cubic centimeter. In 2006, InPhase Technologies published a white paper reporting an achievement of 500 GB/in2.

Reading data

The stored data is read through the reproduction of the same reference beam used to create the hologram. The reference beam's light is focused on the photosensitive material, illuminating the appropriate interference pattern; the light diffracts off the interference pattern and projects it onto a detector. The detector is capable of reading the data in parallel, over one million bits at once, resulting in a fast data transfer rate. Files on the holographic drive can be accessed in less than 200 milliseconds.

Longevity

Holographic data storage can provide companies with a method to preserve and archive information. The write-once, read-many (WORM) approach to data storage would ensure content security, preventing the information from being overwritten or modified. Manufacturers believe this technology can provide safe storage for content without degradation for more than 50 years, far exceeding current data storage options. Counterpoints to this claim point out that data-reader technology evolves roughly every ten years; therefore, being able to store data for 50-100 years would not matter if you could not read or access it. However, this is thought to be a weak argument, because a storage method that works very well will not be outdated easily; plus there is the possibility that the new technology will be backwards compatible with the technology it replaces, similar to how DVD technology is backwards compatible with CD technology.

Terms used

Sensitivity refers to the extent of refractive index modulation produced per unit of exposure. Diffraction efficiency is proportional to the square of the index modulation times the effective thickness (see the relation below).
The dynamic range determines how many holograms may be multiplexed in a single data volume.
Spatial light modulators (SLM) are pixelated input devices (liquid crystal panels), used to imprint the data to be stored on the object beam.
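
For reference, the proportionality stated above for diffraction efficiency is usually traced to Kogelnik's coupled-wave analysis of thick transmission holograms; the small-modulation approximation below is the standard textbook form (with index modulation Δn, effective thickness d, wavelength λ and Bragg angle θ), not a result taken from this article:

    \eta = \sin^{2}\!\left(\frac{\pi\,\Delta n\,d}{\lambda \cos\theta}\right)
         \approx \left(\frac{\pi\,\Delta n\,d}{\lambda \cos\theta}\right)^{2}
         \quad \text{for small } \Delta n\,d

so for weak gratings the diffraction efficiency indeed grows with the square of the index modulation times the effective thickness, as stated above.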

Technical aspects

Like other media, holographic media is divided into write once (where the storage medium undergoes some irreversible change), and rewritable media (where the change is reversible). Rewritable holographic storage can be achieved via the photorefractive effect in crystals:
•  Mutually coherent light from two sources creates an interference pattern in the media. These two sources are called the reference beam and the signal beam.
•  Where there is constructive interference the light is bright and electrons can be promoted from the valence band to the conduction band of the material (since the light has given the electrons the energy to jump the energy gap). The positively charged vacancies they leave are called holes, and they must be immobile in rewritable holographic materials. Where there is destructive interference, there is less light and fewer electrons are promoted.
•  Electrons in the conduction band are free to move in the material. They will experience two opposing forces that determine how they move. The first force is the Coulomb force between the electrons and the positive holes that they have been promoted from. This force encourages the electrons to stay put or move back to where they came from. The second is the pseudo-force of diffusion that encourages them to move to areas where electrons are less dense. If the Coulomb forces are not too strong, the electrons will move into the dark areas.
•  Beginning immediately after being promoted, there is a chance that a given electron will recombine with a hole and move back into the valence band. The faster the rate of recombination, the fewer electrons have the chance to move into the dark areas. This rate affects the strength of the hologram.
•  After some electrons have moved into the dark areas and recombined with holes there, there is a permanent space-charge field between the electrons that moved to the dark spots and the holes in the bright spots. This leads to a change in the index of refraction due to the electro-optic effect.


When the information is to be retrieved or read out from the hologram, only the reference beam is necessary. The beam is sent into the material in exactly the same way as when the hologram was written. As a result of the index changes in the material that were created during writing, the beam splits into two parts. One of these parts recreates the signal beam where the information is stored. Something like a CCD camera can be used to convert this information into a more usable form.
Holograms can theoretically store one bit in a cubic block with sides the length of the wavelength of the light used in writing. For example, light from a helium-neon laser is red, with a 632.8 nm wavelength. Using light of this wavelength, perfect holographic storage could store about 4 gigabits per cubic millimeter (a short check of this figure appears below). In practice, the data density would be much lower, for at least four reasons:
•  The need to add error correction
•  The need to accommodate imperfections or limitations in the optical system
•  Economic payoff (higher densities may cost disproportionately more to achieve)
•  Design technique limitations, a problem currently faced in magnetic hard drives, wherein magnetic domain configuration prevents manufacture of disks that fully utilize the theoretical limits of the technology.
Unlike current storage technologies that record and read one data bit at a time, holographic memory writes and reads data in parallel in a single flash of light.
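
As a quick back-of-the-envelope check of the roughly 4 gigabit-per-cubic-millimeter figure quoted above (ignoring all of the practical limitations just listed), a few lines of Python reproduce the estimate:

    # Back-of-the-envelope check of the theoretical density quoted above.
    wavelength_mm = 632.8e-6               # 632.8 nm expressed in millimeters
    voxel_volume_mm3 = wavelength_mm ** 3  # one bit per wavelength-sized cube
    bits_per_mm3 = 1 / voxel_volume_mm3

    print(f"{bits_per_mm3:.2e} bits per cubic millimeter")  # ~3.9e9, i.e. roughly 4 gigabits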


Two-color recording

For two-color holographic recording, the reference and signal beams are fixed to a particular wavelength (green, red or IR) and the sensitizing/gating beam is a separate, shorter wavelength (blue or UV). The sensitizing/gating beam is used to sensitize the material before and during the recording process, while the information is recorded in the crystal via the reference and signal beams. It is shone intermittently on the crystal during the recording process for measuring the diffracted beam intensity. Readout is achieved by illumination with the reference beam alone. Hence the readout beam with a longer wavelength would not be able to excite the recombined electrons from the deep trap centers during readout, as they need the sensitizing light with shorter wavelength to erase them.
Usually, for two-color holographic recording, two different dopants are required to promote trap centers, which belong to transition metal and rare earth elements and are sensitive to certain wavelengths. By using two dopants, more trap centers would be created in the Lithium niobate crystal. Namely a shallow and a deep trap would be created. The concept now is to use the sensitizing light to excite electrons from the deep trap farther from the valence band to the conduction band and then to recombine at the shallow traps nearer to the conduction band. The reference and signal beam would then be used to excite the electrons from the shallow traps back to the deep traps. The information would hence be stored in the deep traps. Reading would be done with the reference beam since the electrons can no longer be excited out of the deep traps by the long wavelength beam.


Effect of annealing

For a doubly doped LiNbO3 crystal there exists an optimum oxidation/reduction state for desired performance. This optimum depends on the doping levels of shallow and deep traps as well as the annealing conditions for the crystal samples. This optimum state generally occurs when 95 – 98% of the deep traps are filled. In a strongly oxidized sample holograms cannot be easily recorded and the diffraction efficiency is very low. This is because the shallow trap is completely empty and the deep trap is also almost devoid of electrons. In a highly reduced sample on the other hand, the deep traps are completely filled and the shallow traps are also partially filled. This results in very good sensitivity (fast recording) and high diffraction efficiency due to the availability of electrons in the shallow traps. However during readout, all the deep traps get filled quickly and the resulting holograms reside in the shallow traps where they are totally erased by further readout. Hence after extensive readout the diffraction efficiency drops to zero and the hologram stored cannot be fixed.

Development and marketing

At the National Association of Broadcasters 2005 (NAB) convention in Las Vegas, InPhase conducted the first public demonstration of a prototype commercial holographic storage device at the Maxell Corporation of America booth.
The three main companies involved in developing holographic memory, as of 2002, were InPhase, Polaroid spinoff Aprilis, and Optware of Japan. Although holographic memory has been discussed since the 1960s, and has been touted for near-term commercial application at least since 2001, it has yet to convince critics that it can find a viable market. As of 2002, planned holographic products did not aim to compete head to head with hard drives, but instead to find a market niche based on virtues such as speed of access.

In the video game market

Some have speculated that Nintendo will be the first video game console maker to implement holographic data storage, due to the recent uncovering of a Joint Research Agreement between InPhase and Nintendo.
Nintendo is also mentioned in the patent as a joint applicant: "... disclosure is herein made that the claimed invention was made pursuant to a Joint Research Agreement as defined in 35 U.S.C. 103 (c)(3), that was in effect on or before the date the claimed invention was made, and as a result of activities undertaken within the scope of the Joint Research Agreement, by or on the behalf of Nintendo Co., and InPhase Technologies, Inc."

Note


All content of this assignment is one hundred percent appropriate.

Dec 9, 2009

Intel's Core i3 530 processor

Remember Intel's budget-friendlier Core i3 line that we've been talking about since June? Despite recent leaks the company still hasn't made it officially official, but it's now unofficially official thanks to a pre-order at a Canadian retailer. If you're getting a little déjà vu right now, don't worry, it isn't a glitch in the matrix; this is exactly the same scenario that played out with the Core i5 back in August, about a month before that proc was finally given its coming-out party. In other words, expect the full press-release rigmarole sometime around the new year.

Dec 1, 2009

Liability of Defective Software

Liability for defective software

Liability when defective software causes injury
Increasingly software is used in situations where failure may result in death or injury.
In these situations the software is often described as safety critical software. Where such software is used and where an accident occurs it is proper that the law should intervene in an attempt to afford some form of redress to the injured party or the relatives of a deceased person. Safety critical software is used in specialized situations such as flight control in the aviation industry and by the medical profession in carrying out diagnostic tasks.
Nowadays software has an impact on the average citizen's life whether by choice or otherwise. However, for most individuals, as the plane leaves the airport typical concerns usually centre on the exchange rate and not the computer software controlling the flight. These concerns of course change when the plane falls from the sky without explanation. What can the individual do when faced with such occurrences? In such a dramatic scenario there is unlikely to be a contractual relationship between the individual affected by the defective software and the software developer. In this article I shall attempt to examine how liability may accordingly arise.
The legal concept of liability
The legal concept of liability has traditionally included as a base element the concept of culpa, or fault. Humans are marvelous at attributing blame in any given situation, the converse of this phenomenon being that they are equally good at passing the buck. When things go wrong and a computer is involved, more often than not the initial response is to blame the computer. Whilst solving the puzzle following a calamity is never straightforward, the first line of attack is often the technology used in the situation that has gone wrong.
An example of this pattern of behavior can be seen following the introduction of computerized stock indexed arbitraging in the New York financial markets back in 1987. On 23rd January 1987 the Dow Jones Industrial Average rose 64 points, only to fall 114 points in a period of 70 minutes, causing widespread panic. Black Monday as it became known was indeed a black day for many investors large and small alike who sustained heavy financial losses. The response of the authorities in the face of the crisis was to suspend the computerized trading immediately.
In considering this event Stevens argues that all computerized program trading did was increase market efficiency and perhaps more significantly get the market to where it was going faster without necessarily determining its direction. However, the decision to suspend computerized trading was taken without a full investigation of all the relevant facts.
“Every disaster needs a villain. In the securities markets of 1987, program trading played that role. Computerized stock-indexed arbitrage has been singled out as the source of a number of market ills” 1
Of course in the situation outlined above the losses incurred would be economic in nature which is not to say that such losses do not have real and human consequences for those who suffer them, but as now appears to be the case in both Britain and America there can be no recovery where the losses are purely economic unless there has been reliance in accordance with the Hedley Byrne principle2.
Turning from the purely financial implications of software failure other failures have rightly generated considerable public concern. In particular the report of the inquiry into the London Ambulance Service3 highlighted the human consequences when software failed to perform as it was expected to.
The situation becomes all the more problematic when it is remembered that nobody expects software to work first time. Software by its very nature is extremely complex, consisting as it does of line upon line of code. It might be thought that the simple solution to this problem would be to check all software thoroughly. That of course begs the question as to what actually constitutes a thorough check. Even where software developers check each line of code or test the validity of every statement in the code the reality is that such testing will not ensure that the code is error free. Kaner4 has identified at least 110 tests that could be carried out in respect of a piece of software none of which would necessarily guarantee that the software would be error free. Indeed the courts in England have explicitly accepted that there is no such thing as error free software.
Furthermore the hardware on which software runs can also be temperamental. It can be affected by temperature changes, power fluctuations or failure could occur simply due to wear and tear. There are any number of other factors which could affect software such as incompatibility with hardware and when all the factors are taken together one could easily justify the assertion that establishing the precise cause of software failure is no easy task. This is of significance given the necessity for the person claiming loss to prove the actual cause of that loss.
Principle, Pragmatism or Reliance
In situations where an individual is killed or injured as a consequence of the failure of a piece of software there is no reason why, in principle, recovery of damages should not be possible. It would however be foolhardy for the individual to assume that recovering damages is by any means straightforward.
In order for an individual to establish a case against a software developer at common law it would be necessary to show that the person making the claim was owed a duty of care by the software developer, that there was a breach of that duty, that the loss sustained was a direct result of the breach of duty and that the loss was of a kind for which recovery of damages would be allowed. In determining whether or not a duty of care exists between parties the starting point has traditionally been to consider the factual relationship between the parties and whether or not that relationship gives rise to a duty of care.
The neighbor principle as espoused by Lord Atkin in the seminal case of Donoghue v Stevenson6 requires the individual, in this case the software developer, to have in his contemplation those persons who may be affected by his acts or omissions. By way of an example, it is obvious to assume that the developer of a program used to operate a diagnostic tool or therapeutic device is aware that the ultimate consumer (for lack of a better word) will be a member of the public, although the identity of that person may be unknown to the software developer.
The case of the ill-fated Therac–25, a machine controlled by computer software used to provide radiation therapy for cancer patients, highlights the problem. Prior to the development of radiotherapy treatment, radical invasive surgery was the only means of treating various cancers. Not only was this extremely traumatic for patients but often it was unsuccessful.
With the development of radiotherapy treatment the requirement for surgery has been greatly reduced. However, between 1985 and 1987 six patients were seriously injured or killed as a result of receiving excessive radiation doses attributable to the Therac-25 and its defective software. Commenting on the tragedy, Liversedge stated that:
“Although technology has progressed to the point where many tasks may be handled by our silicon-based friends, too much faith in the infallibility of software will always result in disaster.”
In considering the question of to whom a duty of care is owed the law will have to develop with a degree of flexibility as new problems emerge. The question for the courts will be whether or not this is done on an incremental basis or by application of principles. If the former approach is adopted a further question for consideration is whether or not it is just and reasonable to impose a duty where none has existed before. However in my view the absence of any direct precedent should not prevent the recovery of damages where there has been negligence. I do however acknowledge that theoretical problems that can arise are far from straightforward.
The broad approach adumbrated in Donoghue is, according to Rowland8, appropriate in cases where there is a direct link between the damage and the negligently designed software, such as might cause intensive care equipment to fail. However she argues that in other cases the manner in which damage results cannot provide the test for establishing a duty of care. She cites as an example the situation where passengers board a train with varying degrees of knowledge as to whether the signaling system is Y2K compliant. Whilst she does not directly answer the questions she poses, the problems highlighted are interesting when one looks at the extremes. For instance, should it make any difference to the outcome of claims by passengers injured in a train accident that one passenger travelled only on the basis that the computer-controlled signaling system was certified as safe, while the other passenger did not apply his mind to the question of safety at all? It is tempting to assume that liability would arise in the former scenario on the basis of reliance, but that then begs the question of whether or not liability arises in the latter scenario at all. If, as she implies, reliance is the key to establishing liability then it would not, as there has been no reliance. That result in the foregoing scenario would be harsh indeed, avoiding as it does the issue of the failure of the developer to produce a signaling system that functioned properly.
More often than not an individual may have little appreciation that his environment is being controlled by computer software. It could be argued that because of the specialist knowledge on the part of the computer programmer it follows that he or she assumes responsibility for the individual ultimately affected by the software. The idea that reliance could give rise to a duty of care first came to prominence in the Hedley Byrne case. The basis of the concept is that a special relationship exists between someone providing expert information or an expert service thereby creating a duty of care. In the context of computer programming the concept while superficially attractive ignores the artificiality of such a proposition given that it is highly unlikely that the individual receiving for example radiotherapy treatment will have any idea of the role software will play in the treatment process. Furthermore in attempting to establish a duty of care based on reliance the House of Lords have been at pains to stress that the assumption of responsibility to undertake a specific task is not of itself evidence of the existence of a duty of care to a particular class of persons.9
Standard of Care
It might come as a shock to some and no great surprise to others that there is no accepted standard as to what constitutes good practice amongst software developers. That is not to say that there are not codes of practice and other guidelines but merely that no one code prevails over others. The legal consequences of this situation can be illustrated by the following example. Two software houses given the task of producing a program for the same application do so but the code produced by each house is different. One application fails while the other runs as expected. It is tempting to assume that the failed application was negligently designed simply because it did not work. However such an assumption is not merited without further inquiry. In order to establish that the program was produced negligently it would be necessary to demonstrate that no reasonable man would have produced such a program. In the absence of a universal standard, proving such a failing could be something of a tall order.
An increasing emphasis on standards is of considerable importance given that it is by this route that an assessment of whether or not a design is reasonable will become possible. Making such an assessment should not be an arbitrary judgment but one based on the objective application of principles to established facts. At present in the absence of a uniform approach one is faced with the specter of competing experts endeavoring to justify their preferred approaches. In dealing with what can be an inexact science the problem that could emerge is that it may prove difficult to distinguish between the experts who hold differing opinions and the courts in both England and Scotland have made it clear that it is wrong to side with one expert purely on the basis of preference alone.
In America standards have been introduced for accrediting educational programs in computer science technology. The Computer Science Accreditation Commission (CSAC) established by the Computer Science Accreditation Board (CSAB) oversees these standards. Although such a move towards standardization has positive benefits, not least that such standards should reflect best practice, it would be naïve to assume that this will make litigation any easier. Perhaps only in those cases where it was clear that no regard whatsoever had been paid to any of the existing standards would there be a chance of establishing liability.  
It should also be borne in mind that in the age of the Internet software will undoubtedly travel and may be produced subject to different standards in many jurisdictions. In determining what standards could be regarded as the best standards the courts could be faced with a multiplicity of choices. That of course is good news for the expert and as one case suggests some of them are more than happy to participate in costly bun fights.
Causation
Even where it is possible to establish a duty of care and a breach of that duty the individual may not be home and dry. It is necessary to show that the damage sustained was actually caused by the breach of duty. That is not as straightforward as it might sound when one pauses to consider the complexities of any computer system.
The topography of a computer is such as to make it necessary that an expert is instructed to confirm that the computer program complained of was the source of the defect, giving rise to the damage. A computer program may be incompatible with a particular operating system and therefore fail to work as expected. In these circumstances it would be difficult to establish liability on that basis alone unless the programmer had given an assurance that compatibility would not be an issue. If ISO 9127, one of the main standards, were to become the accepted standard then of course the computer programmer would be bound to provide information as to the appropriate uses of a particular program. In that event it may be easier to establish liability as a result of failure to give appropriate advice with the curious consequence that the question of whether or not the program was itself defective in design would be relegated to one of secondary importance.
A more difficult question arises in relation to the use of machines in areas such as medical electronics. Returning by way of example to the case of the ill-fated Therac-25, while it is clear that the machine caused harm in those cases where there were fatalities, it would be difficult to maintain that the machines caused death, as it is highly probable that the cancers, if left untreated, would have led to death in any event. Equally, where an ambulance was late and a young girl died from a severe asthma attack, it could not be said that the cause of death was the failure in the computer-controlled telephone system, even though if the system had worked the chances of survival would have been greatly increased.
Let the Developer Defend
As can be seen from the above, establishing liability at common law in the context of defectively designed software is no mean feat. With the passing of the Consumer Protection Act 1987 following the EC Directive (85/374/EEC) the concept of product liability has been part of UK law for over a decade. The effect of the Directive and the Act is to create liability without fault on the part of the producer of a defective product that causes death or personal injury or any loss or damage to property including land. Part of the rationale of the Directive is that as between the consumer and the producer it is the latter that is better able to bear the costs of accidents as opposed to individuals affected by the software. Both the Directive and the Act provide defenses to an allegation that a product is defective so that liability is not absolute. However, given that under the Directive and the Act an individual does not have to prove fault on the part of the producer the onus of proof shifts from the consumer to the producer, requiring the producer to make out one of the available defenses.
In relation to computer programs and the application of the Directive and the Act, the immediate and much debated question that arises is whether or not computer technology can be categorized as a product. Hardware will undoubtedly be covered by the Directive, providing a modicum of comfort to those working in close proximity to 'killer robots'.
The difficulty arises in relation to the question of software. The arguments against software being classified as a product are essentially threefold. Firstly, software is not moveable and therefore is not a product. Secondly, software is information as opposed to a product, although some obiter comments on the question of the status of software suggest that information forms an integral part of a product. Thirdly, software development is a service, and consequently the legislation does not apply.
Against that it can be argued that software should be treated like electricity, which itself is specifically covered by the Directive in Article 2 and the Act in Section 1(2), and that software is essentially compiled from energy that is material in the scientific sense. Ultimately it could be argued that placing an overly legalistic definition on the word "product" ignores the reality that we now live in an information society where, for social and economic purposes, information is treated as a product, and that the law should also recognize this. Furthermore, following the St Albans case it could be argued that the trend is now firmly towards categorizing software as a product, and indeed the European Commission has expressed the view that software should in fact be categorized as a product.
Conclusion
How the courts deal with some of the problems highlighted above remains to be seen, as at least within the UK there has been little litigation in this area. If, as Rowland suggests, pockets of liability emerge covering specific areas of human activity, such as the computer industry, it is likely that this will only happen over a long period of time. Equally, relying on general principles, which has to a certain extent become unfashionable, gives no greater guarantee that the law will become settled more quickly.
Parliament could intervene to further afford consumers greater rights and clarify for once and for all the status of software. However it should be borne in mind that any potential expansion of liability on the part of producers of software may have adverse consequences in respect of insurance coverage and make obtaining comprehensive liability coverage more difficult. For smaller companies obtaining such coverage may not be an option, forcing them out of the market. Whatever the future holds in this brave new world perhaps the only thing that can be said with any certainty is that it will undoubtedly be exciting.
 
