Dec 17, 2009

Case Study of Holographic Storage


Holographic storage is a potential replacement technology in the area of high-capacity data storage, which is currently dominated by magnetic and conventional optical data storage. Magnetic and optical data storage devices rely on individual bits being stored as distinct magnetic or optical changes on the surface of the recording medium. Holographic data storage overcomes this surface-only limitation by recording information throughout the volume of the medium, and it can record multiple images in the same area by using light at different angles.

Additionally, whereas magnetic and optical data storage records information a bit at a time in a linear fashion, holographic storage is capable of recording and reading millions of bits in parallel, enabling data transfer rates greater than those attained by optical storage.

Recording data


Holographic data storage captures information using an optical interference pattern within a thick, photosensitive optical material. Light from a single laser beam is divided into two separate beams, a reference beam and an object (or signal) beam; a spatial light modulator is used to encode the object beam with the data for storage. An optical interference pattern results from the crossing of the beams' paths, creating a chemical and/or physical change in the photosensitive medium; the resulting data is represented as an optical pattern of dark and light pixels. By adjusting the reference beam angle, wavelength, or media position, a multitude of holograms (theoretically, several thousand) can be stored in a single volume. The theoretical limits for the storage density of this technique are approximately tens of terabits (1 terabit = 1024 gigabits, 8 gigabits = 1 gigabyte) per cubic centimeter. In 2006, InPhase Technologies published a white paper reporting a demonstrated areal density exceeding 500 gigabits per square inch (Gb/in²).
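As a rough sanity check on the figures above, the short sketch below converts those densities into gigabytes using the conversions quoted in the paragraph. The specific values plugged in (10 Tbit/cm³ standing in for "tens of terabits per cubic centimeter", and 500 Gb/in² for the InPhase result) are illustrative assumptions, not published specifications.

```python
# Back-of-the-envelope unit conversions for the densities quoted above.
# The 10 Tbit/cm^3 and 500 Gb/in^2 figures are illustrative assumptions.

TBIT = 1024 ** 4        # bits in a terabit (binary prefixes, as used above)
GBIT = 1024 ** 3        # bits in a gigabit
GBYTE = 8 * GBIT        # 8 gigabits = 1 gigabyte, as stated above

volumetric_bits = 10 * TBIT            # assume ~10 Tbit per cubic centimetre
print("10 Tbit/cm^3 is about %.0f GB per cubic centimetre"
      % (volumetric_bits / GBYTE))

areal_bits = 500 * GBIT                # assume ~500 Gb per square inch
print("500 Gb/in^2 is about %.1f GB per square inch"
      % (areal_bits / GBYTE))
```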

Reading data

The stored data is read by reproducing the same reference beam used to create the hologram. The reference beam's light is focused on the photosensitive material, illuminating the appropriate interference pattern; the light diffracts off the interference pattern and projects the stored page onto a detector. The detector can read the data in parallel, over one million bits at once, which accounts for the fast data transfer rate. Files on the holographic drive can be accessed in less than 200 milliseconds.
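To see why page-parallel readout translates into high throughput, here is a minimal back-of-the-envelope sketch. The page size follows the "over one million bits at once" figure above; the page rate is a purely hypothetical number chosen for the arithmetic, not the specification of any drive.

```python
# Rough illustration of why page-parallel readout is fast.
# bits_per_page follows the figure in the text; pages_per_second is assumed.

bits_per_page = 1_000_000     # ~1 Mbit read in parallel per hologram page
pages_per_second = 500        # hypothetical detector/page rate

transfer_rate_bps = bits_per_page * pages_per_second
print("Sustained rate of roughly %.0f Mbit/s (%.1f MB/s)"
      % (transfer_rate_bps / 1e6, transfer_rate_bps / 8e6))
```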

Longevity

Holographic data storage can provide companies with a method to preserve and archive information. The write once, read many (WORM) approach to data storage would ensure content security, preventing the information from being overwritten or modified. Manufacturers believe this technology can provide safe storage for content without degradation for more than 50 years, far exceeding current data storage options. Counterpoints to this claim note that data-reader technology tends to change about every ten years, so being able to store data for 50-100 years would not matter if you could not read or access it. However, this is thought to be a weak argument: a storage method that works very well will not be outdated easily, and there is the possibility that the new technology will be backwards-compatible with the technology it replaces, much as DVD technology is backwards-compatible with CD technology.

Terms used

Sensitivity refers to the extent of refractive index modulation produced per unit of exposure. Diffraction efficiency is proportional to the square of the index modulation times the effective thickness.
The dynamic range determines how many holograms may be multiplexed in a single data volume.
Spatial light modulators (SLMs) are pixelated input devices (liquid crystal panels) used to imprint the data to be stored onto the object beam.
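The interplay between these terms can be illustrated with a short sketch that uses only the proportionality stated above (diffraction efficiency proportional to the square of index modulation times effective thickness) and the idea that the available index modulation, the dynamic range, is shared among multiplexed holograms. The function names, the proportionality constant and the sample numbers are assumptions for illustration only.

```python
# Minimal sketch: diffraction efficiency ~ (index modulation x thickness)^2,
# with the available index modulation split among multiplexed holograms.

def diffraction_efficiency(index_modulation, thickness_um, k=1.0):
    """Relative diffraction efficiency, proportional to (dn * d)^2."""
    return k * (index_modulation * thickness_um) ** 2

def efficiency_per_hologram(total_index_modulation, thickness_um, num_holograms):
    """If the dynamic range is split evenly across multiplexed holograms,
    each individual hologram is written with a weaker grating."""
    dn_each = total_index_modulation / num_holograms
    return diffraction_efficiency(dn_each, thickness_um)

# Example: the same medium, multiplexing 10 vs 100 holograms.
for m in (10, 100):
    print(m, "holograms -> relative efficiency",
          efficiency_per_hologram(1e-3, 500, m))
```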

Technical aspects

Like other media, holographic media is divided into write-once media (where the storage medium undergoes some irreversible change) and rewritable media (where the change is reversible). Rewritable holographic storage can be achieved via the photorefractive effect in crystals (a simplified numerical sketch follows the list below):
•  Mutually coherent light from two sources creates an interference pattern in the medium. These two sources are called the reference beam and the signal beam.
•  Where there is constructive interference the light is bright, and electrons can be promoted from the valence band to the conduction band of the material (since the light has given the electrons the energy to jump the energy gap). The positively charged vacancies they leave behind are called holes, and they must be immobile in rewritable holographic materials. Where there is destructive interference, there is less light and few electrons are promoted.
•  Electrons in the conduction band are free to move in the material. They experience two opposing forces that determine how they move. The first is the Coulomb force between the electrons and the positive holes from which they were promoted; this force encourages the electrons to stay put or move back to where they came from. The second is the pseudo-force of diffusion, which encourages them to move to areas where electrons are less dense. If the Coulomb forces are not too strong, the electrons will move into the dark areas.
•  Beginning immediately after being promoted, there is a chance that a given electron will recombine with a hole and move back into the valence band. The faster the rate of recombination, the fewer electrons have the chance to move into the dark areas. This rate affects the strength of the hologram.
•  After some electrons have moved into the dark areas and recombined with holes there, a permanent space-charge field remains between the electrons that moved to the dark spots and the holes in the bright spots. This leads to a change in the index of refraction due to the electro-optic effect.
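Here is a deliberately simplified one-dimensional toy model of that sequence: bright fringes free electrons, the electrons end up in the dark fringes, and the resulting space-charge field modulates the refractive index. All constants are arbitrary, and the model ignores recombination and drift details; it is only meant to show the chain from interference pattern to charge redistribution to index grating.

```python
import numpy as np

# Toy 1-D picture of photorefractive recording (arbitrary units throughout).

x = np.linspace(0.0, 2 * np.pi, 200)        # one fringe period
intensity = 1.0 + np.cos(x)                  # interference pattern of the two beams

# Net charge after redistribution: electrons pile up where the light is dark,
# leaving immobile holes where it is bright (idealized steady state).
charge = -(intensity - intensity.mean())

# Space-charge field from a 1-D Gauss's law (cumulative integral of the charge);
# the electro-optic index change is taken as proportional to that field.
field = np.cumsum(charge) * (x[1] - x[0])
index_change = 0.01 * field                  # arbitrary electro-optic coefficient

# The index grating ends up roughly a quarter of a fringe period away from the
# intensity peaks, a known signature of diffusion-driven recording.
print("intensity peak at x =", x[np.argmax(intensity)])
print("index-change peak at x =", x[np.argmax(index_change)])
```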


When the information is to be retrieved or read out from the hologram, only the reference beam is necessary. The beam is sent into the material in exactly the same way as when the hologram was written. As a result of the index changes in the material that were created during writing, the beam splits into two parts. One of these parts recreates the signal beam where the information is stored. Something like a CCD camera can be used to convert this information into a more usable form.
Holograms can theoretically store one bit per cube with sides equal to the wavelength of the light used for writing (a short arithmetic check follows the list below). For example, light from a helium-neon laser is red, with a 632.8 nm wavelength. Using light of this wavelength, perfect holographic storage could store about 4 gigabits per cubic millimeter. In practice, the data density would be much lower, for at least four reasons:
•  The need to add error correction
•  The need to accommodate imperfections or limitations in the optical system
•  Economic payoff (higher densities may cost disproportionately more to achieve)
•  Design technique limitations, a problem currently faced in magnetic hard drives, where magnetic domain configuration prevents the manufacture of disks that fully exploit the theoretical limits of the technology.
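As a quick check of the wavelength-cube estimate mentioned before the list (using decimal gigabits here, so the result is approximate):

```python
# One bit per cube whose side is the 632.8 nm helium-neon wavelength.

wavelength_m = 632.8e-9            # helium-neon laser, red light
cube_volume_m3 = wavelength_m ** 3
mm3 = 1e-9                         # one cubic millimetre in cubic metres

bits_per_mm3 = mm3 / cube_volume_m3
print("about %.2f gigabits per cubic millimetre" % (bits_per_mm3 / 1e9))
# roughly 3.95 Gbit/mm^3, i.e. the 4 gigabits quoted above.
```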
Unlike current storage technologies that record and read one data bit at a time, holographic memory writes and reads data in parallel in a single flash of light.


Two-color recording

For two-color holographic recording, the reference and signal beams are fixed at a particular wavelength (green, red or IR) and the sensitizing/gating beam is at a separate, shorter wavelength (blue or UV). The sensitizing/gating beam is used to sensitize the material before and during the recording process, while the information is recorded in the crystal via the reference and signal beams; it is shone intermittently on the crystal during recording while the diffracted beam intensity is measured. Readout is achieved by illumination with the reference beam alone. Because the readout beam has a longer wavelength, it cannot excite the recombined electrons out of the deep trap centers during readout; the shorter-wavelength sensitizing light is needed to erase them.
Usually, for two-color holographic recording, two different dopants are required to create trap centers; these belong to the transition metals and rare earth elements and are sensitive to particular wavelengths. By using two dopants, more trap centers are created in the lithium niobate crystal, namely a shallow trap and a deep trap. The idea is to use the sensitizing light to excite electrons from the deep traps into the conduction band, from where they recombine into the shallow traps nearer the conduction band. The reference and signal beams are then used to excite electrons from the shallow traps back into the deep traps, so the information ends up stored in the deep traps. Reading is done with the reference beam alone, since the electrons can no longer be excited out of the deep traps by the long-wavelength beam.
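The shuffling of electrons between the two traps can be followed with a toy bookkeeping model. The populations, fractions and the two-fringe simplification below are arbitrary illustrative choices, not measured values; the point is only that after sensitizing and recording, the data ends up as a population difference in the deep traps.

```python
# Toy bookkeeping for the two-colour scheme: the gating beam moves electrons
# from deep traps to shallow traps, and the red recording beams move them back
# into deep traps mostly where the fringes are bright. All numbers are arbitrary.

deep = [100.0, 100.0]     # deep-trap electrons in a dark fringe and a bright fringe
shallow = [0.0, 0.0]

def sensitize(fraction=0.5):
    """Gating beam: promote a fraction of deep-trap electrons to shallow traps."""
    for i in range(2):
        moved = deep[i] * fraction
        deep[i] -= moved
        shallow[i] += moved

def record(bright_fraction=0.8, dark_fraction=0.1):
    """Red beams: return shallow electrons to deep traps, mostly where bright."""
    for i, frac in enumerate((dark_fraction, bright_fraction)):
        moved = shallow[i] * frac
        shallow[i] -= moved
        deep[i] += moved

sensitize()
record()
print("deep traps (dark, bright):", deep)   # the contrast here is the stored data
```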


Effect of annealing

For a doubly doped LiNbO3 crystal there exists an optimum oxidation/reduction state for the desired performance. This optimum depends on the doping levels of the shallow and deep traps as well as on the annealing conditions for the crystal samples, and it generally occurs when 95-98% of the deep traps are filled. In a strongly oxidized sample, holograms cannot be recorded easily and the diffraction efficiency is very low, because the shallow trap is completely empty and the deep trap is also almost devoid of electrons. In a highly reduced sample, on the other hand, the deep traps are completely filled and the shallow traps are also partially filled. This results in very good sensitivity (fast recording) and high diffraction efficiency, due to the availability of electrons in the shallow traps. However, during readout all the deep traps fill quickly and the resulting holograms reside in the shallow traps, where they are totally erased by further readout. Hence, after extensive readout the diffraction efficiency drops to zero and the stored hologram cannot be fixed.

Development and marketing

At the National Association of Broadcasters 2005 (NAB) convention in Las Vegas, InPhase conducted the first public demonstrations of the world's first prototype of a commercial holographic storage device at the Maxell Corporation of America booth.
The three main companies involved in developing holographic memory, as of 2002, were InPhase, Polaroid spinoff Aprilis, and Optware of Japan. Although holographic memory has been discussed since the 1960s, and has been touted for near-term commercial application at least since 2001, it has yet to convince critics that it can find a viable market. As of 2002, planned holographic products did not aim to compete head to head with hard drives, but instead to find a market niche based on virtues such as speed of access.

In the video game market

Some have speculated that Nintendo will be the first video game console maker to implement holographic data storage, due to the recent uncovering of a Joint Research Agreement between InPhase and Nintendo.
Nintendo is also mentioned in the patent as a joint applicant: "... disclosure is herein made that the claimed invention was made pursuant to a Joint Research Agreement as defined in 35 U.S.C. 103 (c)(3), that was in effect on or before the date the claimed invention was made, and as a result of activities undertaken within the scope of the Joint Research Agreement, by or on the behalf of Nintendo Co., and InPhase Technologies, Inc."

Note


All content of this assignment is entirely appropriate.

Dec 9, 2009

Intel's Core i3 530 processor

Remember Intel's budget-friendlier Core i3 line that we've been talking about since June? Despite recent leaks the company still hasn't made it officially official, but it's now unofficially official thanks to a pre-order at a Canadian retailer. If you're getting a little déjà vu right now don't worry, it isn't a glitch in the matrix; this is exactly the same scenario that played out with the Core i5 back in August, about a month before that proc was finally given its coming-out party. In other words, expect the full press release rigmarole sometime around the new year.

Dec 1, 2009

Liability of Defective Software

Liability for defective software

Liability when defective software causes injury
Increasingly software is used in situations where failure may result in death or injury.
In these situations the software is often described as safety-critical software. Where such software is used and an accident occurs, it is proper that the law should intervene in an attempt to afford some form of redress to the injured party or to the relatives of a deceased person. Safety-critical software is used in specialized situations such as flight control in the aviation industry and diagnostic tasks in the medical profession.
Nowadays software has an impact on the average citizen's life, whether by choice or otherwise. However, for most individuals, as the plane leaves the airport the typical concerns centre on the exchange rate rather than on the computer software controlling the flight. These concerns of course change when the plane falls from the sky without explanation. What can the individual do when faced with such occurrences? In such a dramatic scenario there is unlikely to be a contractual relationship between the individual affected by the defective software and the software developer. In this article I shall attempt to examine how liability may accordingly arise.
The legal concept of liability
The legal concept of liability has traditionally included as a base element the concept of culpa, or fault. Humans are marvelous at attributing blame in any given situation; the converse of this phenomenon is that they are equally good at passing the buck. When things go wrong and a computer is involved, more often than not the initial response is to blame the computer. Whilst solving the puzzle following a calamity is never straightforward, the first line of attack is often the technology used in the situation that has gone wrong.
An example of this pattern of behavior can be seen following the introduction of computerized stock-index arbitrage in the New York financial markets back in 1987. On 23rd January 1987 the Dow Jones Industrial Average rose 64 points, only to fall 114 points in a period of 70 minutes, causing widespread panic. Later that year, on the day that became known as Black Monday, many investors large and small alike sustained heavy financial losses. The response of the authorities in the face of the crisis was to suspend computerized trading immediately.
In considering this event Stevens argues that all computerized program trading did was increase market efficiency and perhaps more significantly get the market to where it was going faster without necessarily determining its direction. However, the decision to suspend computerized trading was taken without a full investigation of all the relevant facts.
“Every disaster needs a villain. In the securities markets of 1987, program trading played that role. Computerized stock-indexed arbitrage has been singled out as the source of a number of market ills” 1
Of course, in the situation outlined above the losses incurred would be economic in nature. That is not to say that such losses do not have real and human consequences for those who suffer them, but, as now appears to be the case in both Britain and America, there can be no recovery where the losses are purely economic unless there has been reliance in accordance with the Hedley Byrne principle2.
Turning from the purely financial implications of software failure, other failures have rightly generated considerable public concern. In particular, the report of the inquiry into the London Ambulance Service3 highlighted the human consequences when software failed to perform as it was expected to.
The situation becomes all the more problematic when it is remembered that nobody expects software to work first time. Software is by its very nature extremely complex, consisting as it does of line upon line of code. It might be thought that the simple solution to this problem would be to check all software thoroughly. That of course begs the question of what actually constitutes a thorough check. Even where software developers check each line of code or test the validity of every statement in the code, the reality is that such testing will not ensure that the code is error-free. Kaner4 has identified at least 110 tests that could be carried out in respect of a piece of software, none of which would necessarily guarantee that the software would be error-free. Indeed, the courts in England have explicitly accepted that there is no such thing as error-free software.
Furthermore, the hardware on which software runs can also be temperamental. It can be affected by temperature changes or power fluctuations, or failure could occur simply due to wear and tear. There are any number of other factors that could affect software, such as incompatibility with hardware, and when all these factors are taken together one could easily justify the assertion that establishing the precise cause of software failure is no easy task. This is significant given the necessity for the person claiming loss to prove the actual cause of that loss.
Principle, Pragmatism or Reliance
In situations where an individual is killed or injured as a consequence of the failure of a piece of software there is no reason why, in principle, recovery of damages should not be possible. It would however be foolhardy for the individual to assume that recovering damages is by any means straightforward.
In order for an individual to establish a case against a software developer at common law it would be necessary to show that the person making the claim was owed a duty of care by the software developer, that there was a breach of that duty, that the loss sustained was a direct result of the breach of duty and that the loss was of a kind for which recovery of damages would be allowed. In determining whether or not a duty of care exists between parties the starting point has traditionally been to consider the factual relationship between the parties and whether or not that relationship gives rise to a duty of care.
The neighbor principle, as espoused by Lord Atkin in the seminal case of Donoghue v Stevenson6, requires the individual, in this case the software developer, to have in his contemplation those persons who may be affected by his acts or omissions. By way of example, it is reasonable to assume that the developer of a program used to operate a diagnostic tool or therapeutic device is aware that the ultimate consumer (for lack of a better word) will be a member of the public, although the identity of that person may be unknown to the software developer.
The case of the ill-fated Therac–25, a machine controlled by computer software used to provide radiation therapy for cancer patients, highlights the problem. Prior to the development of radiotherapy treatment, radical invasive surgery was the only means of treating various cancers. Not only was this extremely traumatic for patients but often it was unsuccessful.
With the development of radiotherapy treatment the requirement for surgery has been greatly reduced. However, between 1985 and 1987, six patients were seriously injured or killed as a result of receiving excessive radiation doses attributable to the Therac-25 and its defective software. Commenting on the tragedy, Liversedge stated that:
“Although technology has progressed to the point where many tasks may be handled by our silicon-based friends, too much faith in the infallibility of software will always result in disaster.”
In considering the question of to whom a duty of care is owed the law will have to develop with a degree of flexibility as new problems emerge. The question for the courts will be whether or not this is done on an incremental basis or by application of principles. If the former approach is adopted a further question for consideration is whether or not it is just and reasonable to impose a duty where none has existed before. However in my view the absence of any direct precedent should not prevent the recovery of damages where there has been negligence. I do however acknowledge that theoretical problems that can arise are far from straightforward.
The broad approach adumbrated in Donoghue is, according to Rowland8, appropriate in cases where there is a direct link between the damage and the negligently designed software, such as software that causes intensive care equipment to fail. However, she argues that in other cases the manner in which damage results cannot provide the test for establishing a duty of care. She cites as an example the situation where passengers board a train with varying degrees of knowledge as to whether the signaling system is Y2K compliant. Whilst she does not directly answer the questions she poses, the problems highlighted are interesting when one looks at the extremes. For instance, should it make any difference to the outcome of claims by passengers injured in a train accident that one passenger travelled only on the basis that the computer-controlled signaling system was certified as safe, while the other passenger did not apply his mind to the question of safety at all? It is tempting to assume that liability would arise in the former scenario on the basis of reliance, but that then begs the question of whether or not liability arises in the latter scenario at all. If, as she implies, reliance is the key to establishing liability, then it would not, as there has been no reliance. That result in the foregoing scenario would be harsh indeed, avoiding as it does the issue of the developer's failure to produce a signaling system that functioned properly.
More often than not an individual may have little appreciation that his environment is being controlled by computer software. It could be argued that, because of the specialist knowledge on the part of the computer programmer, it follows that he or she assumes responsibility for the individual ultimately affected by the software. The idea that reliance could give rise to a duty of care first came to prominence in the Hedley Byrne case. The basis of the concept is that a special relationship exists between someone providing expert information or an expert service and the person relying on it, thereby creating a duty of care. In the context of computer programming, the concept, while superficially attractive, ignores the artificiality of such a proposition, given that it is highly unlikely that the individual receiving, for example, radiotherapy treatment will have any idea of the role software plays in the treatment process. Furthermore, in attempting to establish a duty of care based on reliance, the House of Lords has been at pains to stress that the assumption of responsibility to undertake a specific task is not of itself evidence of the existence of a duty of care to a particular class of persons.9
Standard of Care
It might come as a shock to some and no great surprise to others that there is no accepted standard as to what constitutes good practice amongst software developers. That is not to say that there are not codes of practice and other guidelines but merely that no one code prevails over others. The legal consequences of this situation can be illustrated by the following example. Two software houses given the task of producing a program for the same application do so but the code produced by each house is different. One application fails while the other runs as expected. It is tempting to assume that the failed application was negligently designed simply because it did not work. However such an assumption is not merited without further inquiry. In order to establish that the program was produced negligently it would be necessary to demonstrate that no reasonable man would have produced such a program. In the absence of a universal standard, proving such a failing could be something of a tall order.
An increasing emphasis on standards is of considerable importance given that it is by this route that an assessment of whether or not a design is reasonable will become possible. Making such an assessment should not be an arbitrary judgment but one based on the objective application of principles to established facts. At present in the absence of a uniform approach one is faced with the specter of competing experts endeavoring to justify their preferred approaches. In dealing with what can be an inexact science the problem that could emerge is that it may prove difficult to distinguish between the experts who hold differing opinions and the courts in both England and Scotland have made it clear that it is wrong to side with one expert purely on the basis of preference alone.
In America standards have been introduced for accrediting educational programs in computer science technology. The Computer Science Accreditation Commission (CSAC) established by the Computer Science Accreditation Board (CSAB) oversees these standards. Although such a move towards standardization has positive benefits, not least that such standards should reflect best practice, it would be naïve to assume that this will make litigation any easier. Perhaps only in those cases where it was clear that no regard whatsoever had been paid to any of the existing standards would there be a chance of establishing liability.  
It should also be borne in mind that in the age of the Internet software will undoubtedly travel and may be produced subject to different standards in many jurisdictions. In determining what standards could be regarded as the best standards the courts could be faced with a multiplicity of choices. That of course is good news for the expert and as one case suggests some of them are more than happy to participate in costly bun fights.
Causation
Even where it is possible to establish a duty of care and a breach of that duty the individual may not be home and dry. It is necessary to show that the damage sustained was actually caused by the breach of duty. That is not as straightforward as it might sound when one pauses to consider the complexities of any computer system.
The topography of a computer is such as to make it necessary that an expert is instructed to confirm that the computer program complained of was the source of the defect, giving rise to the damage. A computer program may be incompatible with a particular operating system and therefore fail to work as expected. In these circumstances it would be difficult to establish liability on that basis alone unless the programmer had given an assurance that compatibility would not be an issue. If ISO 9127, one of the main standards, were to become the accepted standard then of course the computer programmer would be bound to provide information as to the appropriate uses of a particular program. In that event it may be easier to establish liability as a result of failure to give appropriate advice with the curious consequence that the question of whether or not the program was itself defective in design would be relegated to one of secondary importance.
A more difficult question arises in relation to the use of machines in areas such as medical electronics. Returning by way of example to the case of the ill-fated Therac-25, while it is clear that the machine caused harm, in those cases where there were fatalities it would be difficult to maintain that the machine caused death, as it is highly probable that the cancers, if left untreated, would have led to death in any event. Equally, where an ambulance was late and a young girl died from a severe asthma attack, it could not be said that the cause of death was the failure of the computer-controlled telephone system, even though, had the system worked, the chances of survival would have been greatly increased.
Let the Developer Defend
As can be seen from the above, establishing liability at common law in the context of defectively designed software is no mean feat. With the passing of the Consumer Protection Act 1987 following the EC Directive (85/374/EEC) the concept of product liability has been part of UK law for over a decade. The effect of the Directive and the Act is to create liability without fault on the part of the producer of a defective product that causes death or personal injury or any loss or damage to property including land. Part of the rationale of the Directive is that as between the consumer and the producer it is the latter that is better able to bear the costs of accidents as opposed to individuals affected by the software. Both the Directive and the Act provide defenses to an allegation that a product is defective so that liability is not absolute. However, given that under the Directive and the Act an individual does not have to prove fault on the part of the producer the onus of proof shifts from the consumer to the producer, requiring the producer to make out one of the available defenses.
In relation to computer programs and the application of the Directive and the Act, the immediate and much-debated question that arises is whether or not computer technology can be categorized as a product. Hardware will undoubtedly be covered by the Directive, providing a modicum of comfort to those working in close proximity to ‘killer robots’.
The difficulty arises in relation to the question of software. The arguments against software being classified as a product are essentially threefold. Firstly, software is not moveable and therefore is not a product. Secondly, software is information as opposed to a product, although some obiter comments on the question of the status of software suggest that information forms an integral part of a product. Thirdly, software development is a service, and consequently the legislation does not apply.
Against that it can be argued that software should be treated like electricity, which itself is specifically covered by the Directive in Article 2 and the Act in Section 1(2), and that software is essentially compiled from energy that is material in the scientific sense. Ultimately it could be argued that placing an over-legalistic definition on the word “product” ignores the reality that we now live in an information society where, for social and economic purposes, information is treated as a product, and that the law should also recognize this. Furthermore, following the St Albans case it could be argued that the trend is now firmly towards categorizing software as a product, and indeed the European Commission has expressed the view that software should in fact be categorized as a product.
Conclusion
How the courts deal with some of the problems highlighted above remains to be seen, as at least within the UK there has been little litigation in this area. If, as Rowland suggests, pockets of liability emerge covering specific areas of human activity, such as the computer industry, it is likely that this will only happen over a long period of time. Equally, relying on general principles, which has to a certain extent become unfashionable, gives no greater guarantee that the law will become settled more quickly.
Parliament could intervene to afford consumers greater rights and to clarify once and for all the status of software. However, it should be borne in mind that any potential expansion of liability on the part of producers of software may have adverse consequences in respect of insurance coverage and make obtaining comprehensive liability coverage more difficult. For smaller companies obtaining such coverage may not be an option, forcing them out of the market. Whatever the future holds in this brave new world, perhaps the only thing that can be said with any certainty is that it will undoubtedly be exciting.

Nov 19, 2009

Case Study on Wireless Network

A wireless network is one in which computers connect to each other without wires, using radio waves instead of cables. As long as you are within range of a wireless access point, you can move your computer from place to place while maintaining access to networked resources, which makes networking extremely portable. Unlike its wired predecessor Ethernet, wireless networking uses the air as the medium to transport data. As long as you have a wireless network card for your laptop and configure it correctly, you are free to roam about the network with much the same functionality as conventional Ethernet, though usually at lower speeds. There are different types of wireless network, such as WLAN, WAN and PAN.


Different Types of Wireless Network
Although we use the term wireless network loosely, there are in fact three different types of network:
  • Wide area networks (WAN), which the cellular carriers create,
  • Wireless local area networks (WLAN), which you create, and
  • Personal area networks (PAN), which create themselves.
Wireless Wide Area Networks 
Wireless wide area networks include the networks provided by cell phone carriers such as Grameen phone and City Cell, and those built by organizations such as HSBC. Unlike a WLAN, a WAN has no real limitation when it comes to distance or coverage. WANs are built by cell phone carriers, or by combining several WLANs together, as banks do.

WANs are present everywhere that cellular network coverage is available.


Wireless Local Area Networks 
Wireless LANs are networks set up to provide wireless connectivity within a finite coverage area. Typical coverage areas might be a hospital (for patient-care systems), a university, an airport, or a gas plant. They usually have a well-known audience in mind, for example health care providers, students, or field maintenance staff. You would use WLANs when a high data-transfer rate is the most important aspect of your solution and reach is restricted. For example, in a hospital setting you would require a high data rate to send patient X-rays wirelessly to a doctor, provided he is on the hospital premises.


Wireless LANs work in an unlicensed part of the spectrum, so anyone can create their own wireless LAN, say in their home.
                                  
You have complete control over where coverage is provided. You can even share your printer, scanner or external hard disk with your cell phone or PDA, even if all the PCs are switched off.
In addition to creating your own private WLAN, some organizations such as Starbucks and A&W are providing high-speed WLAN internet access to the public at certain locations. These locations are called hotspots, and for a price you can browse the internet at speeds about 20 times greater than you could get over your cell phone's data connection.
Wireless LANs have their own share of terminology, including:
  • 802.11 - the network technology used in wireless LANs. In fact, it is a family of technologies such as 802.11a, 802.11b, etc., differing in speed and other attributes.
  • Wi-Fi - a common name for the early 802.11b standard.






Different types of Wireless LAN technologies

Currently there are three options: 802.11b, 802.11a, and 802.11g. The comparison below summarizes each option's speed, pros, and cons.

802.11b (up to 11 megabits per second (Mbps))
  • Pros: costs the least; has the best signal range.
  • Cons: has the slowest transmission speed; allows for fewer simultaneous users; uses the 2.4 gigahertz (GHz) frequency (the same as many microwave ovens, cordless phones, and other appliances), which can cause interference.

802.11a (up to 54 Mbps)
  • Pros: has the fastest transmission speed; allows for more simultaneous users; uses the 5 GHz frequency, which limits interference from other devices.
  • Cons: costs the most; has a shorter signal range, which is more easily obstructed by walls and other obstacles; is not compatible with 802.11b network adapters, routers, and access points.

802.11g (up to 54 Mbps)
  • Pros: has a transmission speed comparable to 802.11a under optimal conditions; allows for more simultaneous users; has the best signal range and is not easily obstructed; is compatible with 802.11b network adapters, routers, and access points.
  • Cons: uses the 2.4 GHz frequency, so it has the same interference problems as 802.11b; costs more than 802.11b.


If you have more than one wireless network adapter in your computer, or if your adapter uses more than one standard, you can specify which adapter or standard to use for each network connection. For example, if you have a computer that you use for streaming media, such as videos or music, to other computers on your network, you should set it up to use the 802.11a connection, if available, because you will get a faster data transfer rate when you watch videos or listen to music.
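The paragraph above amounts to a small decision rule, which can be written out as data plus a selection function. This is only a sketch of the reasoning: the dictionary encodes the comparison given earlier, while the priority labels and the scoring are my own illustrative choices, not part of any Windows networking API.

```python
# Encode the 802.11b/a/g comparison above as data and pick a standard per
# connection. Priority labels and scoring are illustrative assumptions.

STANDARDS = {
    "802.11b": {"max_mbps": 11, "band_ghz": 2.4, "range": "best", "b_compatible": True},
    "802.11a": {"max_mbps": 54, "band_ghz": 5.0, "range": "short", "b_compatible": False},
    "802.11g": {"max_mbps": 54, "band_ghz": 2.4, "range": "best", "b_compatible": True},
}

def pick_standard(available, priority):
    """Choose among the standards the adapter supports.

    priority: 'throughput'    -> highest raw speed (e.g. streaming media)
              'range'         -> best signal reach
              'compatibility' -> must interoperate with 802.11b gear
    """
    candidates = {name: STANDARDS[name] for name in available}
    if priority == "compatibility":
        candidates = {n: s for n, s in candidates.items() if s["b_compatible"]}
    if priority == "range":
        candidates = {n: s for n, s in candidates.items() if s["range"] == "best"}
    # Break any remaining ties by raw speed.
    return max(candidates, key=lambda n: candidates[n]["max_mbps"])

print(pick_standard(["802.11a", "802.11g"], "throughput"))   # both 54 Mbps; '802.11a'
print(pick_standard(["802.11b", "802.11g"], "range"))        # '802.11g'
```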


Wireless Personal Area Networks 
These are networks that provide wireless connectivity over distances of up to 10m or so. At first this seems ridiculously small, but this range allows a computer to be connected wirelessly to a nearby printer, or a cell phone's hands-free headset to be connected wirelessly to the cell phone. The most talked about (and most hyped) technology is called Bluetooth. 


Personal area networks differ from WANs and WLANs in one important respect. In the WAN and WLAN cases, networks are set up first, and devices then use them. In the personal area network case, there is no independent pre-existing network: the participating devices establish an ad-hoc network when they are within range, and the network is dissolved when the devices pass out of range. If you have ever used Infrared (IR) to exchange data between laptops, you have done something similar. This idea of wireless devices discovering each other is a very important one, and it appears in many guises in the evolving wireless world.
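The "devices discover each other when in range" idea can be illustrated with a toy example. The 10 m figure comes from the text above; the device names and positions are invented for the example.

```python
import math

# Toy illustration of ad-hoc PAN behaviour: devices pair up only while they
# are within a short range of one another.

PAN_RANGE_M = 10.0

devices = {
    "phone":   (0.0, 0.0),
    "headset": (1.5, 2.0),
    "printer": (30.0, 4.0),
}

def in_range(a, b, positions=devices, max_range=PAN_RANGE_M):
    (ax, ay), (bx, by) = positions[a], positions[b]
    return math.hypot(ax - bx, ay - by) <= max_range

# Links exist only while devices are close; move a device and the ad-hoc
# network dissolves or re-forms accordingly.
print(in_range("phone", "headset"))   # True  (2.5 m apart)
print(in_range("phone", "printer"))   # False (about 30 m apart)
```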


PAN technologies add value to other wireless technologies, although they wouldn't be the primary driver for a wireless business solution. For example, a wireless LAN in a hospital may allow a doctor to see a patient's chart on a handheld device. If the doctor's handheld was also Bluetooth enabled, he could walk to within range of the nearest Bluetooth enabled printer and print the chart.


Recommendations
Now that we know about wireless technology, its benefits and its uses in different fields, as the IT & Network manager I recommend implementing a wireless network in our organization; it will be beneficial to the organization. A wireless LAN provides the same functions as a wired LAN but with far greater flexibility. It will be very convenient for our staff and students, who will be able to use the printer and the internet on their laptops, cell phones and PDAs anywhere on the campus.


Implementation
It is not as complicated as it looks compared to a wired LAN. All we need is an access point through which everyone accesses the internet, printer, scanner, databank and so on, plus a wireless network card for each PC; in the case of laptops, cell phones and PDAs the card is built in.



All we need to do is configure this access point and the network is created. Whenever a PC is on, it automatically detects the access point and the network is running; it really is that simple. We can also use this with Wi-Fi-enabled projectors for presentations or lecturing. Not only that, each PC can create its own network; if the PC is connected to the internet, it can even share its internet connection and other files on the network.




Managing and maintaining the Security of WLAN
Security need not be a major obstacle in a wireless network, because we can create different networks on the access point. For example, we can create three different networks on the access point: teacher, student and guest. The teacher and student networks will be password protected, with extra privileges on the teacher network, where staff will be able to use the network printer, scanner and databank. The guest network will not be password protected, so that everyone on the campus can use the internet.
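That three-network plan can be written down as plain data with a simple sanity check, as sketched below. The SSID names, passphrases and privilege labels are placeholders, and a real access point would be configured through its own management interface rather than a script like this.

```python
# The three-network plan as data, plus a basic consistency check.
# Names, passphrases and privilege labels are placeholders only.

NETWORKS = {
    "teacher": {"protected": True,  "passphrase": "change-me-teachers",
                "privileges": ["internet", "printer", "scanner", "databank"]},
    "student": {"protected": True,  "passphrase": "change-me-students",
                "privileges": ["internet", "printer"]},
    "guest":   {"protected": False, "passphrase": None,
                "privileges": ["internet"]},
}

def check_plan(networks):
    """Flag obvious mistakes: protected networks need a passphrase, and the
    open guest network should expose nothing beyond internet access."""
    problems = []
    for name, cfg in networks.items():
        if cfg["protected"] and not cfg["passphrase"]:
            problems.append(f"{name}: protected but no passphrase set")
        if not cfg["protected"] and set(cfg["privileges"]) - {"internet"}:
            problems.append(f"{name}: open network exposes internal resources")
    return problems

print(check_plan(NETWORKS) or "plan looks consistent")
```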



We can also use a WLAN manager to control our network security and to:
  • Identify rogue wireless devices
  • Know who is using our WLAN
  • Know what access points are connected to our WLAN
  • Monitor our WLAN devices
  • Monitor access point bandwidth utilization
  • Configure our WLAN access points
  • Enhance and enforce wireless LAN security
  • Proactively manage network problems before they impact the network
  • Identify network bottlenecks, reduce downtime, and improve network health and performance
  • Troubleshoot network problems
  • Capture and decode wireless traffic for testing and troubleshooting
  • Upgrade firmware, schedule upgrades, and audit them
  • Enforce WLAN policy
  • Restrict websites
  • Allocate bandwidth for each network or user on the network
  • View detailed reports of the IP activity that has taken place while connected to the network


Advantages   
·         24-hour access to the internet, even if the server PC is off
·         No messy wires everywhere in the floor or ceiling
·         It is easier to add or move workstations.
·         It is easier to provide connectivity in areas where it is difficult to lay cable.
·         Installation is fast and easy, and it can eliminate the need to pull cable through walls and ceilings.
·         Access to the network can be from anywhere within range of an access point.
·         Portable or semi permanent buildings can be connected using a WLAN.
·         Although the initial investment required for WLAN hardware can be similar to the cost of wired LAN hardware, installation expenses can be significantly lower.
·         When a facility is located on more than one site (such as on two sides of a road), a directional antenna can be used to avoid digging trenches under roads to connect the sites.
·         In historic buildings where traditional cabling would compromise the façade, a WLAN can avoid the need to drill holes in walls.
·         Long-term cost benefits can be found in dynamic environments requiring frequent moves and changes.
 
