Sunday, March 29, 2009

Sniper Rifle




Sniper rifle
From Wikipedia, the free encyclopedia
The 7.62x51mm M40, United States Marine Corps standard-issue sniper rifle.
The Accuracy International Arctic Warfare series of sniper rifles is standard issue in the armies of many countries, including those of Britain and Germany (picture shows a rifle of the German Army).
"Alex" - the new 7.62x51mm Polish bolt-action sniper rifle.

In military and law enforcement terminology, a sniper rifle is a rifle used to ensure accurate placement of bullets at longer ranges than other small arms. A typical sniper rifle is built for optimal levels of accuracy, fitted with a telescopic sight and chambered for a military centerfire cartridge. The term is often used in the media to describe any type of accurized firearm fitted with a telescopic sight that is employed against human targets.

The military role of sniper (a term derived from the snipe, a bird which was difficult to hunt and shoot) dates back to the turn of the 18th century, but the sniper rifle itself is a much more recent development. Advances in technology, specifically that of telescopic sights and more accurate manufacturing, allowed armies to equip specially-trained soldiers with rifles that would enable them to deliver precise shots over greater distances than regular infantry weapons. The rifle itself could be a standard rifle (at first, a bolt-action rifle); however, when fitted with a telescopic sight, it would become a sniper rifle.

History

During World War II, the (7.62x54mmR) Mosin-Nagant rifle mounted with a telescopic sight was commonly used as a sniper rifle by Russian snipers.

In the American Civil War, Confederate troops equipped with barrel-length three-power scopes mounted on the then-premium British Whitworth rifle were known to kill Union officers at ranges approaching 800 yards, an unheard-of distance at that time.[1][2][3][4]

The earliest sniper rifles were little more than conventional military or target rifles with long-range "peep sights" and Galilean "open telescope" front and rear sights, designed for use on the target range. Only from the beginning of World War I did specially adapted sniper rifles come to the fore. Germany deployed military-calibre hunting rifles with telescopic sights, which the British countered with Aldis, Winchester and Periscopic Prism Co. sights fitted by gunsmiths to regulation SMLE Mk III and Mk III* rifles. Australia's No. 1 Mk III* (HT) rifle was a later conversion of the SMLE, fitted with the Lithgow heavy target barrel at the end of World War II.

Typical World War II-era sniper rifles were generally standard issue battle rifles, hand-picked for accuracy, with a 2.5x or 3x telescopic sight and cheek-rest fitted, with the bolt turned down if necessary to allow operation with the scope affixed. By the end of the war, forces on all sides had specially trained soldiers equipped with sniper rifles, and they have played an increasingly important role in military operations ever since.

TERRORIST




Terrorism


Terrorism, according to the Oxford English Dictionary is "A policy intended to strike with terror those against whom it is adopted; the employment of methods of intimidation; the fact of terrorizing or condition of being terrorized."[1] At present, there is no internationally agreed upon definition of terrorism.[2][3] Common definitions of terrorism refer only to those acts which (1) are intended to create fear (terror), (2) are perpetrated for an ideological goal (as opposed to a materialistic goal or a lone attack), and (3) deliberately target (or disregard the safety of) non-combatants. Some definitions also include acts of unlawful violence or unconventional warfare.

A person who practices terrorism is a terrorist. Acts of terrorism are criminal acts according to United Nations Security Council Resolution 1373 and the domestic jurisprudence of almost all nations.

The word “terrorism” is politically and emotionally charged,[4] and this greatly compounds the difficulty of providing a precise definition. A 1988 study by the United States Army found that over 100 definitions of the word “terrorism” have been used.[5] The concept of terrorism is itself controversial because it is often used by states to delegitimize political or foreign opponents, and potentially legitimize the state's own use of terror against them.

The history of terrorist organizations suggests that they do not practice terrorism only for its political effectiveness; individual terrorists are also motivated by a desire for social solidarity with other members.[6]

Terrorism has been practiced by a broad array of political organizations for furthering their objectives. It has been practiced by both right-wing and left-wing political parties, nationalistic groups, religious groups, revolutionaries, and ruling governments.

Origin of term
Main article: Definition of terrorism
See also: State terrorism

"Terror" comes from a Latin word meaning "to frighten". The terror cimbricus was a panic and state of emergency in Rome in response to the approach of warriors of the Cimbri tribe in 105 BC. The Jacobins cited this precedent when imposing a Reign of Terror during the French Revolution. After the Jacobins lost power, the word "terrorist" became a term of abuse. Although the Reign of Terror was imposed by a government, in modern times "terrorism" usually refers to the killing of innocent people by a private group in such a way as to create a media spectacle. This meaning can be traced back to Sergey Nechayev, who described himself as a "terrorist".[8] Nechayev founded the Russian terrorist group "People's Retribution" (Народная расправа) in 1869.

In November 2004, a United Nations Security Council report described terrorism as any act "intended to cause death or serious bodily harm to civilians or non-combatants with the purpose of intimidating a population or compelling a government or an international organization to do or abstain from doing any act". (Note that this report does not constitute international law).[9]

In many countries, acts of terrorism are legally distinguished from criminal acts done for other purposes, and "terrorism" is defined by statute; see definition of terrorism for particular definitions. Common principles among legal definitions of terrorism provide an emerging consensus as to meaning and also foster cooperation between law enforcement personnel in different countries. Among these definitions there are several that do not recognize the possibility of legitimate use of violence by civilians against an invader in an occupied country and would thus label all resistance movements as terrorist groups. Others make a distinction between lawful and unlawful use of violence. Ultimately, the distinction is a political judgment.[10]

Friday, March 27, 2009

INTERNET



Internet
From Wikipedia, the free encyclopedia
Visualization of the various routes through a portion of the Internet

The Internet is a global network of interconnected computers, enabling users to share information along multiple channels. Typically, a computer that connects to the Internet can access information from a vast array of available servers and other computers by moving information from them to the computer's local memory. The same connection allows that computer to send information to servers on the network; that information is in turn accessed and potentially modified by a variety of other interconnected computers. A majority of widely accessible information on the Internet consists of inter-linked hypertext documents and other resources of the World Wide Web (WWW). Computer users typically manage sent and received information with web browsers; other software for interacting with computer networks includes specialized programs for electronic mail, online chat, file transfer and file sharing.

The movement of information in the Internet is achieved via a system of interconnected computer networks that share data by packet switching using the standardized Internet Protocol Suite (TCP/IP). It is a "network of networks" that consists of millions of private and public, academic, business, and government networks of local to global scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies.
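The packet-switching idea described above can be illustrated with a toy sketch. This is an illustration only, not how IP is actually implemented: a message is split into numbered packets that may arrive out of order, and the receiver reassembles them by sequence number.

```python
# Toy illustration of packet switching: break a message into numbered
# packets, deliver them out of order, and reassemble by sequence number.
def packetize(message, size):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message by sorting on sequence number."""
    return "".join(chunk for _, chunk in sorted(packets))

pkts = packetize("network of networks", 5)
pkts.reverse()  # simulate out-of-order delivery
print(reassemble(pkts))  # network of networks
```

Real networks add headers, routing, acknowledgements and retransmission on top of this basic idea, but the core principle, independent routing of small numbered pieces, is the same.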

History
Main article: History of the Internet

Creation

The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead.[2][3] ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO; he saw universal networking as a potential unifying human revolution.

Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

At the IPTO, Licklider got Lawrence Roberts to start a project to make a network, and Roberts based the technology on the work of Paul Baran,[4] who had written an exhaustive study for the U.S. Air Force that recommended packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first two nodes of what would become the ARPANET were interconnected between UCLA and SRI (later SRI International) in Menlo Park, California, on October 29, 1969. The ARPANET was one of the "eve" networks of today's Internet.

Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service. In the UK, this service, launched in 1978, was referred to as the International Packet Switched Service (IPSS). The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976.

X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period. Vinton Cerf and Robert Kahn developed the first description of the TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP that was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems.

The first TCP/IP-based wide-area network was operational by January 1, 1983 when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.

The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year, and the link was made in the summer of 1989. Other commercial e-mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISPs) were created: UUNET, PSINet and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, Compuserve and JANET, were interconnected with the growing Internet. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication network made growth easy, although the rapid growth of the Internet was due primarily to the availability of commercial routers from companies such as Cisco Systems, Proteon and Juniper, the availability of commercial Ethernet equipment for local-area networking, and the widespread implementation of TCP/IP on the UNIX operating system.

Growth
Graph of internet users per 100 inhabitants between 1997 and 2007 by International Telecommunication Union

Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan-European organisation for particle research, publicized the new World Wide Web project. The Web had been invented by English scientist Tim Berners-Lee in 1989.

An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997.[5] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[6]

Using various statistics, AMD estimated the number of Internet users to be 1.5 billion as of January 2009.[7]

University students' appreciation and contributions

New findings in the field of communications during the 1960s, 1970s and 1980s were quickly adopted by universities across North America.

Examples of early university Internet communities are Cleveland FreeNet, Blacksburg Electronic Village and NSTN in Nova Scotia.[8] Students took up the opportunity of free communications and saw this new phenomenon as a tool of liberation. Personal computers and the Internet would free them from corporations and governments (Nelson, Jennings, Stallman).

Graduate students played a huge part in the creation of ARPANET. In the 1960s, the Network Working Group, which did most of the design for ARPANET's protocols, was composed mainly of graduate students.

Today's Internet

Thursday, March 26, 2009

APOLLO NASA


Apollo program
From Wikipedia, the free encyclopedia
Apollo program insignia

The Apollo program was a human spaceflight program undertaken by NASA during the years 1961–1975 with the goal of conducting manned Moon landing missions. In 1961, President John F. Kennedy announced a goal of landing a man on the Moon by the end of the decade. The goal was accomplished on July 20, 1969, when astronauts Neil Armstrong and Buzz Aldrin landed on the Moon during the Apollo 11 mission, with Michael Collins orbiting above. Five other Apollo missions also landed astronauts on the Moon, the last one in 1972. These six Apollo spaceflights are the only times humans have landed on another celestial body.[1] The Apollo program, specifically the lunar landings, is often cited as the greatest achievement in human history.[2][3]

Apollo was the third human spaceflight program undertaken by NASA, the space agency of the United States. It used Apollo spacecraft and Saturn launch vehicles, which were later used for the Skylab program and the joint American-Soviet Apollo-Soyuz Test Project. These later programs are thus often considered to be part of the overall Apollo program.

The goal of the program, as articulated by President Kennedy, was accomplished with only two major failures. The first failure resulted in the deaths of three astronauts, Gus Grissom, Ed White and Roger Chaffee, in the Apollo 1 launchpad fire. The second was an in-space explosion on Apollo 13, which badly damaged the spacecraft on the moonward leg of its journey. The three astronauts aboard narrowly escaped with their lives, thanks to the efforts of flight controllers, project engineers, backup crew members and the skills of the astronauts themselves.

The program set major milestones in the history of human spaceflight. This program stands alone in sending manned missions beyond low Earth orbit. Apollo 8 was the first manned spacecraft to orbit another celestial body, while Apollo 17 marked the last moonwalk and the last manned mission beyond low Earth orbit. The major space exploration milestones leading up to the moon landing include:

* first sub-orbital flight (1942)
* first orbital flight (1957)
* first unmanned lunar mission (1959)
* first man in space (1961)
* first manned lunar mission (1968)
* first manned lunar landing (1969)

The program spurred advances in many areas of technology peripheral to rocketry and manned spaceflight. These include major contributions in the fields of avionics, telecommunications, and computers. The program sparked interest in many fields of engineering, including pioneering work using statistical methods to study the reliability of complex systems made from component parts. The physical facilities and machines which were necessary components of the manned spaceflight program remain as landmarks of civil, mechanical, and electrical engineering. Many objects and artifacts from the program are on display at various locations throughout the world, notably at the Smithsonian's Air and Space Museums.

Boosters

When the team of engineers led by Wernher von Braun began planning for the Apollo program, it was not yet clear what sort of mission their rocket boosters would have to support. Direct ascent would require a booster, the planned Nova rocket, which could lift a very large payload. NASA's decision in favor of lunar orbit rendezvous re-oriented the work of Marshall Space Flight Center towards the development of the Saturn IB and Saturn V. While these were less powerful than the Nova would have been, the Saturn V was still much more powerful than any booster developed before—or since.

Saturn V
Main article: Saturn V
The Saturn V rocket launched Apollo 11 and its crew on their journey to the Moon, July 16, 1969.
Saturn V diagram from the Apollo 6 press kit

The Saturn V consisted of three stages and an Instrument Unit which contained the booster's guidance system. The first stage, the S-IC, was powered by five F-1 engines arranged in a cross pattern, which produced a total of 7.5 million pounds of thrust. They burned for only 2.5 minutes, accelerating the spacecraft to a speed of approximately 6,000 miles per hour (2.68 km/s).[22] During development, the F-1 engines were plagued by combustion instability—if the combustion of propellants was not uniform across the flame front of an engine, pressure waves could build which would cause the engine to destroy itself. The problem was solved in the end through trial and error, fine-tuning the engines through numerous tests so that even small charges set off inside the engine would not induce instability.[23]

The second stage, the S-II, used five J-2 engines. They burned for approximately six minutes, taking the spacecraft to a speed of 15,300 miles per hour (6.84 km/s) and an altitude of about 115 miles (185 km).[24] At this point the S-IVB third stage took over, putting the spacecraft into orbit. Its one J-2 engine was designed to be restarted in order to make the translunar injection burn.[25]
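As a quick arithmetic check on the figures above, the mile-per-hour values convert to kilometres per second as follows (a minimal sketch, using the standard 1,609.344-metre statute mile):

```python
# Convert the quoted burnout speeds from miles per hour to km/s.
MPH_TO_MS = 1609.344 / 3600.0  # metres per second per mph

def mph_to_kms(mph):
    """Convert a speed in miles per hour to kilometres per second."""
    return mph * MPH_TO_MS / 1000.0

# First-stage burnout (~6,000 mph) and second-stage burnout (~15,300 mph)
print(round(mph_to_kms(6000), 2))   # 2.68, matching the quoted 2.68 km/s
print(round(mph_to_kms(15300), 2))  # 6.84, matching the quoted 6.84 km/s
```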

Saturn IB
Main article: Saturn IB

The Saturn IB was an upgraded version of the earlier Saturn I. It consisted of a first stage powered by eight H-1 engines and a second S-IVB stage which was identical to the Saturn V's third stage. The Saturn IB had only 1.6 million pounds of thrust in its first stage—compared to 7.5 million pounds for the Saturn V—but was capable of putting a command and lunar module into Earth orbit.[26] It was used in Apollo test missions and in both the Skylab program and the Apollo-Soyuz Test Project. In 1973 a refitted S-IVB stage, launched by a Saturn V, became the Skylab space station.

Global Warming

Global Warming

From Wikipedia, the free encyclopedia
For past climate change, see paleoclimatology and geologic temperature record.
Global mean surface temperature anomaly relative to 1961–1990

Mean surface temperature anomalies during the period 1999 to 2008 with respect to the average temperatures from 1940 to 1980

Global warming is the increase in the average temperature of the Earth's near-surface air and the oceans since the mid-twentieth century and its projected continuation. Global surface temperature increased 0.74 ± 0.18 °C (1.33 ± 0.32 °F) during the 100 years ending in 2005.[1][A] The Intergovernmental Panel on Climate Change (IPCC) concludes that anthropogenic greenhouse gases are responsible for most of the observed temperature increase since the middle of the twentieth century,[1] and that natural phenomena such as solar variation and volcanoes probably had a small warming effect from pre-industrial times to 1950 and a small cooling effect afterward.[2][3] These basic conclusions have been endorsed by more than 40 scientific societies and academies of science,[B] including all of the national academies of science of the major industrialized countries.[4][5]
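The quoted conversion can be checked with a small sketch. Because the figure is a temperature anomaly (a difference), only the 9/5 scale factor applies, not the +32 offset used when converting absolute temperatures:

```python
# A temperature *difference* in Celsius converts to Fahrenheit with the
# 9/5 scale factor alone; the +32 offset cancels out when subtracting.
def delta_c_to_f(delta_c):
    """Convert a temperature difference from degrees C to degrees F."""
    return delta_c * 9.0 / 5.0

print(round(delta_c_to_f(0.74), 2))  # 1.33, matching the quoted 1.33 deg F
print(round(delta_c_to_f(0.18), 2))  # 0.32, matching the quoted 0.32 deg F
```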

Climate model projections summarized in the latest IPCC report indicate that global surface temperature will likely rise a further 1.1 to 6.4 °C (2.0 to 11.5 °F) during the twenty-first century.[1] The uncertainty in this estimate arises from the use of models with differing climate sensitivity, and the use of differing estimates of future greenhouse gas emissions. Some other uncertainties include how warming and related changes will vary from region to region around the globe. Although most studies focus on the period up to 2100, warming is expected to continue beyond 2100 (even if emissions stop) because of the large heat capacity of the oceans and the lifespan of CO2 in the atmosphere.[6][7]

Increasing global temperature will cause sea levels to rise and will change the amount and pattern of precipitation, likely including an expansion of the subtropical desert regions.[8] Other likely effects include Arctic shrinkage and resulting Arctic methane release, shrinkage of the Amazon rainforest and boreal forests, increases in the intensity of extreme weather events, changes in agricultural yields, modifications of trade routes, glacier retreat, species extinctions and changes in the ranges of disease vectors.

Political and public debate continues regarding the appropriate response to global warming. The available options are mitigation to reduce further emissions; adaptation to reduce the damage caused by warming; and, more speculatively, geoengineering to reverse global warming. Most national governments have signed and ratified the Kyoto Protocol aimed at reducing greenhouse gas emissions. A successor to the first commitment period of the Kyoto protocol is expected to be agreed at the COP15 talks in December 2009.

Thunderbird 2 Features

Message Tagging

Thunderbird 2 allows you to “tag” messages with descriptors such as “To Do” or “Done” or even create your own tags that are specific to your needs.


Saved Searches

Do you find yourself searching for the same subject or message content over and over? Thunderbird 2 saves you time by allowing you to store this search as a folder. Rerunning the search is just a matter of clicking on the saved search folder in the folder pane.

Advanced Folder Views

Thunderbird 2 offers a variety of ways for you to organize and display your folders, whether by favorites, recently viewed or folders containing unread messages.

Stay Informed

Thunderbird 2 has been updated to provide more informative and relevant message alerts containing sender, subject and message text for newly arrived messages.

Easy Access to Popular Web Mail Services

Thunderbird 2 makes it even easier to integrate and use various Web mail accounts from one inbox. Gmail and .Mac users can access their accounts in Thunderbird by simply providing their user names and passwords.



Customize Your Email Experience

Thunderbird allows you to customize your email experience to suit your specific needs, whether that means changing how you search and find messages or listening to music right from your inbox.

Your Mail, Your Way

Thunderbird users can increase Thunderbird’s functionality and appearance using hundreds of add-ons. A Thunderbird add-on can let you place voice over IP calls, listen to music, manage contacts, and keep track of birth dates all from your inbox. You can even change the appearance of Thunderbird to suit your tastes.

Message Templates

Thunderbird 2 allows you to easily set up message templates to save you time – especially if you have to send the same mail message repeatedly.

Add-ons Manager for Extensions and Themes

The new Add-ons Manager improves the user interface for managing extensions and themes, making it even easier for you to customize Thunderbird 2.



Secure and Protect Your Mail

Thunderbird’s security and privacy measures ensure that your communications and identity remain safe.

Cutting Out the Junk

Thunderbird's popular junk mail tools have been updated to stay ahead of spam. Each email you receive passes through Thunderbird's leading-edge junk mail filters. Each time you mark messages as spam, Thunderbird “learns” and improves its filtering so you can spend more time reading the mail that matters. Thunderbird can also use your mail provider's spam filters to keep junk mail out of your inbox.
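The "learning" behaviour described above is commonly implemented with a Bayesian text classifier. The sketch below is a generic naive Bayes filter, not Thunderbird's actual code: word counts from messages the user marks as junk or not-junk drive a per-message spam score.

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Minimal naive Bayes spam filter trained on user-labelled messages."""

    def __init__(self):
        self.spam = Counter()  # word counts seen in spam
        self.ham = Counter()   # word counts seen in legitimate mail
        self.n_spam = 0
        self.n_ham = 0

    def train(self, words, is_spam):
        """Record the words of one message under its label."""
        if is_spam:
            self.spam.update(words)
            self.n_spam += 1
        else:
            self.ham.update(words)
            self.n_ham += 1

    def spam_probability(self, words):
        """Score a message; values above 0.5 look like spam."""
        # Log-odds of the class priors, with add-one smoothing throughout.
        score = math.log((self.n_spam + 1) / (self.n_ham + 1))
        for w in words:
            p_w_spam = (self.spam[w] + 1) / (sum(self.spam.values()) + 2)
            p_w_ham = (self.ham[w] + 1) / (sum(self.ham.values()) + 2)
            score += math.log(p_w_spam / p_w_ham)
        return 1.0 / (1.0 + math.exp(-score))  # convert log-odds to probability

f = NaiveBayesFilter()
f.train(["cheap", "pills", "offer"], True)
f.train(["meeting", "agenda", "notes"], False)
print(f.spam_probability(["cheap", "offer"]) > 0.5)    # True
print(f.spam_probability(["meeting", "notes"]) > 0.5)  # False
```

Each time the user corrects a classification, the counters shift and future scores follow, which is the sense in which such a filter "learns".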

Robust Privacy

Thunderbird 2 offers improved support for user privacy and remote image protection. To ensure a user’s privacy, Thunderbird 2 automatically blocks remote images in email messages.


Phishing Protection

Thunderbird indicates when a message is a potential phishing attempt, protecting you from email scams that try to trick users into handing over personal and confidential information. As a second line of defense, Thunderbird warns you when you click on a link which appears to be taking you to a different Web site than the one indicated by the URL in the message.
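The second-line check described above can be sketched roughly as follows. This is a hypothetical simplification, not Thunderbird's actual code: compare the host name shown in a link's visible text with the host the link actually targets, and flag a mismatch as a possible scam.

```python
from urllib.parse import urlparse

def looks_like_phish(visible_text, actual_href):
    """Flag a link whose displayed host differs from its real target host."""
    # Visible text in mail often omits the scheme; add one so urlparse
    # can extract a host name from it.
    shown = urlparse(visible_text if "://" in visible_text
                     else "http://" + visible_text).hostname
    real = urlparse(actual_href).hostname
    return shown is not None and real is not None and shown != real

# Hypothetical example hosts, chosen only for illustration.
print(looks_like_phish("www.mybank.com", "http://evil.example.net/login"))  # True
print(looks_like_phish("www.mybank.com", "http://www.mybank.com/login"))    # False
```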


Automated Update

Thunderbird’s update system checks to see if you’re running the latest version, and notifies you when a security update is available. These security updates are small (usually 200KB - 700KB), giving you only what you need and making the security update quick to download and install. The automated update system provides updates for Thunderbird on Windows, Mac OS X, and Linux in over 30 different languages.

Open Source

At the heart of Thunderbird is an open source development process driven by thousands of passionate, experienced developers and security experts spread all over the world. Our openness and active community of experts helps to ensure our products are more secure and quickly updated, while also enabling us to take advantage of the best third party security scanning and evaluation tools to further bolster overall security.

Tuesday, March 24, 2009

Drum

Drum

From Wikipedia, the free encyclopedia

Bass drum made from wood, rope, and cowskin

The drum is a member of the percussion group, technically classified as a membranophone.[1] Drums consist of at least one membrane, called a drumhead or drum skin, that is stretched over a shell and struck, either directly with parts of a player's body, or with some sort of implement such as a drumstick, to produce sound. Other techniques have been used to cause drums to make sound, such as the "thumb roll". Drums are the world's oldest and most ubiquitous musical instruments, and the basic design has remained virtually unchanged for thousands of years.[1] Most drums are considered "untuned instruments"; however, many modern musicians are beginning to tune drums to songs, and Terry Bozzio has constructed a kit using diatonic and chromatically tuned drums. A few, such as timpani, are always tuned to a certain pitch. Often, several drums are arranged together to create a drum kit that can be played by one musician with all four limbs.[2]

Sunday, March 22, 2009

Playstation Portable PSP 2000

Playstation portable PSP 2000


By John P. Falcone, CNET.com

The good: Lighter, slimmer, and sleeker update of the original PSP; AV output for video and game playback on TVs; improved load times for games; retains all of the impressive media and online features of the original PSP; deep lineup of great game titles that offer better graphics than Nintendo DS games.

The bad: Despite improvements, problems and annoyances remain: UMD load times still poky compared to Flash-based DS games; volume levels still less than optimal; limited gameplay options via video output; USB charging option is cumbersome; screen is still too reflective and a magnet for fingerprints; subtle redesign missed the opportunity to add even more features.

History Mobile Phone

History

A 1991 GSM mobile phone

In 1908, U.S. Patent 887,357 for a wireless telephone was issued to Nathan B. Stubblefield of Murray, Kentucky. He applied this patent to "cave radio" telephones and not directly to cellular telephony as the term is currently understood.[2] Cells for mobile phone base stations were invented in 1947 by Bell Labs engineers at AT&T and further developed by Bell Labs during the 1960s. Radiophones have a long and varied history going back to Reginald Fessenden's invention and shore-to-ship demonstration of radio telephony, through the Second World War with military use of radio telephony links and civil services in the 1950s, while hand-held cellular radio devices have been available since 1973. A patent for the first wireless phone as we know it today was issued in US Patent Number 3,449,750 to George Sweigert of Euclid, Ohio on June 10, 1969.

In 1945, the zero generation (0G) of mobile telephones was introduced. 0G mobile phones, such as the Mobile Telephone Service, were not cellular, and so did not feature "handover" from one base station to the next or reuse of radio frequency channels.[citation needed] Like other technologies of the time, they relied on a single, powerful base station covering a wide area, and each telephone would effectively monopolize a channel over that whole area while in use. The concepts of frequency reuse and handoff, as well as a number of other concepts that formed the basis of modern cell phone technology, were first described in U.S. Patent 4,152,647, issued May 1, 1979 to Charles A. Gladden and Martin H. Parelman, both of Las Vegas, Nevada, and assigned by them to the United States Government.

This was the first embodiment of all the concepts that formed the basis of the next major step in mobile telephony, the analog cellular telephone. The concepts covered in this patent (cited in at least 34 other patents) were also later extended to several satellite communication systems, and the later upgrade of cellular systems to digital drew on this patent as well.

Martin Cooper, a Motorola researcher and executive, is widely considered the inventor of the first practical mobile phone for handheld use in a non-vehicle setting. Cooper is the inventor named on "Radio telephone system", filed on October 17, 1973 with the US Patent Office and later issued as US Patent 3,906,166.[3] Using a modern, if somewhat heavy, portable handset, Cooper made the first call on a handheld mobile phone on April 3, 1973 to a rival, Dr. Joel S. Engel of Bell Labs.[4]

The first commercial citywide cellular network was launched in Japan by NTT in 1979. Fully automatic cellular networks were first introduced in the early to mid 1980s (the 1G generation). The Nordic Mobile Telephone (NMT) system went online in Denmark, Finland, Norway and Sweden in 1981.[5]

Personal Handy-phone System mobiles and modems used in Japan around 1997-2003

In 1983, the Motorola DynaTAC became the first mobile phone approved by the FCC in the United States. In 1984, Bell Labs developed modern commercial cellular technology (based, to a large extent, on the Gladden and Parelman patent), which employed multiple, centrally controlled base stations (cell sites), each providing service to a small area (a cell). The cell sites were set up so that cells partially overlapped. In a cellular system, a signal between a base station (cell site) and a terminal (phone) need only be strong enough to reach between the two, so the same channel can be used simultaneously for separate conversations in different cells.

Cellular systems required several leaps of technology, including handover, which allowed a conversation to continue as a mobile phone traveled from cell to cell. This system included variable transmission power in both the base stations and the telephones (controlled by the base stations), which allowed range and cell size to vary. As the system expanded and neared capacity, the ability to reduce transmission power allowed new cells to be added, resulting in more, smaller cells and thus more capacity. The evidence of this growth can still be seen in the many older, tall cell site towers with no antennae on the upper parts of their towers. These sites originally created large cells, and so had their antennae mounted atop high towers; the towers were designed so that as the system expanded—and cell sizes shrank—the antennae could be lowered on their original masts to reduce range.

The first "modern" network technology, using digital 2G (second generation) cellular technology, was launched by Radiolinja (now part of Elisa Group) in 1991 in Finland on the GSM standard. This also marked the introduction of competition in mobile telecoms, as Radiolinja challenged the incumbent Telecom Finland (now part of TeliaSonera), which ran a 1G NMT network.

The first data services appeared on mobile phones starting with person-to-person SMS text messaging in Finland in 1993. The first trial payments using a mobile phone, to pay at a Coca-Cola vending machine, were made in Finland in 1998. The first commercial payments were for mobile parking, trialled in Sweden but first commercially launched in Norway in 1999. The first commercial payment system to mimic banks and credit cards was launched in the Philippines in 1999, simultaneously by mobile operators Globe and Smart. The first content sold to mobile phones was the ringing tone, first launched in 1998 in Finland. The first full internet service on mobile phones was i-Mode, introduced by NTT DoCoMo in Japan in 1999.

In 2001 the first commercial launch of 3G (Third Generation) was again in Japan by NTT DoCoMo on the WCDMA standard.[6]

Until the early 1990s, most mobile phones were too large to be carried in a jacket pocket, so they were typically installed in vehicles as car phones. With the miniaturization of digital components and the development of more sophisticated batteries, mobile phones have become smaller and lighter.

Saturday, March 21, 2009

History Motorcycle

History

Replica of the Daimler-Maybach Reitwagen
A 1913 Fabrique National in-line four with shaft drive from Belgium
A pre-war Polish Sokół 1000

Arguably, the first motorcycle was designed and built by the German inventors Gottlieb Daimler and Wilhelm Maybach in Bad Cannstatt (since 1905 a city district of Stuttgart) in 1885.[3] The first petroleum-powered vehicle was essentially a motorised bicycle, although the inventors called their invention the Reitwagen ("riding car"). However, if a two-wheeled vehicle with steam propulsion is considered a motorcycle, then the first one may have been American. One such machine was demonstrated at fairs and circuses in the eastern U.S. in 1867, built by Sylvester Howard Roper of Roxbury, Massachusetts.[3]

In 1894, Hildebrand & Wolfmüller became the first motorcycle available for purchase.[4] In the early period of motorcycle history, many producers of bicycles adapted their designs to accommodate the new internal combustion engine. As the engines became more powerful and designs outgrew the bicycle origins, the number of motorcycle producers increased.

An historic 1941 Crocker

Until the First World War, the largest motorcycle manufacturer in the world was Indian, producing over 20,000 bikes per year. By 1920, this honour went to Harley-Davidson, with their motorcycles being sold by dealers in 67 countries. In 1928, DKW took over as the largest manufacturer.

After the Second World War, the BSA Group became the largest producer of motorcycles in the world, producing up to 75,000 bikes per year in the 1950s. The German company NSU Motorenwerke AG held the position of largest manufacturer from 1955 until the 1970s.

NSU Sportmax streamlined motorcycle, 250 cc class winner of the 1955 Grand Prix season

In the 1950s, streamlining began to play an increasing part in the development of racing motorcycles and held out the possibility of radical changes to motorcycle design. NSU and Moto-Guzzi were in the vanguard of this development, both producing very radical designs well ahead of their time.[5] NSU produced the most advanced design, but after the deaths of four NSU riders in the 1954–1956 seasons, the company abandoned further development and quit Grand Prix motorcycle racing.[6] Moto-Guzzi produced competitive race machines, and by 1957 nearly all the Grand Prix races were being won by streamlined machines.[citation needed]

From the 1960s through the 1990s, small two-stroke motorcycles were popular worldwide, partly as a result of East German Walter Kaaden's engine work in the 1950s.[7]

Today, the Japanese manufacturers Honda, Kawasaki, Suzuki, and Yamaha dominate the motorcycle industry, although Harley-Davidson still maintains a high degree of popularity in the United States. Apart from these high-capacity motorcycles, there is a very large market for low-capacity (less than 300 cc) motorcycles, mostly concentrated in Asian and African countries. This area is dominated mostly by Indian companies, with Hero Honda being a large manufacturer of two-wheelers; its Splendor model, for example, has sold more than 8.5 million units to date.[8] The highest-selling motorcycle of all time is the Honda Super Cub, which has sold more than 60 million units and is still in production after 50 years.[9]

Recent years have also seen a resurgence in the popularity of several other brands sold in the U.S. market, including BMW, KTM, Triumph, Aprilia, Moto-Guzzi, MV Agusta and Ducati.

Outside of the U.S., these brands have enjoyed continued and sustained success, although Triumph, for example, has been reborn as a modern world-class manufacturer. In overall numbers, however, the Chinese currently manufacture and sell more motorcycles than any other country, and exports are rising.[citation needed]

Additionally, the small-capacity scooter is very popular through most of the world. The Piaggio group of Italy, for example, is one of the world's largest producers of two-wheeled vehicles.

History Amerika Plads


History

An attractive area with a fascinating history

At the end of the 1800s, the area around Dampfærgevej and Amerikakaj in the Port of Copenhagen was established as a free port docking area for the import and export of grain, commodities and other goods. Following the construction of the old warehouses along the quay, this section of the port quickly developed into a vibrant business environment. Later, it became the port of call for the large passenger ships sailing to and from New York. Hence the name Amerikakaj (the American quay).

The Port of Copenhagen Authority established harbour basins and quays as a geographically integrated part of the Port of Copenhagen, but the Free Port was operated by the independent public limited company, Københavns Frihavns-Aktieselskab, incorporated on 25 April 1891.


The Free Port of Copenhagen, 1893


The establishment of the Free Port represented an engineering feat fully comparable with the establishment of the Copenhagen-Malmö Fixed Link and other major contemporary civil engineering projects. The Free Port was officially inaugurated on 8 November 1894, less than four years after the decision to establish it was adopted.

The harbour basins had a water depth of up to nine metres, which allowed even large ocean-going vessels to call at the port. After a difficult start, the traffic volume increased following the turn of the century, and the Free Port has since been enlarged several times.

As a mark of respect for history, we have christened one of the port's largest development areas Amerika Plads.

Biography of Bill Gates

Bill Gates Biography (William Henry Gates III): Microsoft Founder
Famous for : Being the richest man in the world, a cofounder of the software company Microsoft, and for being one of the world's most generous philanthropists.
Gates details : Born - USA October 28, 1955 Lives - United States of America

More Gates : Buffett Gives to Gates Foundation - Person of the Year 2005 - Melinda Gates - Richest Man in the World
Bill Gates is one of the most influential people in the world. He is cofounder of one of the most recognized brands in the computer industry, with nearly every desktop computer using at least one software program from Microsoft. According to Forbes magazine, Bill Gates is the richest man in the world and has held the number one position for many years.

Gates was born and grew up in Seattle, Washington USA. His father, William H. Gates II was a Seattle attorney and his mother, Mary Maxwell Gates was a school teacher and chairperson of the United Way charity. Gates and his two sisters had a comfortable upbringing, with Gates being able to attend the exclusive secondary "Lakeside School".

Bill Gates started studying at Harvard University in 1973, where he met Paul Allen. Gates and Allen worked on a version of the programming language BASIC for the MITS Altair (the first microcomputer available). Gates did not graduate from Harvard University, leaving in his junior year to start what was to become the largest computer software company in the world: Microsoft Corporation.

Bill Gates and the Microsoft Corporation
"To enable people and businesses throughout the world to realize their full potential." Microsoft Mission Statement
After dropping out of Harvard, Bill Gates and his partner Paul Allen set about revolutionizing the computer industry. Gates believed there should be a computer on every office desk and in every home.

In 1975 the company Micro-soft was formed, which was an abbreviation of microcomputer software. It soon became simply "Microsoft"® and went on to completely change the way people use computers.

Microsoft helped to make the computer easier to use with its developed and purchased software, and made it a commercial success. The success of Microsoft began with the MS-DOS computer operating system that Gates licensed to IBM. Gates also set about protecting the royalties that he could acquire from computer software by aggressively fighting against all forms of software piracy, effectively creating the retail software market that exists today. This move was quite controversial at the time, as it was the freedom of sharing that had produced much of the innovation and many of the advances in the newly forming software industry. But it was this stand against software piracy that was to be central to the great commercial success that Microsoft went on to achieve.

Bill Gates retired as Microsoft CEO in 2008.

Bill Gates Criticism
With his great success in the computer software industry also came many criticisms. With his ambitious and aggressive business philosophy, Gates or his Microsoft lawyers have been in and out of courtrooms fighting legal battles almost since Microsoft began.

Microsoft has sought to dominate every market it enters, whether through acquisition, aggressive business tactics, or a combination of the two. Many of the largest technology companies have fought legal battles against Microsoft, including Apple Computer, Netscape, Opera, WordPerfect, and Sun Microsystems.

Bill Gates Net Worth
With an estimated wealth of $53 billion in 2006, Bill Gates is the richest man in the world, and he should be used to the number one spot by now, having held it from the mid-nineties until now. The famous investor Warren Buffett is gaining on Gates, though, with an estimated $46 billion in 2006.

Microsoft hasn't just made Bill Gates very wealthy, though. According to Forbes business magazine, in 2004 Paul Allen, Microsoft's cofounder, was the 5th richest man in the world with an estimated $21 billion, while Bill Gates' long-time friend and Microsoft CEO Steve Ballmer was the 19th richest man in the world at $12.4 billion.

See the Bill Gates Net Worth page for more information.

Bill Gates Philanthropy
Being the richest man in the world has also enabled Gates to create one of the world's largest charitable foundations. The Bill and Melinda Gates Foundation has an endowment of more than $28 billion, with donations totaling more than $1 billion every year. The foundation was formed in 2000 after merging the "Gates Learning Foundation" and "William H. Gates Foundation". Their aim is to "bring innovations in health and learning to the global community".

Bill Gates continues to play a very active role in the workings of Microsoft, but has handed the position of CEO over to Steve Ballmer. Gates now holds the positions of "Chairman" and "Chief Software Architect". He has stated that he plans to take on fewer responsibilities at Microsoft and will eventually devote all his time to the Bill & Melinda Gates Foundation.

In 2006, the second richest man in the world, Warren Buffett, pledged to give much of his vast fortune to the Bill and Melinda Gates Foundation.

Bill Gates Receives a KBE
In March 2005, William H. Gates received an "honorary" knighthood from Queen Elizabeth II.
Gates was bestowed with the KBE Order (Knight Commander of the Most Excellent Order of the British Empire) for his services in reducing poverty and improving health in the developing countries of the world.
After the privately held ceremony at Buckingham Palace with Her Majesty Queen Elizabeth II, Gates commented on the recognition:
"I am humbled and delighted. I am particularly pleased that this honor helps recognize the real heroes our foundation (Bill and Melinda Gates Foundation) supports to improve health in poor countries. Their incredible work is helping ensure that one day all people, no matter where they are born, will have the same opportunity for a healthy life, and I'm grateful to share this honor with them."

The KBE is the second highest rank of the Order of the British Empire, but Gates's knighthood is only honorary, as only British and Commonwealth citizens receive the full Order. This means that Gates does not become Sir Bill Gates.


Bill Gates lives near Lake Washington with his wife Melinda French Gates and their three children. Interests of Gates include reading, golf and playing bridge.

COMPUTER

Computer

From Wikipedia, the free encyclopedia


A computer is a machine that manipulates data according to a list of instructions.

The first devices that resemble modern computers date to the mid-20th century (1940–1945), although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers (PC).[1] Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space.[2] Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices—for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.

The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks given enough time and storage capacity.

History of computing

The Jacquard loom was one of the first programmable devices.

It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.

The history of the modern computer begins with two separate technologies—that of automated calculation and that of programmability.

Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when.[3] This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[4] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour,[5][6] and five robotic musicians who play music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day in order to account for the changing lengths of day and night throughout the year.[4]

The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine".[7] Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November of 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication including complex arithmetic and programmability.[8]

Defining characteristics of some early digital computers of the 1940s (in the history of computing hardware):

| Name | First operational | Numeral system | Computing mechanism | Programming | Turing complete |
|---|---|---|---|---|---|
| Zuse Z3 (Germany) | May 1941 | Binary | Electro-mechanical | Program-controlled by punched film stock | Yes (1998) |
| Atanasoff–Berry Computer (US) | 1942 | Binary | Electronic | Not programmable—single purpose | No |
| Colossus Mark 1 (UK) | February 1944 | Binary | Electronic | Program-controlled by patch cables and switches | No |
| Harvard Mark I – IBM ASCC (US) | May 1944 | Decimal | Electro-mechanical | Program-controlled by 24-channel punched paper tape (but no conditional branch) | No |
| Colossus Mark 2 (UK) | June 1944 | Binary | Electronic | Program-controlled by patch cables and switches | No |
| ENIAC (US) | July 1946 | Decimal | Electronic | Program-controlled by patch cables and switches | Yes |
| Manchester Small-Scale Experimental Machine (UK) | June 1948 | Binary | Electronic | Stored-program in Williams cathode ray tube memory | Yes |
| Modified ENIAC (US) | September 1948 | Decimal | Electronic | Program-controlled by patch cables and switches plus a primitive read-only stored programming mechanism using the Function Tables as program ROM | Yes |
| EDSAC (UK) | May 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes |
| Manchester Mark 1 (UK) | October 1949 | Binary | Electronic | Stored-program in Williams cathode ray tube memory and magnetic drum memory | Yes |
| CSIRAC (Australia) | November 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes |

A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:

EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or "Baby"), while the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper—EDVAC—was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Microprocessors are miniaturized devices that often implement stored program CPUs.

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953.[10] In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the 1980s, computers became sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.

Modern smartphones are, in a technical sense, fully programmable computers in their own right, and as of 2009 they may well be the most common form of such computers in existence.

Stored program architecture

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time—with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:

            mov     #0,sum          ; set sum to 0
            mov     #1,num          ; set num to 1
    loop:   add     num,sum         ; add num to sum
            add     #1,num          ; add 1 to num
            cmp     num,#1000       ; compare num to 1000
            ble     loop            ; if num <= 1000, go back to 'loop'
            halt                    ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[11]

However, computers cannot "think" for themselves in the sense that they only solve problems in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that instead of actually adding up all the numbers one can simply use the equation

1 + 2 + 3 + ... + n = n(n + 1)/2

and arrive at the correct answer (500,500) with little work.[12] In other words, a computer programmed to add up the numbers one by one as in the example above would do exactly that without regard to efficiency or alternative solutions.
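To make the contrast concrete, here is a small sketch in Python of the two approaches: the instruction-by-instruction loop described above and the closed-form shortcut. The function names are illustrative, not from any particular library.

```python
def sum_loop(n):
    """Add the numbers 1..n one by one, as the looping program above does."""
    total = 0
    for num in range(1, n + 1):
        total += num
    return total

def sum_formula(n):
    """Use the identity 1 + 2 + ... + n = n(n+1)/2 in a single step."""
    return n * (n + 1) // 2

print(sum_loop(1000))     # 500500
print(sum_formula(1000))  # 500500
```

Both calls return 500,500 for n = 1000, but the loop performs a thousand additions while the formula performs one multiplication and one division.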

Programs

A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) = Y + W(1)" and is labelled "PROJ039" for identification purposes.

In practical terms, a computer program may range from just a few instructions to many millions of instructions, as in a program for a word processor or a web browser. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and will rarely make a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write, so the probability that the entire program has been written without error is very low.

Errors in computer programs are called "bugs". Bugs may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program to "hang" - become unresponsive to input such as mouse clicks or keystrokes - or to completely fail or "crash". Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an "exploit" - code designed to take advantage of a bug and disrupt a program's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[13]

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from—each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
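The stored-program idea can be illustrated with a toy machine in Python. The opcodes and memory layout below are invented for illustration; the point is that the program and the data it operates on sit in one and the same memory, so the program is itself just a list of numbers.

```python
# Invented opcodes: 1 = LOAD addr, 2 = ADD addr, 3 = STORE addr, 0 = HALT.
memory = [
    1, 10,    # LOAD  memory[10] into the accumulator
    2, 11,    # ADD   memory[11] to the accumulator
    3, 12,    # STORE the accumulator into memory[12]
    0, 0,     # HALT
    0, 0,     # (unused)
    5, 7, 0,  # data: memory[10] = 5, memory[11] = 7, memory[12] = result
]

pc, acc = 0, 0
while True:
    opcode, operand = memory[pc], memory[pc + 1]
    pc += 2
    if opcode == 0:
        break
    elif opcode == 1:
        acc = memory[operand]
    elif opcode == 2:
        acc += memory[operand]
    elif opcode == 3:
        memory[operand] = acc

print(memory[12])  # 12
```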

While it is possible to write computer programs as long lists of numbers (machine language) and this technique was used with many early computers,[14] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember—a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[15]
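A minimal sketch of what an assembler does is easy to write in Python. The mnemonics and opcode numbers here are invented; a real assembler for a real instruction set is considerably more involved (labels, addressing modes, and so on).

```python
# Invented mnemonic-to-opcode table for a hypothetical machine.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

def assemble(source):
    """Translate lines like 'ADD 11' into flat machine code: [2, 11, ...]."""
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        machine_code.extend([OPCODES[mnemonic], operand])
    return machine_code

program = """
LOAD 10
ADD 11
STORE 12
HALT
"""
print(assemble(program))  # [1, 10, 2, 11, 3, 12, 0, 0]
```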

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[16] Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has proved historically to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.

Example

A traffic light showing red.

Suppose a computer is being employed to drive a traffic signal at an intersection between two streets. The computer has the following five basic instructions.

  1. ON(Streetname, Color) Turns the light on Streetname with a specified Color on.
  2. OFF(Streetname, Color) Turns the light on Streetname with a specified Color off.
  3. WAIT(Seconds) Waits a specified number of seconds.
  4. START Starts the program
  5. REPEAT Tells the computer to repeat a specified part of the program in a loop.

Comments are marked with a // on the left margin. Assume the street names are Broadway and Main.

   START

//Let Broadway traffic go
OFF(Broadway, Red)
ON(Broadway, Green)
WAIT(60 seconds)

//Stop Broadway traffic
OFF(Broadway, Green)
ON(Broadway, Yellow)
WAIT(3 seconds)
OFF(Broadway, Yellow)
ON(Broadway, Red)

//Let Main traffic go
OFF(Main, Red)
ON(Main, Green)
WAIT(60 seconds)

//Stop Main traffic
OFF(Main, Green)
ON(Main, Yellow)
WAIT(3 seconds)
OFF(Main, Yellow)
ON(Main, Red)

//Tell computer to continuously repeat the program.
REPEAT ALL

With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again on both streets.

However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:

   START

IF Switch == OFF then: //Normal traffic signal operation
{
//Let Broadway traffic go
OFF(Broadway, Red)
ON(Broadway, Green)
WAIT(60 seconds)

//Stop Broadway traffic
OFF(Broadway, Green)
ON(Broadway, Yellow)
WAIT(3 seconds)
OFF(Broadway, Yellow)
ON(Broadway, Red)

//Let Main traffic go
OFF(Main, Red)
ON(Main, Green)
WAIT(60 seconds)

//Stop Main traffic
OFF(Main, Green)
ON(Main, Yellow)
WAIT(3 seconds)
OFF(Main, Yellow)
ON(Main, Red)

//Tell the computer to repeat this section continuously.
REPEAT THIS SECTION
}

IF Switch == ON THEN: //Maintenance Mode
{
//Turn the red lights on and wait 1 second.
ON(Broadway, Red)
ON(Main, Red)
WAIT(1 second)

//Turn the red lights off and wait 1 second.
OFF(Broadway, Red)
OFF(Main, Red)
WAIT(1 second)

//Tell the computer to repeat the statements in this section.
REPEAT THIS SECTION
}

In this manner, the traffic signal will run a flash-red program when the switch is on, and will run the normal program when the switch is off. Both of these examples show the basic layout of a computer program in the simple, familiar context of a traffic signal. Any experienced programmer can spot many software bugs in the program, for instance, the failure to ensure that the green light is off when the switch is set to flash red. However, removing all possible bugs would make this program much longer and more complicated, and would be confusing to nontechnical readers: the aim of this example is simply to demonstrate how computer instructions are laid out.

How computers work

A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

Control unit

The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer.[17] Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[18]

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.

The control system's function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

  1. Read the code for the next instruction from the cell indicated by the program counter.
  2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
  3. Increment the program counter so it points to the next instruction.
  4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
  5. Provide the necessary data to an ALU or register.
  6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
  7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
  8. Jump back to step (1).
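The cycle above can be sketched as a loop in Python. The miniature instruction set (SET, DEC, JNZ, HALT) is invented for illustration; the JNZ instruction also shows how overwriting the program counter produces a jump, and hence a loop.

```python
def run(program):
    registers = {"A": 0}
    pc = 0      # program counter: which instruction to fetch next
    steps = 0   # how many instructions have been executed
    while True:
        op, arg = program[pc]       # fetch the instruction at pc
        pc += 1                     # increment the program counter
        if op == "SET":             # decode and execute
            registers["A"] = arg
        elif op == "DEC":
            registers["A"] -= 1
        elif op == "JNZ":           # jump if register A is not zero
            if registers["A"] != 0:
                pc = arg            # overwrite the program counter
        elif op == "HALT":
            return steps
        steps += 1

# Count down from 3 to 0; the JNZ back to instruction 1 forms a loop.
program = [("SET", 3), ("DEC", None), ("JNZ", 1), ("HALT", None)]
print(run(program))  # 7 instructions executed before HALT
```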

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program—and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.

Arithmetic/logic unit (ALU)

The ALU is capable of performing two classes of operations: arithmetic and logic.[19]

The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers—albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
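The claim that simple operations suffice can be made concrete. The sketch below multiplies two numbers using only the addition and comparison an elementary ALU provides; it is slower than a hardware multiply, which is exactly the trade-off described above.

```python
def multiply_by_addition(a, b):
    # Compute a * b for non-negative b using only addition and comparison.
    result = 0
    count = 0
    while count < b:   # comparison: the ALU's "is count less than b?"
        result += a    # addition: the only arithmetic operation used
        count += 1
    return result

print(multiply_by_addition(6, 7))  # 42
```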

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
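A small sketch of how comparison results and Boolean operations combine into a compound condition (the function names are illustrative, not from any particular API):

```python
def in_range(x, low, high):
    # AND combines two comparison results into one truth value.
    return (x >= low) and (x <= high)

def out_of_range(x, low, high):
    # NOT inverts a truth value.
    return not in_range(x, low, high)

print(in_range(64, 0, 65))      # True
print(out_of_range(66, 0, 65))  # True
```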

Superscalar computers may contain multiple ALUs so that they can process several instructions at the same time.[20] Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.

Memory

Magnetic core memory was popular main memory for computers through the 1960s until it was completely replaced by semiconductor memory.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
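The memory model above can be sketched as a Python list of numbered cells; the cell addresses are the ones used in the text, and the second value is chosen arbitrarily for the example.

```python
memory = [0] * 4096  # cells addressed 0..4095, each holding one number

# "Put the number 123 into the cell numbered 1357"
memory[1357] = 123

# "Add the number that is in cell 1357 to the number that is in
#  cell 2468 and put the answer into cell 1595"
memory[2468] = 877  # arbitrary second operand for the example
memory[1595] = memory[1357] + memory[2468]

print(memory[1595])  # 1000
```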

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
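The two interpretations of a byte's 256 patterns can be sketched directly:

```python
def to_signed(byte):
    # Reinterpret an unsigned byte (0..255) as a two's-complement
    # value in the range -128..127.
    return byte - 256 if byte >= 128 else byte

print(to_signed(0))    # 0
print(to_signed(255))  # -1
print(to_signed(128))  # -128

# Four consecutive bytes can represent unsigned values up to 256^4 - 1.
print(256 ** 4 - 1)    # 4294967295
```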

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM so its use is restricted to applications where high speeds are not required.[21]

In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

Input/output (I/O)

Hard disks are common I/O devices used with computers.

I/O is the means by which a computer exchanges information with the outside world.[22] Devices that provide input or output to the computer are called peripherals.[23] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics[citation needed]. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e., having the computer switch rapidly between running each program in turn.[24]

One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.[25]
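The time-sharing idea can be sketched with Python generators standing in for programs: each `yield` marks the point where an interrupt would suspend the program, and a round-robin scheduler gives each one a slice in turn. This illustrates only the scheduling idea; a real operating system preempts programs via hardware timer interrupts rather than cooperative yields.

```python
def program(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"   # the point where an interrupt suspends us

def time_share(programs):
    log = []
    while programs:
        prog = programs.pop(0)      # pick the next program, round-robin
        try:
            log.append(next(prog))  # run one time slice
            programs.append(prog)   # remember where it was; resume later
        except StopIteration:
            pass                    # program finished; drop it
    return log

log = time_share([program("A", 2), program("B", 2)])
print(log)  # ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```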

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

It might seem that multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run at the same time without unacceptable speed loss.

Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.

Some computers may divide their work between two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[26] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Networking and the Internet

Visualization of a portion of the routes on the Internet.

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.[27]

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET.[28] The technologies that made the Arpanet possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Further topics

Hardware

The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

History of computing hardware
First Generation (Mechanical/Electromechanical) Calculators Antikythera mechanism, Difference Engine, Norden bombsight
Programmable Devices Jacquard loom, Analytical Engine, Harvard Mark I, Z3
Second Generation (Vacuum Tubes) Calculators Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
Programmable Devices Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark 1, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22
Third Generation (Discrete transistors and SSI, MSI, LSI Integrated circuits) Mainframes IBM 7090, IBM 7080, System/360, BUNCH
Minicomputer PDP-8, PDP-11, System/32, System/36
Fourth Generation (VLSI integrated circuits) Minicomputer VAX, IBM System i
4-bit microcomputer Intel 4004, Intel 4040
8-bit microcomputer Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
16-bit microcomputer Intel 8088, Zilog Z8000, WDC 65816/65802
32-bit microcomputer Intel 80386, Pentium, Motorola 68000, ARM architecture
64-bit microcomputer[29] Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64
Embedded computer Intel 8048, Intel 8051
Personal computer Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet computer, Wearable computer
Theoretical/experimental Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics based computer
Other Hardware Topics
Peripheral device (Input/output) Input Mouse, Keyboard, Joystick, Image scanner
Output Monitor, Printer
Both Floppy disk drive, Hard disk, Optical disc drive, Teleprinter
Computer busses Short range RS-232, SCSI, PCI, USB
Long range (Computer networking) Ethernet, ATM, FDDI

Software

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called "firmware" to indicate that it falls into an uncertain area somewhere between hardware and software.

Computer software
Operating system Unix and BSD UNIX System V, AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
GNU/Linux List of Linux distributions, Comparison of Linux distributions
Microsoft Windows Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP, Windows Vista, Windows CE
DOS 86-DOS (QDOS), PC-DOS, MS-DOS, FreeDOS
Mac OS Mac OS classic, Mac OS X
Embedded and real-time List of embedded operating systems
Experimental Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs
Library Multimedia DirectX, OpenGL, OpenAL
Programming library C standard library, Standard template library
Data Protocol TCP/IP, Kermit, FTP, HTTP, SMTP
File format HTML, XML, JPEG, MPEG, PNG
User interface Graphical user interface (WIMP) Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM
Text-based user interface Command-line interface, Text user interface
Application Office suite Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
Internet Access Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
Design and manufacturing Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
Graphics Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
Audio Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
Software Engineering Compiler, Assembler, Interpreter, Debugger, Text Editor, Integrated development environment, Performance analysis, Revision control, Software configuration management
Educational Edutainment, Educational game, Serious game, Flight simulator
Games Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
Misc Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

Programming languages

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine language by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of different programming languages—some intended to be general purpose, others useful only for highly specialized applications.
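The distinction between the two translation strategies can be illustrated with an interpreter for a toy language: arithmetic expressions in reverse Polish notation. This sketch evaluates the source directly, token by token, which is what "translated directly at run time by an interpreter" means; a compiler would instead emit equivalent machine code before any of it runs.

```python
def interpret(source):
    # Evaluate a reverse Polish expression such as "3 4 + 2 *".
    stack = []
    for token in source.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(int(token))  # a number: push it
    return stack.pop()

print(interpret("3 4 + 2 *"))  # (3 + 4) * 2 = 14
```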

Programming Languages
Lists of programming languages Timeline of programming languages, Categorical list of programming languages, Generational list of programming languages, Alphabetical list of programming languages, Non-English-based programming languages
Commonly used Assembly languages ARM, MIPS, x86
Commonly used High level languages Ada, BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal, Object Pascal
Commonly used Scripting languages Bourne script, JavaScript, Python, Ruby, PHP, Perl

Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers involving computers. Following the theme of hardware, software and firmware, the brains of people who work in the industry are sometimes known irreverently as wetware or "meatware".

Computer-related professions
Hardware-related Electrical engineering, Electronics engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoscale engineering
Software-related Computer science, Human-computer interaction, Information technology, Software engineering, Scientific computing, Web design, Desktop publishing

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.

Organizations
Standards groups ANSI, IEC, IEEE, IETF, ISO, W3C
Professional Societies ACM, ACM Special Interest Groups, IET, IFIP
Free/Open source software groups Free Software Foundation, Mozilla Foundation, Apache Software Foundation

Notes

  1. ^ In 1946, ENIAC consumed an estimated 174 kW. By comparison, a typical personal computer may use around 400 W; over four hundred times less. (Kempf 1961)
  2. ^ Early computers such as Colossus and ENIAC were able to process between 5 and 100 operations per second. A modern "commodity" microprocessor (as of 2007) can process billions of operations per second, and many of these operations are more complicated and useful than early computer operations.
  3. ^ "Heron of Alexandria". http://www.mlahanas.de/Greeks/HeronAlexandria2.htm. Retrieved on 2008-01-15.
  4. ^ a b Ancient Discoveries, Episode 11: Ancient Robots, History Channel, http://www.youtube.com/watch?v=rxjbaQl0ad8, retrieved on 2008-09-06
  5. ^ Howard R. Turner (1997), Science in Medieval Islam: An Illustrated Introduction, p. 184, University of Texas Press, ISBN 0292781490
  6. ^ Donald Routledge Hill, "Mechanical Engineering in the Medieval Near East", Scientific American, May 1991, pp. 64-9 (cf. Donald Routledge Hill, Mechanical Engineering)
  7. ^ The Analytical Engine should not be confused with Babbage's difference engine which was a non-programmable mechanical calculator.
  8. ^ "Inventor Profile: George R. Stibitz". National Inventors Hall of Fame Foundation, Inc.. http://www.invent.org/hall_of_fame/140.html.
  9. ^ B. Jack Copeland, ed., Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford University Press, 2006
  10. ^ Lavington 1998, p. 37
  11. ^ This program was written similarly to those for the PDP-11 minicomputer and shows some typical things a computer can do. All the text after the semicolons are comments for the benefit of human readers. These have no significance to the computer and are ignored. (Digital Equipment Corporation 1972)
  12. ^ Attempts are often made to create programs that can overcome this fundamental limitation of computers. Software that mimics learning and adaptation is part of artificial intelligence.
  13. ^ It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail or may itself have a fundamental problem that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design and resulted in a partial recall of the affected devices.
  14. ^ Even some later computers were commonly programmed directly in machine code. Some minicomputers like the DEC PDP-8 could be programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern computers boot entirely automatically by reading a boot program from some non-volatile memory.
  15. ^ However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs designed for earlier microprocessors like the Intel Pentiums and Intel 80486. This contrasts with very early commercial computers, which were often one-of-a-kind and totally incompatible with other computers.
  16. ^ High level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly by another program called an interpreter.
  17. ^ The control unit's rule in interpreting instructions has varied somewhat in the past. While the control unit is solely responsible for instruction interpretation in most modern computers, this is not always the case. Many computers include some instructions that may only be partially interpreted by the control system and partially interpreted by another device. This is especially the case with specialized computing hardware that may be partially self-contained. For example, EDVAC, the first modern stored program computer to be designed, used a central control unit that only interpreted four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.
  18. ^ Instructions often occupy more than one memory address, so the program counter usually increases by the number of memory locations required to store one instruction.
  19. ^ David J. Eck (2000). The Most Complex Machine: A Survey of Computers and Computing. A K Peters, Ltd.. p. 54. ISBN 9781568811284.
  20. ^ Erricos John Kontoghiorghes (2006). Handbook of Parallel Computing and Statistics. CRC Press. p. 45. ISBN 9780824740672.
  21. ^ Flash memory also may only be rewritten a limited number of times before wearing out, making it less useful for heavy random access usage. (Verma 1988)
  22. ^ Donald Eadie (1968). Introduction to the Basic Computer. Prentice-Hall. p. 12.
  23. ^ Arpad Barna; Dan I. Porat (1976). Introduction to Microcomputers and the Microprocessors. Wiley. p. 85. ISBN 9780471050513.
  24. ^ Jerry Peek; Grace Todino, John Strang (2002). Learning the UNIX Operating System: A Concise Guide for the New User. O'Reilly. p. 130. ISBN 9780596002619.
  25. ^ Gillian M. Davis (2002). Noise Reduction in Speech Applications. CRC Press. p. 111. ISBN 9780849309496.
  26. ^ However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware; usually individual computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than customized designs. While custom architectures are still used for most of the most powerful supercomputers, there has been a proliferation of cluster computers in recent years. (TOP500 2006)
  27. ^ Agatha C. Hughes (2000). Systems, Experts, and Computers. MIT Press. p. 161. ISBN 9780262082853. "The experience of SAGE helped make possible the first truly large-scale commercial real-time network: the SABRE computerized airline reservations system..."
  28. ^ "A Brief History of the Internet". Internet Society. http://www.isoc.org/internet/history/brief.shtml. Retrieved on 2008-09-20.
  29. ^ Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table, except for Alpha, existed in 32-bit forms before their 64-bit incarnations were introduced.