Computers
A computer is a device or machine for making
calculations or controlling operations that are expressible in numerical or
logical terms. Computers are made from components that perform simple
well-defined functions. The complex interactions of these components endow
computers with the ability to process information. If correctly configured
(usually by programming) a computer can be made to represent some aspect of a
problem or part of a system. If a computer configured in this way is given
appropriate input data, then it can automatically solve the problem or predict
the behavior of the system. The discipline which studies the theory, design, and
application of computers is called computer science.
Contents
1 General principles
2 Etymology
3 The exponential progress of computer development
4 Classification of computers
4.1 Classification by intended use
4.2 Classification by implementation technology
4.3 Classification by design features
4.3.1 Digital versus analog
4.3.2 Binary versus decimal
4.3.3 Programmability
4.3.4 Storage
4.4 Classification by capability
4.4.1 General-purpose computers
4.4.1.1 Stored-program computers
4.4.2 Special-purpose computers
4.4.3 Single-purpose computers
4.5 Classification by type of operation
5 Computer applications
5.1 The Internet
6 How computers work
6.1 Memory
6.2 Processing (Processor)
6.2.1 Instructions
6.3 Input and output
6.4 Architecture
6.5 Programs
6.5.1 Operating system
General principles
Computers can work through the movement of mechanical parts, electrons, photons,
quantum particles or any other well-understood physical phenomenon. Although
computers have been built out of many different technologies, nearly all popular
types of computers have electronic components.
Computers may directly model the problem being solved, in the sense that the
problem being solved is mapped as closely as possible onto the physical
phenomena being exploited. For example, electron flows might be used to model
the flow of water in a dam. Such analog computers were common into the 1960s but are now rare.
In most computers today, the problem is first translated into mathematical terms by rendering all relevant information into the binary (base-two) numeral system of ones and zeros. Next, all operations on that information are reduced to simple Boolean algebra.
Electronic circuits are then used to represent Boolean operations. Since almost
all of mathematics can be reduced to Boolean operations, a sufficiently fast
electronic computer is capable of attacking the majority of mathematical
problems (and the majority of information processing problems that can be
translated into mathematical ones). This basic idea, which made modern digital
computers possible, was formally identified and explored by Claude E. Shannon.
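Shannon's reduction can be sketched in a few lines: a one-bit full adder built only from the Boolean operations AND, OR, and XOR, chained into a multi-bit binary adder. The function names here are illustrative, not drawn from any particular source.

```python
def full_adder(a, b, carry_in):
    """Add three one-bit values using only Boolean operations.

    Returns (sum_bit, carry_out). This is the building block from
    which multi-bit binary adders - and ultimately all of computer
    arithmetic - can be composed.
    """
    sum_bit = a ^ b ^ carry_in                  # XOR gives the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry out when two inputs are 1
    return sum_bit, carry_out

def add_bits(x, y, width=8):
    """Ripple-carry addition of two integers, one bit at a time."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

Electronic circuits implement the same three operations as gates; everything above the gate level is composition.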
Computers cannot solve all mathematical problems. Alan Turing identified which
problems could and could not be solved by computers, and in doing so founded
theoretical computer science.
When the computer finishes a calculation, the result must be presented to the user through output devices such as light bulbs, LEDs, monitors, and printers.
Novice users, especially children, often have difficulty understanding the
important idea that the computer is only a machine, and cannot "think" or
"understand" the words it displays. The computer is simply performing a
mechanical lookup on preprogrammed tables of lines and colors, which are then
translated into arbitrary patterns of light by the output device. It is the
human brain which recognizes that those patterns form letters and numbers, and
attaches meaning to them. All that existing computers do is manipulate electrons
that are logically equivalent to ones and zeroes; there are no known ways to
successfully emulate human comprehension or self-awareness. See artificial
intelligence.
Etymology
The word was originally used to describe a person who performed calculations, and this usage is still valid (although it is becoming quite rare in the United States and the UK). The OED2 lists the year 1897 as the first year the word was used
to refer to a mechanical calculating device. By 1946 several qualifiers were
introduced by the OED2 to differentiate between the different types of machine.
These qualifiers included analogue, digital and electronic. However, from the
context of the citation, it is obvious these terms were in use prior to 1946.
The exponential progress of computer development
Computing devices have doubled in capacity (instructions processed per second
per $1000) every 18 to 24 months since 1900. Gordon E. Moore, co-founder of
Intel, first described this property of computer development in 1965. His
observation has become known as Moore's Law, although it is of course not actually a law but a significant trend. Hand-in-hand with this increase
in capacity per unit cost has been an equally dramatic process of
miniaturization. The first electronic computers, such as Colossus in 1943 and
the ENIAC announced in 1946, were huge devices that weighed tons, occupied
entire rooms, and required many operators to function successfully. They were so
expensive that only governments and large research organizations could afford
them and were considered so exotic that only a handful would ever be required to
satisfy global demand. By contrast, modern computers are orders of magnitude
more powerful, less expensive, smaller and have become ubiquitous in many areas.
The exponential progress of computer development makes classification of
computers problematic since modern computers are many orders of magnitude more
powerful than earlier devices.
Classification of computers
The following sections describe different approaches to classifying computers.
Classification by intended use
Supercomputer
Minisupercomputer
Mainframe computer
Enterprise application server
Minicomputer
Workstation
Personal computer (PC)
Desktop computer
Laptop computer
Tablet computer
Personal Digital Assistant (PDA)
Personal Video Recorder (PVR), e.g. TiVo
Wearable computer
The colloquial nature of this classification approach means it is ambiguous. It
is usual for only current, commonly available devices to be included. The rapid
nature of computer development means new uses for computers are frequently found
and current definitions quickly become outdated. Many classes of computer that
are no longer used, such as differential analyzers, are not commonly included in
such lists. Other classification schemes are required to unambiguously define
the word "computer".
Classification by implementation technology
A less ambiguous approach for classifying computing machines is by their
implementation technology. The earliest computers were purely mechanical. In the
1930s electro-mechanical components (relays) were introduced from the
telecommunications industry, and in the 1940s the first purely electronic
computers were constructed from thermionic valves (tubes). In the 1950s and
1960s valves were gradually replaced with transistors and in the late 1960s and
early 1970s semiconductor integrated circuits (silicon chips) were adopted and
have been the mainstay of computing technology ever since.
This description of implementation technologies is not exhaustive; it only
covers the mainstream of development. Historically many exotic technologies have
been explored and abandoned. For example, economic models have been constructed
using water flowing through multiple-constricted channels, and between 1903 and
1909 Percy E. Ludgate developed a design for a programmable analytical machine
based on weaving technologies in which variables were carried in shuttles.
Efforts are currently underway to develop optical computers that use light
rather than electricity. The possibility that DNA can be used for computing is
also being explored. One radical new area of research that could lead to
computers with dramatic new capabilities is the field of quantum computing, but
this is presently in its early stages. With the exception of quantum computers,
the implementation technology of a computer is not as important for
classification purposes as the features that the machine implements.
Classification by design features
Modern computers combine fundamental design features that have been developed by
various contributors over many years. These features are often independent of
implementation technology. Modern computers derive their overall capabilities
from the way these features interact. Some of the most important design features
are listed below.
Digital versus analog
There are two main types of computers: digital and analog. Other approaches, such as pulse computing and quantum computing, may be possible but are either used for special purposes or are still experimental.
Digital computers use and store information that has been encoded into binary strings. Encoding data this way avoids the noise and drift problems of analog computers, which use and store information encoded as a continuous physical quantity such as a position or voltage.
Since the 1940s digital computers have become by far the most common for reasons
of convenience. The signal-to-noise ratio of binary logic coupled with
modularity and many advances in manufacturing have made the so-called digital
computer synonymous with computer in the vernacular.
Digital and analog computers are merely recent implementations of the computing concept; the fundamental idea of a computer is independent of any particular implementation.
Binary versus decimal
A significant design development in digital computing was the introduction of
binary as the internal numeral system. This removed the need for the complex
carry mechanisms required for computers based on other numeral systems, such as
the decimal system. The adoption of binary simplified the design of arithmetic and logic circuits, and it suited electronics well because '0' and '1' are natural symbols for the 'off' and 'on' states of most electronic components.
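The simplicity of binary carrying can be sketched in a few lines: addition of two non-negative integers needs nothing beyond the two-symbol operations XOR (sum without carry) and AND (carry), with no per-digit carry table of the kind decimal hardware requires.

```python
def binary_add(a, b):
    """Add two non-negative integers using only bitwise operations.

    XOR adds each bit position without carrying; AND, shifted left
    one place, computes the carries. Repeating until no carries
    remain is the entire addition algorithm.
    """
    while b:
        a, b = a ^ b, (a & b) << 1
    return a
```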
Programmability
The ability to program a computer — provide it with a set of instructions for
execution — without physically reconfiguring the machine is a fundamental design
feature of most computers. This feature was significantly extended when machines
were developed that could dynamically control the flow of execution of the
program. This allowed computers to control the order in which the program of
instructions was executed based on data calculated by the program as it
executed. This major design advance was dramatically simplified by the
introduction of binary arithmetic which can be used to represent various logic
operations.
Storage
During the course of a calculation it is often necessary to store intermediate
values for use in later calculations. The performance of many computers is
largely dictated by the speed with which they can read and write values to and
from this memory, and the overall capacity of the memory. Originally memory was
used only for intermediate values but in the 1940s it was suggested that the
program itself could be stored in this way. This advance led to the development
of the first stored-program computers of the type used today.
Classification by capability
Perhaps the best way to classify the various types of computing device is by
their intrinsic capabilities rather than their usage, implementation technology
or design features. Computers can be subdivided into three main types based on
capability: single-purpose devices that can compute only one function (e.g. the Antikythera mechanism, c. 87 BC, and Lord Kelvin's tide predictor, 1876); special-purpose devices that can compute a limited range of functions (e.g. Charles Babbage's Difference Engine No. 1, 1832, and Vannevar Bush's differential analyser, 1932); and general-purpose devices of the type used today. Historically
the word computer has been used to describe all these types of machine but
modern colloquial usage usually restricts the term to general-purpose machines.
General-purpose computers
By definition a general-purpose computer can solve any problem that can be
expressed as a program and executed within the practical limits set by the storage capacity of the computer, the size of the program, the speed of program execution, and the reliability of the machine. In 1936 Alan Turing proved that,
given the right program, any general-purpose computer could emulate the behavior
of any other computer. This mathematical proof was purely theoretical as no
general-purpose computers existed at the time. The implications of this proof
are profound; for example, any existing general-purpose computer is
theoretically able to emulate, albeit slowly, any general-purpose computer that
may be built in the future.
Computers with general-purpose capabilities are called Turing-complete, and this status is often used as the threshold capability that defines modern computers. However, this definition is problematic: several computing devices with simplistic designs have been shown to be Turing-complete. The Z3, developed by Konrad Zuse in 1941, is so far the earliest working computer that has been shown to be Turing-complete (the proof was developed in 1998). While the Z3 and possibly other early devices may be theoretically Turing-complete, they are impractical as general-purpose computers. They lie in what is humorously known
as the Turing Tar-Pit - "a place where anything is possible but nothing of
interest is practical" (See The Jargon File). Modern computers are more than
theoretically general-purpose; they are also practical general-purpose tools.
The modern, digital, electronic, general-purpose computer was developed, by many
contributors, over an extended period from the mid-1930s to the late 1940s. During this period many experimental machines were built that were not Turing-complete (the ABC, the Harvard Mk I, etc.; see the History of computing hardware).
All these machines have been claimed, at one time or another, as the first
computer, but they all had limited utility as general-purpose problem-solving
devices and their designs have been discarded. ENIAC was a special case in that
it was indeed Turing-complete. However, programming it was done by, essentially,
rewiring the machine. While immensely faster and more capable than earlier
designs, it was very difficult to use.
Stored-program computers
During the late 1940s the first design for a Stored-Program Computer was
developed and documented (see The first draft) at the Moore School of Electrical
Engineering at The University of Pennsylvania. The approach described by this
document has become known as the von Neumann architecture, after its only named
author John von Neumann although others at the Moore School essentially invented
the design. The von Neumann architecture solved problems inherent in the design
of the ENIAC, which was then under construction, by storing the machines program
in its own memory. Von Neumann made the design available to other researchers
shortly after the ENIAC was announced in 1946. Plans were developed to implement
the design at the Moore School in a machine called the EDVAC. The EDVAC was not
operational until 1953 due to technical difficulties in implementing a reliable
memory. Other research institutes, who had obtained copies of the design, solved
the considerable technical problems of implementing a working memory before the
Moore School team and implemented their own stored-program computers. In order of first successful operation, the first five stored-program computers to implement the main features of the von Neumann architecture were:
Manchester Mk I prototype ("Baby"), University of Manchester, UK, June 21, 1948
EDSAC, Cambridge University, UK, May 6, 1949
BINAC, United States, April or August 1949
CSIR Mk 1, Australia, November 1949
SEAC, US, May 9, 1950
The Stored Program design defined by the von Neumann architecture finally
allowed computers to readily exploit their general-purpose potential. By storing
the computer's program in its own memory it became possible to rapidly "jump"
from one instruction to another based on the result of evaluating a condition
defined within the program. This condition usually evaluated data values
calculated by the program and allowed programs to become highly dynamic. The
design also supported the ability to automatically re-write the program as it
executed - a powerful feature that must be used carefully. These features are
fundamental to the way modern computers work.
To be precise, most modern computers are binary, electronic, stored-program, general-purpose computing devices.
Special-purpose computers
The special-purpose computers that were popular in the 1930s and early 1940s
have not been completely replaced by General-Purpose computers. As the cost and
size of computers has fallen and their capabilities have increased it has become
cost-effective to use them for special-purpose applications. Many domestic and industrial devices, including mobile telephones, video recorders, and automotive ignition systems, now contain special-purpose computers. In some cases these
computers are Turing-complete (Video Games, PDAs) but many are programmed once
in the factory and only seldom, if ever, reprogrammed. The program that these
devices execute is often contained in a Read Only Memory (ROM chip) which would
need to be replaced to change the operation of the machine. Computers embedded
inside other devices are commonly referred to as microcontrollers or embedded
computers.
Single-purpose computers
Single-purpose computers were the earliest computing devices. Given some inputs
they could calculate the result of the single function that was implemented by
their mechanism. General-Purpose computers have almost completely replaced
single-purpose computers and in doing so have created a completely new field of
human endeavor - software development. General-purpose computers must be programmed with a set of instructions specific to the task they are required to perform, and these instructions are collectively known as computer software. The
design of single-purpose computing devices and many special-purpose computing
devices is now a conceptual exercise that consists solely of designing software.
Classification by type of operation
Computers may be classified according to the way they are operated by the users.
Two main types exist: batch processing and interactive processing.
Computer applications
The first electronic digital computers, with their large size and cost, mainly
performed scientific calculations, often to support military objectives. The
ENIAC was originally designed to calculate ballistics firing tables for
artillery, but it was also used to calculate neutron cross-sectional densities
to help in the design of the hydrogen bomb. This calculation, performed in
December, 1945 through January, 1946 and involving over a million punch cards of
data, showed the design then under consideration would fail. (Many of the most
powerful supercomputers available today are also used for nuclear weapons
simulations.) The CSIR Mk I, the first Australian stored-program computer,
evaluated rainfall patterns for the catchment area of the Snowy Mountains
Scheme, a large hydroelectric generation project. Others were used in
cryptanalysis, for example the world's first programmable (though not
general-purpose) digital electronic computer, Colossus, built in 1943 during
World War II. Despite this early focus on scientific applications, computers were quickly applied in other areas as well.
From the beginning, stored program computers were applied to business problems.
The LEO, a stored-program computer built by J. Lyons and Co. in the United Kingdom, was operational and being used for inventory management and other purposes three years before IBM built its first commercial stored-program computer. Continual reductions in the cost and size of computers saw them
adopted by ever-smaller organizations. And with the invention of the
microprocessor in the 1970s, it became possible to produce inexpensive
computers. In the 1980s, personal computers became popular for many tasks,
including book-keeping, writing and printing documents, calculating forecasts
and other repetitive mathematical tasks involving spreadsheets.
The Internet
In the 1970s, computer engineers at research institutions throughout the US
began to link their computers together using telecommunications technology. This
effort was funded by ARPA, and the computer network that it produced was called
the ARPANET. The technologies that made the ARPANET possible spread and evolved.
In time, the network spread beyond academic institutions and became known as the
Internet. In the 1990s, the development of World Wide Web technologies enabled non-technical people to use the Internet, and it grew rapidly to become a global communications medium.
How computers work
While the technologies used in computers have changed dramatically since the first electronic general-purpose computers of the 1940s (see History of computing hardware for more details), most still use the von Neumann architecture.
The functioning of such a computer is in principle quite straightforward.
Typically, on each clock cycle, the computer fetches instructions and data from
its memory. The instructions are executed, the results are stored, and the next
instruction is fetched. This procedure repeats until a halt instruction is
encountered.
The von Neumann architecture describes a computer with four main sections: the
Arithmetic and Logic Unit (ALU), the control circuitry, the memory, and the
input and output devices (collectively termed I/O). These parts are
interconnected by a bundle of wires (a "bus") and are usually driven by a timer
or clock (although other events could drive the control circuitry).
Memory
The memory is a sequence of numbered cells, each containing a small piece of information. The information may be an instruction telling the computer what to do, or data that the computer needs to perform an instruction. Any cell may contain either, and what is data at one time may be instructions later.
In general, the contents of a memory cell can be changed at any time - it is a
scratchpad rather than a stone tablet.
The size of each cell, and the number of cells, varies greatly from computer to
computer, and the technologies used to implement memory have varied greatly -
from electromechanical relays, to mercury-filled tubes (and later springs) in
which acoustic pulses were formed, to matrices of permanent magnets, to
individual transistors, to integrated circuits with millions of capacitors on a
single chip.
Processing (Processor)
In von Neumann's original design he described an Arithmetic and Logic Unit (ALU) and a control unit. In modern computers these are located within the same integrated circuit, typically referred to as the CPU.
The arithmetic and logical unit, or ALU, is the device that performs elementary
operations such as arithmetic operations (addition, subtraction, and so on),
logical operations (AND, OR, NOT), and comparison operations (for example,
comparing the contents of two bytes for equality). This unit is where the "real
work" is done.
The control unit keeps track of which bytes in memory contain the current instruction that the computer is performing. It decodes the instruction, tells the ALU what operation to perform, retrieves from memory the information the ALU needs, and transfers the result back to the appropriate memory location. Once that occurs, the control unit moves on to the next instruction, typically the one in the next memory address, unless the instruction is a jump informing the computer that the next instruction is located elsewhere.
Instructions
The instructions interpreted by the control unit, and executed by the ALU, are
not nearly as rich as a human language. A computer only has a limited number of
well-defined, simple instructions, but they are not ambiguous. Typical sorts of
instructions supported by most computers are "copy the contents of memory cell 5
and place the copy in cell 10", "add the contents of cell 7 to the contents of
cell 13 and place the result in cell 20", "if the contents of cell 999 are 0,
the next instruction is at cell 30".
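A toy simulator, using an invented encoding rather than any real machine language, shows how a handful of such instructions executes over numbered memory cells:

```python
def run(memory, program):
    """Execute a tiny invented instruction set over numbered memory cells.

    Each instruction is a tuple. The operations mirror the examples
    above: COPY one cell to another, ADD two cells into a third, JZ
    (jump if a cell is zero), and HALT to stop the machine.
    """
    pc = 0  # program counter: index of the next instruction
    while True:
        op, *args = program[pc]
        if op == "COPY":            # copy cell src into cell dst
            src, dst = args
            memory[dst] = memory[src]
        elif op == "ADD":           # memory[dst] = memory[a] + memory[b]
            a, b, dst = args
            memory[dst] = memory[a] + memory[b]
        elif op == "JZ":            # if the cell is 0, jump to target
            cell, target = args
            if memory[cell] == 0:
                pc = target
                continue
        elif op == "HALT":
            return memory
        pc += 1

mem = [0] * 32
mem[7], mem[13] = 4, 5
program = [
    ("ADD", 7, 13, 20),   # "add cell 7 to cell 13, result in cell 20"
    ("COPY", 20, 10),     # "copy cell 20 into cell 10"
    ("HALT",),
]
run(mem, program)         # mem[20] and mem[10] now both hold 9
```

Note how the program itself lives alongside the data it manipulates, which is the essence of the stored-program design described earlier.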
Instructions are represented within the computer as binary code - a base two
system of counting. For example, the code for one kind of "copy" operation in
the Intel line of microprocessors is 10110000. The particular instruction set
that a specific computer supports is known as that computer's machine language.
In practice, people do not normally write the instructions for computers
directly in machine language but rather use a "high level" programming language
which is then translated into the machine language automatically by special
computer programs (interpreters and compilers). Some programming languages map
very closely to the machine language, such as Assembly Language (low level
languages); at the other end, languages like Prolog are based on abstract
principles far removed from the details of the machine's actual operation (high
level languages).
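Python's standard `dis` module makes this translation visible: a one-line high-level expression compiles down to several machine-like instructions (the exact opcode names vary between Python versions).

```python
import dis

def add(a, b):
    return a + b

# Show the low-level instructions the high-level line compiles to,
# e.g. loads of the two arguments, a binary add, and a return.
dis.dis(add)
```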
Input and output
The I/O allows the computer to obtain information from the outside world, and
send the results of its work back out. I/O devices range from the familiar keyboards, mice, monitors, touchscreens, floppy disk drives, CD/DVD drives, and printers to more unusual devices such as scanners and webcams.
What all input devices have in common is that they encode (convert) information
of some type into data which can further be processed by the digital computer
system. Output devices on the other hand, decode the data into information which
can be understood by the computer user. In this sense, a digital computer system
is an example of a data processing system.
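For example, a keyboard encodes the character a user types into a number, and a display decodes numbers back into glyphs. Python's text codecs sketch the same idea in miniature:

```python
# An input device encodes information ("A") into data the computer
# can process: the number 65 under the ASCII/UTF-8 encoding.
data = "A".encode("utf-8")
print(list(data))                        # [65]

# An output device decodes data back into human-readable information.
print(bytes([72, 105]).decode("utf-8"))  # Hi
```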
Architecture
Contemporary computers put the ALU and control unit into a single integrated
circuit known as the Central Processing Unit or CPU. Typically, the computer's
memory is located on a few small integrated circuits near the CPU. The
overwhelming majority of the computer's mass is either ancillary systems (for
instance, to supply electrical power) or I/O devices.
Some larger computers differ from the above model in one major respect - they
have multiple CPUs and control units working simultaneously. Additionally, a few
computers, used mainly for research purposes and scientific computing, have
differed significantly from the above model, but they have found little commercial application because their programming models have not yet been standardized.
Programs
Computer programs are simply large lists of instructions for the computer to
execute, perhaps with tables of data. Many computer programs contain millions of
instructions, and many of those instructions are executed repeatedly. A typical
modern PC (in the year 2005) can execute around 3 billion instructions per
second. Computers do not gain their extraordinary capabilities through the ability to execute complex instructions. Rather, they execute millions of simple instructions arranged by people known as "programmers." Good programmers develop
sets of instructions to do common tasks (for instance, draw a dot on screen) and
then make those sets of instructions available to other programmers.
Nowadays, most computers appear to execute several programs at the same time.
This is usually referred to as multitasking. In reality, the CPU executes
instructions from one program, then after a short period of time, it switches to
a second program and executes some of its instructions. This small interval of
time is often referred to as a time slice. This creates the illusion of multiple
programs being executed simultaneously by sharing the CPU's time between the
programs. This is similar to how a movie is simply a rapid succession of still
frames. The operating system is the program that usually controls this time
sharing.
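Time slicing can be sketched with Python generators standing in for programs: a toy round-robin scheduler runs a few "instructions" of each program, then switches to the next, producing the interleaved execution described above.

```python
def program(name, steps):
    """A toy 'program': each yield represents one instruction executed."""
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(programs, slice_size=2):
    """Give each program a time slice of `slice_size` instructions in turn."""
    trace = []
    while programs:
        prog = programs.pop(0)
        for _ in range(slice_size):
            try:
                trace.append(next(prog))
            except StopIteration:
                break                   # this program has finished
        else:
            programs.append(prog)       # not finished: back of the queue
    return trace

trace = round_robin([program("A", 3), program("B", 3)])
# trace interleaves the two programs: A:0, A:1, B:0, B:1, A:2, B:2
```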
Operating system
When a computer is running it needs a program, whether or not there is useful
work to do. In a typical desktop computer, this program is the operating system
(OS). The operating system decides which programs are run, when, and what
resources (such as memory or input/output - I/O) the programs will get to use.
The operating system also provides a layer of abstraction over the hardware: it supplies services to other programs, such as code ("drivers") that allows programmers to write programs for a machine without needing to know the intimate details of every attached electronic device.
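A minimal sketch of this abstraction, with invented class and function names: application code writes through a driver interface without knowing which physical device sits behind it.

```python
class Driver:
    """The OS-defined interface every device driver must implement."""
    def write(self, data: bytes) -> None:
        raise NotImplementedError

class ConsoleDriver(Driver):
    """A hypothetical driver; a real one would talk to hardware here."""
    def __init__(self):
        self.output = b""
    def write(self, data: bytes) -> None:
        self.output += data   # stand-in for sending bytes to a device

def save_log(device: Driver, message: str) -> None:
    """Application code: writes through the interface, knowing nothing
    about the attached hardware."""
    device.write(message.encode())

console = ConsoleDriver()
save_log(console, "hello")    # works unchanged for any Driver subclass
```

Swapping in a printer or disk driver requires no change to `save_log`, which is exactly the portability the abstraction layer buys.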
Most operating systems that have hardware abstraction layers also provide a
standardized user interface. The most popular OS remains the Microsoft Windows
family of operating systems.
Most computers are very small, very inexpensive computers embedded in other
machinery. These embedded systems have programs, but often lack a recognizable
operating system.
Hardware
Hardware comprises all of the physical parts of a computer, as distinguished from the data it contains or operates on, and from the software that provides instructions for the hardware to accomplish tasks. The boundary between hardware and software is slightly blurry: firmware is software that is "built in" to the hardware. Firmware is usually the province of computer programmers and computer engineers, and not an issue that computer users need to concern themselves with.
Contents
1 typical computer
2 personal computer
typical computer
The vast majority of computers are hidden, or "embedded", in embedded systems
such as automobiles, microwave ovens, electrocardiograph machines, compact disk
players, and cell phones.
A small minority of computers (about 0.2% of all new computers produced in 2003)
are desktop or laptop personal computers[1].
personal computer
A typical PC (personal computer) consists of a desktop or tower case (chassis)
and the following parts:
System board/Motherboard which holds the CPU, Random Access Memory and other
parts, and has slots for expansion cards
RAM (Random Access Memory) - for program execution and short-term data storage,
so the computer does not have to access the slower hard drive every time it
needs something. More RAM can contribute to a faster PC
Buses:
PCI bus
PCI-E bus
ISA bus (outdated)
USB
AGP
power supply - a unit that holds a transformer, voltage regulator and fan
storage controllers, of IDE, SCSI or other type, that control hard disk, floppy
disk, CD-ROM and other drives; the controllers sit directly on the motherboard
(on-board) or on expansion cards
video display controller that produces the output for the computer display
computer bus controllers (parallel, serial, USB, Firewire) to connect the
computer to external peripheral devices such as printers or scanners
Some type of removable media writer:
CD - the most common type of removable media, cheap but fragile.
CD-ROM
CD-RW
CD-R
DVD
DVD-ROM
DVD-RW
DVD-R
DVD-RAM
DVD+RW
DVD+R
Floppy disk
Tape Drive - mainly for backup and long-term storage
Internal storage - keeps data inside the computer for later use.
Hard disk - for medium-term storage of data.
Disk array controller
Sound card - translates signals from the system board into analog voltage
levels, and has terminals to plug in speakers.
Networking - to connect the computer to the Internet and/or other computers
Modem - for dial-up connections
Network card - for DSL/Cable internet, and/or connecting to other computers.
Other peripherals
In addition, hardware can include external components of a computer system. The
following are either standard or very common.
Input
Keyboard
Pointing devices
Mouse
Trackball
Joystick
Gamepad
Image scanner
Webcam
Output
Printer
Speakers
Monitor
Networking
Modem
Network card
Software
Computer software (or simply software) refers to one or more computer programs held in the storage of a computer for some purpose. A piece of software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term software was first used in this sense by John W. Tukey in 1957. In computer science and software engineering, computer software is all information processed by a computer system: programs and data.
Software has historically been considered an interface between hardware and data; more specifically it has been considered to be an interface composed of a binary representation of electronics readable code or logic. The purpose of software is to cause a task, process, or computation to be performed. A task can include the retrieval, storage, or display of information.
As computational science becomes increasingly complex, the distinction between software and data becomes less precise. Data has generally been considered to be either the output of or input for software (n.b. that "data" is not the only possible output or input; for example, configuration information can also be considered input, though not necessarily considered to be data). The output of a particular piece of software may be the input for another piece of software. Therefore, software may be considered to be an interface between hardware, data, or software.
It is generally accepted that software interfaces with electronic devices, or electronics. The term electronics has recently been broadened to include devices with biological components or biological interfaces. Instructions processed by an electronic device which cause a muscle to contract, for example, may be considered software. The instruction from the electronic device to the muscle may also be considered software, because it is the output of electronics-readable code or logic.
Computer software is so called in contrast to computer hardware, which is the physical substrate which stores and executes (or "runs") the software.
Contents
1 System and application software
2 Users see three layers of software
3 Software in operation
4 Software creation
5 Software patents
System and application software
Computer science divides software into two major classes: system software and
application software.
System software helps run the computer hardware and computer system. It includes
operating systems, device drivers, programming tools, servers, windowing
systems, utilities and more.
Application software allows a user to accomplish one or more specific tasks.
Typical applications include office suites, business software, educational
software, databases and computer games. Most application software has a
graphical user interface (GUI).
Users see three layers of software
Users often see things differently than programmers. People who use modern
general purpose computers (as opposed to embedded systems) usually see three
layers of software performing a variety of tasks: platform, application, and
user software.
Platform software
Platform includes the basic input-output system (often described as firmware
rather than software), device drivers, an operating system, and typically a
graphical user interface which, in total, allow a user to interact with the
computer and its peripherals (associated equipment). Platform software often
comes bundled with the computer, and users may not realize that it exists or
that they have a choice to use different platform software.
Application software
Applications are what most people think of when they think of software. Typical
examples include office suites and video games. Application software is often
purchased separately from computer hardware. Sometimes applications are bundled
with the computer, but that does not change the fact that they run as
independent applications. Applications are almost always independent programs
from the operating system, though they are often tailored for specific
platforms. Most users think of compilers, databases, and other "system software"
as applications.
User-written software
User software tailors systems to meet the user's specific needs. User software
includes spreadsheet templates, word processor macros, scientific simulations,
and graphics and animation scripts. Even email filters are a kind of user
software. Users create this software themselves and often overlook how important
it is.
See also: Three-tier application, Software architecture.
Software in operation
Computer software has to be "loaded" into the computer's storage (such as
memory, or RAM).
Once the software is loaded, the computer is able to execute it.
Computers operate by executing the computer program. This involves passing
instructions from the application software, through the system software, to the
hardware which ultimately receives the instruction as machine code. Each
instruction causes the computer to carry out an operation -- moving data,
carrying out a computation, or altering the flow of instructions.
Kinds of software by operation: computer program as executable, source code or
script, configuration.
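The idea that each instruction carries out one small operation can be made concrete with Python's own bytecode: the interpreter executes a program as a sequence of simple instructions, which the standard `dis` module can display. A minimal sketch (the function name is purely illustrative):

```python
import dis

def add(a, b):
    return a + b

# dis.dis prints the interpreter-level instructions that implement the
# function: load each argument, perform the addition, return the result.
dis.dis(add)
```

Each line of the disassembly corresponds to one operation of the kind described above: moving data, carrying out a computation, or altering the flow of instructions.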
Software creation
Software is created with programming languages and related utilities, which may
come in several of the above forms: single programs like script interpreters;
packages containing a compiler, linker, and other tools; and large suites (often
called Integrated Development Environments) that include editors, debuggers, and
other tools for multiple languages.
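The compiler's role, turning human-readable source text into a form the machine can run, can be sketched in miniature with Python's built-in `compile` and `exec` functions (the source string and names here are purely illustrative):

```python
# compile() turns source text into a code object, the executable form;
# exec() then runs that code object in a given namespace.
source = "result = 6 * 7"
code_obj = compile(source, "<example>", "exec")

namespace = {}
exec(code_obj, namespace)
print(namespace["result"])  # prints 42
```

A full compiler toolchain does the same job on a larger scale, with a linker combining many such compiled units into one program.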
See also: Computer programming, Software engineering, Software architecture
Software patents
The issue of software patents is very controversial: while patents are intended
to protect the ideas of "inventors", they are widely believed to hinder software
development.