
Computer Software - Historical Background, Programming Languages, Influence of Software on Computer Markets, Conclusion


Computer hardware, consisting mainly of the central processing unit (CPU), random access memory (RAM), and various peripheral devices, provides the physical components needed for computation, but hardware by itself can do nothing useful without the explicit step-by-step instructions provided by computer software.

Computer software consists of sequences of instructions for the CPU; such a sequence is referred to as a program. Programs vary in size and complexity, from a simple utility that prints the current time and date on the screen to a spreadsheet application or a full-featured word processor. Each instruction directs the CPU to do one simple task, such as accessing the contents of a memory location, adding two numbers, or jumping to a different part of the program depending on the value contained in a register. Because individual instructions are so simple, it takes many of them to create a program. Complicated programs, such as word processors, contain millions of instructions and may require years of development time.
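To make the scale of this concrete, the short C++ sketch below performs a seemingly trivial task, summing the numbers from 1 to 10, yet each line decomposes into several of the simple instruction types just described. The comments indicate, in simplified form, the kind of work the CPU performs; actual instruction sequences vary by processor and compiler.

#include <iostream>

int main() {
    int total = 0;                   // store 0 in the memory location for "total"
    for (int i = 1; i <= 10; ++i) {  // compare i to 10, then conditionally jump
        total = total + i;           // load two values, add them, store the result
    }
    std::cout << total << '\n';      // hand the result to an output routine
    return 0;
}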

Historical Background

In 1945, the Hungarian-born mathematician John von Neumann described the first practical way for computers to use software. The machine designed around this fundamental advance was called the EDVAC (Electronic Discrete Variable Automatic Computer). All previous computers, including the Electronic Numerical Integrator and Computer (ENIAC), the first general-purpose electronic digital computer, had to be rewired every time a different program was run. This rewiring was a time-consuming, tedious, and error-prone task. With the EDVAC, programs could be loaded from external storage (typically punched paper tape) in exactly the same way as data. In fact, at the level of memory storage, programs were indistinguishable from data. By clever manipulation of the binary codes used to represent instructions in memory, it was even possible to write programs that modified themselves as they ran. This infinitely malleable quality of software is, for many individuals, what makes it a fascinating and sometimes consuming passion.

During the earliest years of general-purpose computing, programmers explicitly wrote all the instructions that the computer would execute in the course of running a program, including the instructions needed for input and output of the data on which the program performed its computations. As a result, large numbers of programs contained identical sections of code for common chores such as reading data at the beginning of a run and writing results at the end. Programmers soon realized that these commonly used sections of code could be stored in a “library” that their programs could access whenever they needed to perform a common system-level function. For example, a library of routines to send output to the printer could be provided for programmers writing data processing software. The remaining code in each program would then be only what the unique requirements of its specific task demanded. This allowed programmers to focus on the problem-solving, applied aspects of their programs and paved the way not only for increased programmer productivity but also for software production and computing on a global scale.
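The same idea survives essentially unchanged in modern practice. In the C++ sketch below, the output routine lives in one shared function that any number of programs could call rather than each carrying its own copy of the code; the function print_result is hypothetical, invented here for illustration.

#include <cstdio>

// A shared "library" routine: written once, reused by many programs.
void print_result(const char* label, double value) {
    std::printf("%s: %f\n", label, value);
}

int main() {
    // Each program supplies only its task-specific logic...
    double total = 19.99 * 3;
    // ...and calls the common routine instead of rewriting the output code.
    print_result("Order total", total);
    return 0;
}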

Programming Languages

A single CPU instruction does very little, and useful programs require thousands or even millions of instructions. If a programmer had to write each instruction individually, developing software would be labor intensive and error prone. To expedite development and improve reliability, programmers therefore use programming languages instead of writing machine instructions directly.

To see how a programming language can simplify the process of software development, consider the following sequence of machine instructions that adds two numbers. First, the computer loads the value stored at memory location X into a CPU register, then adds the number stored at memory location Y to it, and finally stores the result in memory location Z:

LOAD X
ADD Y
STORE Z

In a programming language such as FORTRAN (Formula Translation), the programmer merely has to write Z = X + Y. This is not only easier to write; it is also easier for other programmers to read, which matters for the long-term maintenance of large systems, where a program may continue to go through development cycles years after the original programmer has left the company.

Even though most software is written using programming languages, computers still only understand CPU instructions. Thus, a special program called a “compiler” translates the programming language statements written by the programmer into the equivalent CPU instructions. Each programming language requires its own special compiler. Due to the generalized nature of programming languages, compilers are large, complicated programs that require months or even years of development time by teams of expert programmers. Developing a new programming language is therefore a long and expensive undertaking.
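As a rough sketch of what a compiler does, the C++ fragment below repeats the addition shown earlier; the comments show the kind of instruction sequence a compiler might emit for the single high-level statement, in the same simplified notation used above (a real compiler generates processor-specific instructions and applies many optimizations).

int main() {
    int x = 2, y = 3;
    int z = x + y;   // a compiler might translate this statement into:
                     //   LOAD  X   ; copy x from memory into a register
                     //   ADD   Y   ; add y to the register's contents
                     //   STORE Z   ; copy the result back to memory
    return z;
}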

Hundreds of computer languages have been developed since the late 1950s, each with specialized purposes or particular kinds of users in mind. A few of the most common ones are briefly described below.

The first version of FORTRAN, which was developed for use in numerical and scientific applications, was released in 1957. LISP (List Processing), which appeared shortly afterward, was developed for use in artificial intelligence research. Both languages have been used extensively since that time and are still widely used.

COBOL (Common Business Oriented Language) was released in 1960 as a language to be used for business applications. It was designed to have an English-like structure and vocabulary so nonprogrammers (e.g., managers, accountants) could read it. While its effectiveness in this regard has been subject to debate, COBOL continues to be used extensively in business, particularly for large mainframe-based computer applications in banking and insurance.

Many programmers learn BASIC (Beginner’s All-purpose Symbolic Instruction Code) as a first language. BASIC has an easy-to-learn syntax and does not require the complicated system software that FORTRAN and COBOL do. Because the syntax and resource usage of rudimentary versions of BASIC are so modest, it was the choice of microcomputer manufacturers in the early 1980s, when the RAM supplied with such computers was as small as 8 kilobytes and the BASIC interpreter and system software were permanently stored in read-only memory (ROM). More recent versions of BASIC take advantage of the larger capacity and increased speed of personal computers to offer more language features. Some versions now resemble Pascal, a language developed in the late 1960s for programming education that included sophisticated features to help programmers avoid common structural errors in their programs.

During the 1980s, C became a widely used language. Developed by Dennis Ritchie in 1972 at AT&T Bell Laboratories, C has a sophisticated syntax that lets programmers write statements that would be very difficult and tedious to construct in machine language, yet it also allows the programmer to manipulate individual bits and memory addresses directly, something most other programming languages do not permit. C proved flexible and powerful enough that it was used to implement the UNIX operating system.
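The sketch below, written in the C-compatible subset of C++, illustrates the kind of direct manipulation described above: taking the address of a variable, writing to memory through a pointer, and setting or clearing individual bits with bitwise operators.

#include <cstdio>

int main() {
    int value = 0;
    int* address = &value;   // obtain the memory address of "value"
    *address = 41;           // write to that memory location directly

    unsigned flags = 0;
    flags |= 1u << 3;        // set bit 3 without disturbing the others
    flags &= ~(1u << 3);     // clear bit 3 again

    std::printf("value = %d, flags = %u\n", value, flags);
    return 0;
}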

C continues to be used to implement operating systems and applications and is the principal language used to implement Linux, the freely distributed UNIX-like operating system developed by Linus Torvalds and other developers. The idea behind Linux is to provide computer users with a technically sophisticated alternative to other operating systems, particularly Microsoft Windows, and to do it in such a way that no large corporation or other centralized entity can control its licensing or otherwise dictate its terms of use.

Beginning in the late 1960s, attention turned to “object orientation” as a way to improve programmer productivity, software reliability, and the portability of programs between different operating systems. Object orientation allows programmers to think about program code and data much as individuals think about objects in the real world, with the inner workings hidden and only the parts intended for user access visible. A real-world object that relates well to software objects is a portable radio: it is self-contained, has complicated inner workings, and presents only a few simple controls to its user. The radio is made to work by manipulating the external controls, not by tinkering with the electronics inside it. Software objects simplify software development by allowing users (including programmers who use objects to build other programs) to manipulate only the external attributes of objects; only the developer of an object can tinker with its inner workings. Smalltalk, publicly released in 1980, was one of the first languages designed specifically to be object oriented, and it continues to be the model for object-oriented systems. C++ was developed as an object-oriented extension of C, but it has proved complicated and difficult to manage for large projects.
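The radio analogy translates directly into code. In the C++ sketch below (C++ being the object-oriented extension of C mentioned above), the class’s data members are private, like the electronics sealed inside the radio’s case, while a few public member functions serve as the external controls; the Radio class is invented here purely for illustration.

#include <iostream>

class Radio {
public:
    // The "external controls": the only way users interact with the object.
    void power(bool on) { powered = on; }
    void tune(double megahertz) { if (powered) frequency = megahertz; }
    double station() const { return frequency; }

private:
    // The "inner workings": hidden, alterable only by the class's author.
    bool powered = false;
    double frequency = 88.5;
};

int main() {
    Radio radio;
    radio.power(true);
    radio.tune(101.1);
    std::cout << "Tuned to " << radio.station() << " MHz\n";
    return 0;
}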

In 1995, Sun Microsystems introduced Java, an object-oriented language designed for developing Internet-based applications. Java’s syntax is based on C, to encourage C and C++ developers to adopt it, and it spares programmers who build network-aware applications from doing complicated network programming. Java is also designed as a platform-independent language, allowing programmers to write code once and run it on any operating system and hardware. It achieves this by running within a special environment, the Java virtual machine, that makes all platforms appear the same. Java “applets” are small programs that run inside World Wide Web browsers and are often used as part of web pages, while Java “applications” run outside of web browsers, just as other applications do.

Influence of Software on Computer Markets

Even though people tend to think of computer hardware as the more substantial part of the computing package, software is often the more important consideration when buying a computer. Software is also more valuable because of the greater effort and expense involved in its development. Developing hardware is by no means trivial, but advances in hardware are typically manifested by increased speed and memory capacity. Advances in software are more difficult to measure, and they may only become apparent to users who have specialized needs or who are in particular circumstances.

Once users find software that satisfies their needs, they typically do not want to change, even when a new version or a competitor’s version becomes available that may better serve them. The use of a particular piece of software represents not only an investment in that software and the hardware to run it, but also in training of the personnel who use it and do collaborative work with it. In this way, the use of particular kinds of software (as with any other tool) becomes enmeshed in the culture of the community that uses it. As time goes on, changing to a different system becomes more expensive. Therefore, the software company that establishes a greater market share first will most likely dominate the market thereafter.

This was the case with IBM, which dominated the computer hardware and software market from the 1950s through most of the 1980s, even though its hardware and software were not particularly advanced. Its products were sufficient to do the data processing jobs its customers required, and IBM’s sales and service staff made up for any deficiencies that remained.

By the late 1990s, Microsoft had taken over the dominant position in the software market that IBM had once held. Microsoft uses its dominance in the operating system market to leverage sales of its application software, which it claims makes the best use of its operating system features. The aggressiveness with which Microsoft has pursued this dominance, however, has brought it legal difficulties. Competing software companies have successfully sued Microsoft for predatory business practices as well as for breach of licensing agreements. Most significantly, in late 1999, a federal court hearing an antitrust suit brought by the U.S. Department of Justice found that Microsoft held monopoly power, and it subsequently ruled that the company had violated antitrust statutes.

Conclusion

Computer software is an integral part of everyday life, not only in the use of personal computers but also behind the scenes of every business transaction, telephone call, and even in the use of everyday devices such as automobiles. Even devices that are not ostensibly computers may contain one or more small, embedded computers to enhance their operation in some way. All of these computers depend on the proper operation of software to accomplish their tasks. In certain respects, the operation of software is indistinguishable from the operation of the device that houses it, and some devices that were introduced in the late 1990s (such as DVD players and other digital media devices) would not even be possible without software.

Software will continue to be an ever more pervasive presence for the foreseeable future; it will become increasingly difficult, if not altogether impossible, to do anything without using software in some way. This presents challenges on a number of fronts, the most important being that software be reliable and that people retain control over how it is used, especially when it involves the transmission of personal information over data networks such as the Internet.

The open source movement, promoted by the Open Source Initiative (OSI) and exemplified by the Linux operating system, seeks to improve the software development and distribution process by having a large, loosely knit group of developers write code that is freely available to all who use it and freely modifiable by developers who discover bugs and other problems. This contrasts with the corporate model of software development, in which only the company that produces the software maintains it and users have no access to the source code.

