We take microprocessors for granted. People who aren’t deeply involved in technology may have no idea of all the places a microprocessor shows up in their lives. It’s not just their computers; it’s innumerable devices they use on a daily basis.
What Is a Microprocessor?
A microprocessor integrates three essential elements onto a single chip:
- A central processing unit, or CPU. This is the engine that does the actual computing work.
- Memory, which stores both the inputs to and the outputs from the computing operations.
- Input/output, or I/O: capabilities that make it possible both to provide the microprocessor with the data it needs to do its work (the inputs) and to read out the results of that work (the outputs).
The definition of a microprocessor can get a bit murky for those who want to parse things closely, but the critical element that distinguishes a microprocessor (which is largely built out of logic) from other logic chips is that microprocessors use software to define their functions.
So, when was the first single-chip processor born? If we focus on commercial availability, that nod goes to the Intel 4004, which was introduced in 1971 – 50 years ago.
The microprocessors we use today bear little resemblance to the first ones. As we’ve been able to place more transistors on a chip, the architecture of the microprocessor has become increasingly sophisticated. The “best” architecture at any time depends on what that microprocessor’s job will be, but, in general, the microprocessor has evolved along four lines.
- The number of “bits.” This specifies the size of the minimum chunk of data. In general, the bigger the chunk, the more work can be done in a given period of time. Starting from 4 bits, we’ve moved through 8, 16, 32, and 64 bits. Specialized processors may be bigger, but the standard size for today’s general-purpose high-performance microprocessors is 64 bits.
- The length of the “pipeline.” In order to make processors work with faster clocks, one major approach has been to divide the computing task into subtasks. The idea is that, the smaller the subtask, the faster that portion can be done, which means you can run the clock faster and deliver results faster.
- Specialized features for speed. For instance, when a computer program has a decision to make, you don’t know what the result of that decision will be until you run the actual program. Sophisticated processors can now make good guesses as to what the likely result will be – a technique known as branch prediction – so that they can prepare for that decision early and keep things moving quickly.
- More than one CPU. This is referred to as “multicore” computing, and the idea is simple: if one CPU can finish a job in a certain amount of time, then more CPUs can do the job even faster. This is often true, although it’s a tricky business. Not all jobs can be shared easily between CPUs. Nonetheless, major mainstream microprocessors today typically contain more than one CPU.
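The multicore idea – divide a job into pieces, run the pieces concurrently, and combine the results – can be sketched in a few lines of Python. This is an illustrative sketch, not a hardware model: the `parallel_sum` and `partial_sum` helpers are hypothetical names, and a thread pool stands in for the extra CPUs. (In CPython, threads won’t actually speed up pure-Python arithmetic because of the interpreter lock; worker processes or compiled code would be needed for a real speedup, which is part of why sharing work across CPUs is a tricky business.)

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """One subtask: sum a single chunk of the range (what one core would do)."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split summing 0..n-1 into chunks, run them concurrently, combine."""
    step = max(1, n // workers)
    # Carve the range into half-open chunks [lo, lo+step)
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Combining here is easy because addition is associative;
        # many jobs don't split this cleanly, which limits multicore gains.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    assert parallel_sum(1_000_000) == sum(range(1_000_000))
```

Summation parallelizes cleanly because the chunks are independent and the combine step is trivial; jobs whose steps depend on each other’s results are far harder to spread across CPUs.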
New Devices, New Tasks, New Priorities
Of course, speed isn’t the only consideration anymore. Especially when it comes to battery-powered devices, power consumption is equally important. Keeping processors small is also critical if they’re going to be used in space-constrained devices like smart watches. And combining them with more memory and other specialized circuits gives us microcontrollers, which are used in innumerable devices.
All in all, some version of the microprocessor exists in pretty much any electronic gadget we use these days. And it all started 50 years ago.
So, we wish the microprocessor a very Happy Golden Anniversary.