I doubt that anyone reading this post right now can bear spending more than a few hours away from the warmth and comfort of the modern electronic device, whether that be a phone, a computer screen, a television, or even a washing machine. Upon light inspection, these devices may seem relatively simple (maybe not the washing machine, I have no idea how to use that), but the inner workings that allow these devices to operate as efficiently and as smoothly as they do are extremely complex.
All modern ‘computers’ (i.e. anything that is programmed to carry out operations automatically) operate in the same sort of way, using a method known as classical computation, which was first conceived in the first half of the 20th century and is still used today. This concept relies on bits – entities with two discrete states (usually referred to as 0 and 1, or ‘on’ and ‘off’) which can be used to encode information. This is brought into reality by the use of transistors – semiconductor components which experience a huge drop in resistance once a certain voltage is applied. This drop in resistance means that a transistor can either restrict current (state 0) or allow it to flow (state 1). By controlling the voltage applied to each transistor on a circuit, the state of the transistor can be flipped, thereby allowing us to encode information very easily. You could say that transistors are the fundamental building blocks of all electronic devices.
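To make the idea concrete, here is a tiny Python sketch of bits doing exactly what the paragraph describes: a character stored as eight two-state entities, with one state flipped just as a transistor's state would be. The function names are my own illustration, not any real hardware interface.

```python
# A bit is just a two-state entity. Here we encode the letter 'A' as a
# row of bits, the way a row of transistors would store it
# (1 = current flows, 0 = current restricted).

def to_bits(char):
    """Return the 8-bit binary representation of a single character."""
    return format(ord(char), "08b")

def flip_bit(bits, index):
    """Flip one bit -- the software analogue of switching a transistor's state."""
    flipped = "1" if bits[index] == "0" else "0"
    return bits[:index] + flipped + bits[index + 1:]

bits = to_bits("A")
print(bits)               # 'A' is 65, stored as 01000001
print(flip_bit(bits, 7))  # flip the last bit: 01000000, which is '@'
```

Eight transistors, eight bits, one character – flip a single state and you have stored something different.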
Ever since the end of the Second World War, the development of computer speed has skyrocketed year on year. One major contributing factor to this rapid development is the ability of manufacturers to cram more and more transistors into a very small space, which is why phones and laptops can be made extremely thin. In 1965, Gordon Moore, the co-founder of Intel, observed that the number of transistors in a dense integrated circuit had been doubling roughly every year – a rate he later revised to every two years, and a trend now known as Moore’s Law. He predicted that this exponential growth would continue for at least another decade, and it held true up until about 2012.
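The doubling rule is simple enough to check with a few lines of arithmetic. The sketch below projects transistor counts forward from the Intel 4004 (2,300 transistors in 1971, a well-known data point); the doubling period is the only assumption.

```python
# Moore's Law as plain arithmetic: a doubling every two years.

def moores_law(start_count, start_year, year, doubling_period=2):
    """Project transistor count assuming one doubling every `doubling_period` years."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# From 2,300 transistors in 1971, the rule predicts billions by the 2010s:
print(round(moores_law(2300, 1971, 2011)))  # ~2.4 billion
```

Forty years is twenty doublings, so 2,300 transistors becomes roughly 2.4 billion – which is in fact the order of magnitude of real processors of that era.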
Currently, the smallest transistors are about 20 nanometres across (several thousand times thinner than a human hair). However, transistors much smaller than this begin to pose a problem. A transistor carrying a current will contain electrons flowing through it that are small enough to be radically affected by Heisenberg’s uncertainty principle (see Quantum of Energy). Their exact position can therefore not be pinned down. If the transistor is so small that electrons can slip straight through the barrier that is supposed to block them – a process known as quantum tunnelling – then any stored information would be lost, as the flow of electrons can no longer be controlled, rendering the computer ineffective. This poses a fundamental limit on the number of transistors that can be placed onto a processor, and hence on the power of a classical computer.
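To see why shrinking the transistor makes tunnelling so much worse, here is a rough estimate sketched under a standard textbook approximation: the probability of an electron tunnelling through a rectangular barrier falls off as T ≈ exp(−2d√(2m(V−E))/ħ), where d is the barrier width. The barrier height (1 eV) and electron energy (0.5 eV) below are illustrative numbers, not values for any real device.

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31      # electron mass, kg
EV = 1.602176634e-19     # one electronvolt in joules

def tunnelling_probability(barrier_width_nm, barrier_eV=1.0, energy_eV=0.5):
    """Approximate probability that an electron tunnels through a
    rectangular barrier of the given width (textbook WKB-style estimate)."""
    d = barrier_width_nm * 1e-9
    kappa = math.sqrt(2 * M_E * (barrier_eV - energy_eV) * EV) / HBAR
    return math.exp(-2 * kappa * d)

# The probability climbs steeply as the barrier shrinks:
for width in (5, 2, 1):
    print(f"{width} nm barrier: T ~ {tunnelling_probability(width):.1e}")
```

The point is the exponential: halving the barrier width does not double the leakage, it multiplies it by many orders of magnitude – which is why there is a hard floor on how small a reliable classical transistor can be.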
The key word there, however, is ‘classical’. What happens if we start to make direct use of quantum mechanical phenomena in computers? For the past 10 years or so, researchers have been working on designing and creating a functional quantum computer, but as of now, its development is still in its infancy. Scientists have already built basic quantum computers that are able to perform certain calculations, but a practical quantum computer is still years away. But mark my words, quantum computing will bring forward a new, unparalleled era of computing. Want to find out how? Well you’re gonna have to wait till this time next week. Sorry.