What Is a Capacitor?

Ben Franklin used a Leyden jar in his famous kite experiment.

A capacitor is a device consisting of two conductive plates, each of which carries an opposite charge. The plates are separated by a dielectric or other form of insulator, which allows them to maintain an electric charge. Several kinds of insulator are used in capacitors, including ceramic, polyester, tantalum oxide, and polystyrene; other common insulators include air, paper, and plastic. Each effectively prevents the plates from touching each other.
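
How much charge a given pair of plates can hold depends on their area, their separation, and the insulator between them. As a rough illustration (not something spelled out in the article), the following Python sketch applies the standard parallel-plate formula C = ε₀·εᵣ·A/d, with assumed example dimensions and an assumed dielectric constant:

```python
# Parallel-plate capacitance sketch: C = epsilon_0 * epsilon_r * A / d
# (illustrative values only; not taken from the article)

EPSILON_0 = 8.854e-12  # permittivity of free space, in farads per meter

def parallel_plate_capacitance(area_m2, separation_m, relative_permittivity=1.0):
    """Return the capacitance in farads of an ideal parallel-plate capacitor."""
    return EPSILON_0 * relative_permittivity * area_m2 / separation_m

# Example: 1 cm x 1 cm plates, 0.1 mm apart, with and without a ceramic dielectric
c_air = parallel_plate_capacitance(1e-4, 1e-4)            # air gap
c_ceramic = parallel_plate_capacitance(1e-4, 1e-4, 6.0)   # assumed ceramic, er ~ 6

print(f"Air dielectric:     {c_air * 1e12:.1f} pF")
print(f"Ceramic dielectric: {c_ceramic * 1e12:.1f} pF")
```

Note that the relative permittivity of a real ceramic dielectric varies widely by material; the value of 6 here is only a placeholder.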

There are a number of different ways to use a capacitor, such as storing analog signals or digital data. Another type, often referred to as a variable capacitor, is used in the telecommunications industry to adjust the frequency and tuning of telecommunications equipment. A capacitor is also ideal for storing electrons, but it cannot produce them.

The first capacitor was the Leyden jar, invented at the University of Leiden in the Netherlands in the 18th century. It consists of a glass jar coated with metal on the inside and outside. A rod is connected to the inner coat of metal, passed through the lid, and topped with a metal ball. As with all capacitors, the jar contains two oppositely charged conductors separated by an insulator, in this case the glass itself. The Leyden jar has been used to conduct experiments in electricity for hundreds of years.

A capacitor is rated by its voltage, the potential difference between its two plates. Both plates are charged, but the current flows in opposite directions. A capacitor might hold 1.5 volts, for example, the same voltage found in a common AA battery. As the capacitor is charged, current flows steadily into one of the two plates while, at the same time, current flows away from the other plate.
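
The relationship between charge, capacitance, and voltage can be made concrete with the standard formulas Q = C·V and E = ½·C·V². The short Python sketch below applies them to the 1.5-volt example above, using an assumed capacitance of 100 microfarads purely for illustration:

```python
# Charge and stored energy for a capacitor at a given voltage.
# Q = C * V and E = 0.5 * C * V**2 (the capacitance value is an assumption)

def charge_coulombs(capacitance_f, voltage_v):
    """Charge stored on the plates, in coulombs."""
    return capacitance_f * voltage_v

def energy_joules(capacitance_f, voltage_v):
    """Energy stored in the electric field between the plates, in joules."""
    return 0.5 * capacitance_f * voltage_v ** 2

capacitance = 100e-6   # 100 microfarads, an assumed example value
voltage = 1.5          # the same voltage as a common AA battery

print(f"Charge stored: {charge_coulombs(capacitance, voltage) * 1e6:.0f} microcoulombs")
print(f"Energy stored: {energy_joules(capacitance, voltage) * 1000:.3f} millijoules")
```

At 1.5 volts, the assumed 100-microfarad capacitor holds 150 microcoulombs of charge and roughly 0.11 millijoules of energy.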

To understand the flow of voltage in a capacitor, it is helpful to look at naturally occurring examples. Lightning, for example, works in a similar way. The cloud represents one of the plates and the ground represents the other. The lightning is the charging factor moving between the ground and the cloud.

 

Source / Courtesy : WiseGeek

What is a Computer Chip?

Computer chips are one of the basic components of most electronic devices.

A computer chip is a small electronic circuit, also known as an integrated circuit, that is one of the basic components of most kinds of electronic devices, especially computers. Computer chips are small and are made of semiconductor material, usually silicon, on which several tiny components, including transistors, are embedded and used to transmit electronic data signals. They became popular in the latter half of the 20th century because of their small size, low cost, high performance, and ease of production.

The modern computer chip saw its beginning in the 1950s through two separate researchers who were not working together but developed similar chips. The first was developed at Texas Instruments by Jack Kilby in 1958, and the second was developed at Fairchild Semiconductor by Robert Noyce in 1959. These first computer chips used relatively few transistors, usually around ten, and were known as small-scale integration chips. As the century went on, the number of transistors that could be attached to a computer chip increased, as did their power, with the development of medium-scale and large-scale integration computer chips. The latter could contain thousands of tiny transistors and led to the first computer microprocessors.

There are several basic classifications of computer chips, including analog, digital and mixed signal varieties. These different classifications of computer chips determine how they transmit signals and handle power. Their size and efficiency are also dependent upon their classification, and the digital computer chip is the smallest, most efficient, most powerful and most widely used, transmitting data signals as a combination of ones and zeros.

Robert Noyce was one of the first developers of the modern computer chip.

Today, large-scale integrated chips can contain millions of transistors, which is why computers have become smaller and more powerful than ever. Computer chips are also used in just about every electronic application, including home appliances, cell phones, and transportation systems, touching just about every aspect of modern living. It has been posited that the invention of the computer chip is one of the most important events in human history. The future of the computer chip will include smaller, faster, and even more powerful integrated circuits capable of doing things that are amazing even by today’s standards.

Source / Courtesy : WiseGeek

What Is a Transistor?

A transistor is a semiconductor device, differentiated from a vacuum tube primarily by its use of a solid, non-moving part to pass a charge. Transistors are crucial components in virtually every piece of modern electronics, and are considered by many to be the most important invention of the modern age (as well as a herald of the Information Age).

The development of the transistor grew directly out of huge advances in diode technology during World War II. In 1947, scientists at Bell Laboratories unveiled the first functional model after a number of false starts and technological stumbling blocks.

The first important use of the transistor was in hearing aids made by military contractor Raytheon, the inventor of the microwave oven and producer of many widely used missiles, including the Sidewinder and the Patriot.

The first transistor radio was released in 1954 by Texas Instruments, and by the beginning of the 1960s, these radios had become a mainstay of the worldwide electronics market. Also in the 1960s, transistors were integrated into silicon chips, laying the groundwork for the technology that would eventually allow personal computers to become a reality. In 1956, William Shockley, Walter Brattain, and John Bardeen won the Nobel Prize in Physics for their development of the transistor.

The first important use of the transistor was in hearing aids.

The primary type currently in use is known as the bipolar junction transistor, which consists of three layers of semiconductor material, two of which have extra electrons and one of which has gaps, or “holes,” in it. The two layers with extra electrons (N-type) sandwich the layer with gaps (P-type). This configuration allows the transistor to act as a switch, closing and opening rapidly like an electronic gate and allowing current to pass at a determined rate. If the device is not shielded from light, light can be used to open or close the gate, in which case it is referred to as a phototransistor and functions as a highly sensitive photodiode.
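
One rough way to picture this switching behavior is to treat the collector current as proportional to the base current (I_C ≈ β·I_B) until the transistor saturates and cannot pass any more current through its load. The sketch below is a simplified Python model of that idea; the gain, supply voltage, and load values are assumptions chosen only for illustration:

```python
# Simplified bipolar-junction-transistor switch model.
# Collector current follows I_C = beta * I_B until the device saturates.
# Gain, supply voltage, and load resistance are assumed example values.

SUPPLY_V = 5.0        # assumed supply voltage
LOAD_OHMS = 1000.0    # assumed load resistance
BETA = 100.0          # assumed current gain

def collector_current(base_current_a):
    """Return collector current in amps, clamped at the saturation limit."""
    i_saturation = SUPPLY_V / LOAD_OHMS   # the most current the load will allow
    return min(BETA * base_current_a, i_saturation)

for i_base in (0.0, 10e-6, 50e-6, 200e-6):   # base currents in amps
    i_c = collector_current(i_base)
    if i_c == 0.0:
        state = "OFF"
    elif i_c < SUPPLY_V / LOAD_OHMS:
        state = "partially on"
    else:
        state = "ON (saturated)"
    print(f"I_B = {i_base * 1e6:5.0f} uA -> I_C = {i_c * 1e3:6.3f} mA  ({state})")
```

With these assumed values, a base current of 50 microamps or more is enough to drive the transistor into saturation, that is, to switch it fully on.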

The second type is known as the field-effect transistor, and it consists of a channel of either N-type or P-type semiconductor material, with the current through the channel controlled by the voltage applied to a gate electrode.

Source / Courtesy : WiseGeek

How Do Microprocessors Work?

A microprocessor acts through a series of instructions.

Microprocessors use a number of different processes to function. Their main purpose is to process a series of numbers placed into sequences which make up a program. Each of these sequences gives some sort of instruction to the microprocessor which, in turn, relays information to other parts of the computer. This facilitates the actions necessary for the program to function. Microprocessors are types of central processing units (CPUs), essentially the central brain of a computer. A microprocessor takes the form of a computer chip placed on a motherboard, which operates as the relay center for all the higher functions processed by the CPU.

When a microprocessor is activated, it performs a series of actions, each one defining an exact point of communication. This communication gives instructions in the form of binary code, a series of ones and zeros. The CPU then responds to the instructions by processing the code, taking the necessary actions requested by the code, and relaying to the responsible input section that the action has successfully taken place.

The first step in this process is known as the fetch action. A program will elicit a series of ones and zeroes that define an exact action. Part of the sequence is responsible for informing microprocessors of the location of the necessary code within the program. This is the portion in which random access memory (RAM) is used. The RAM provides the memory for the CPU to be able to hold the instructions long enough for them to be used. When there is not enough RAM in a computer, the computer slows down.

The next step in the workload of a microprocessor is known as the decoding action. Each set of numbers within the sequence is responsible for a certain action. In order for the CPU to order the correct components to do their jobs, each part of the sequence of numbers must be identified and given the correct operational parameters. For example, if a user is burning a DVD, the CPU needs to communicate certain numerical values to the DVD unit that burns the disk, the hard drive that supplies the information, and the video card that displays the status for the user.

Microprocessors work with the computer’s hard drive.

Execution is the next step in the function of microprocessors. Essentially, the CPU tells the computer components to do their jobs. During the execution phase, the microprocessor stays in constant contact with the components, making sure each portion of the activity is successfully completed according to the instructions gathered and sent during the previous two steps.

The final action for microprocessors involves the writeback function. This is simply the CPU making a copy of the actions and their results onto the computer’s main memory, usually found in the hard drive. The writeback step is essential to determining problematic issues when something goes wrong. For example, if the DVD did not burn correctly, a user can access the writeback files and find out which step occurred without success. These files are placed in a section of the memory known as the registry, which often suffers from increased levels of corruption as redundant actions are completed regularly.
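
To make the four stages concrete, here is a minimal toy interpreter in Python that walks through fetch, decode, execute, and writeback for a made-up machine with a handful of instructions. The instruction set and the “writeback log” are illustrative simplifications, not a description of any real processor:

```python
# Toy fetch-decode-execute-writeback loop for an invented instruction set.
# Each instruction is an (opcode, operand) pair; this is purely illustrative.

program = [            # the "RAM" holding the program's instructions
    ("LOAD", 7),       # put the value 7 into the accumulator
    ("ADD", 5),        # add 5 to the accumulator
    ("ADD", 10),       # add 10 more
    ("HALT", 0),       # stop the program
]

accumulator = 0
program_counter = 0
writeback_log = []     # stand-in for copying results back to main memory

while True:
    # 1. Fetch: read the next instruction from memory.
    opcode, operand = program[program_counter]

    # 2. Decode: decide which operation the opcode requests.
    if opcode == "HALT":
        break

    # 3. Execute: carry out the operation.
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand

    # 4. Writeback: record the result so it can be inspected later.
    writeback_log.append((program_counter, opcode, operand, accumulator))
    program_counter += 1

print("Final accumulator value:", accumulator)   # 22
for entry in writeback_log:
    print("step", entry)
```

Running the program leaves 22 in the accumulator, and the writeback log records what happened at each step, which mirrors how the writeback files described above can be inspected when something goes wrong.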

What Is a Digital Computer? (Interview Question Explained)

Most computers operate using binary code and could be considered digital.

A digital computer is a machine that stores data in a numerical format and performs operations on that data using mathematical manipulation. This type of computer typically includes some sort of device to store information, some method for input and output of data, and components that allow mathematical operations to be performed on stored data. Digital computers are almost always electronic, but they do not necessarily need to be.

There are two main methods of modeling the world with a computing machine. Analog computers use some physical phenomenon, such as electrical voltage, to model a different phenomenon, and perform operations by directly modifying the stored data. A digital computer, however, stores all data as numbers and performs operations on that data arithmetically. Most computers use binary numbers to store data, as the ones and zeros that make up these numbers are easily represented with simple on-off electrical states.
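
Because each binary digit maps directly to a simple on-off electrical state, any stored number can be written as a pattern of ones and zeros. The short Python sketch below shows that mapping for a few sample values:

```python
# Representing numbers as the on/off (1/0) states a digital computer stores.

def to_bits(value, width=8):
    """Return the binary representation of a non-negative integer as a string."""
    return format(value, f"0{width}b")

for number in (0, 5, 42, 255):
    print(f"{number:3d} -> {to_bits(number)}")
```

Eight such bits are enough to represent any value from 0 to 255, which is why groups of eight bits (bytes) became a standard unit of storage.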

Computers based on analog principles have advantages in some specialized areas, such as their ability to continuously model an equation. A digital computer, however, has the advantage of being easily programmable. This means that they can process many different sets of instructions without being physically reconfigured.

Digital computers store data in a numerical format.

The earliest digital computers date back to the 19th century. An early example is the analytical engine theorized by Charles Babbage. This machine would have stored and processed data mechanically. That data, however, would not have been stored in analog form but rather as a series of digits represented by discrete physical states. This computer would have been programmable, a first in computing.

Digital computing came into widespread use during the 20th century. The pressures of war led to great advances in the field, and electronic computers emerged from the Second World War. This sort of digital computer generally used arrays of vacuum tubes to store information for active use in computation. Paper tape and punched cards were used for longer-term storage. Keyboard input and monitors emerged later in the century.

In the early 21st century, computers rely on integrated circuits rather than vacuum tubes. They still employ active memory, long-term storage, and central processing units. Input and output devices have multiplied greatly but still serve the same basic functions.

In 2011, computers are beginning to push the limits of conventional circuitry. Circuit pathways in a digital computer can now be printed so close together that effects like electron tunneling must be taken into consideration. Work on digital optical computers, which process and store data using light and lenses, may help in overcoming this limitation.

Nanotechnology may lead to a whole new variety of mechanical computing. Data might be stored and processed digitally at the level of single molecules or small groups of molecules. An astonishing number of molecular computing elements would fit into a comparatively tiny space. This could greatly increase the speed and power of digital computers.

Circuit pathways in a digital computer can now be printed extremely close together.
Early analog computers used to take up entire rooms.
Digital components like processors are typically more versatile than analog ones.
Inventor Charles Babbage conceived the idea of the steam-powered Difference Engine in 1822.

Source / Courtesy : WiseGeek

What are Integrated Circuits (ICs)?

 

A central processing unit, a type of integrated circuit.

An integrated circuit (IC), popularly known as a silicon chip, computer chip or microchip, is a miniature electronic circuit rendered on a sliver of semiconducting material, typically silicon, but sometimes sapphire. Owing to their tiny dimensions and incredible processing power — modern integrated circuits host millions of transistors on chips as small as 5 millimeters (about 0.2 inches) square and 1 millimeter (0.04 inches) thick — they are found in virtually every modern-day appliance and device, from credit cards, computers, and mobile phones to satellite navigation systems, traffic lights and airplanes.

Essentially, an integrated circuit is a composite of various electronic components, namely, transistors, resistors, diodes and capacitors, that are organized and connected in a way that produces a specific effect. Each unit in this ‘team’ of electronic components has a unique function within the integrated circuit. The transistor acts like a switch and determines the ‘on’ or ‘off’ status of the circuit; the resistor controls the flow of electricity; the diode permits the flow of electricity only when some condition on the circuit has been met; and finally the capacitor stores electricity prior to its release in a sustained burst.
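
As a very rough sketch of two of those roles, the Python snippet below models the resistor with Ohm’s law and the diode as an ideal switch that conducts only above a forward-voltage threshold. The threshold and resistance values are assumptions chosen for illustration, not properties of any particular circuit:

```python
# Idealized models of two of the components named above (values are assumptions).

def resistor_current(voltage_v, resistance_ohms):
    """Ohm's law: the resistor limits current to I = V / R."""
    return voltage_v / resistance_ohms

def diode_conducts(voltage_v, forward_drop_v=0.7):
    """An ideal diode passes current only once its forward-voltage condition is met."""
    return voltage_v >= forward_drop_v

print(resistor_current(5.0, 1000.0))   # 0.005 A through a 1 kilohm resistor at 5 V
print(diode_conducts(0.3))             # False: below the assumed 0.7 V threshold
print(diode_conducts(0.9))             # True: the diode conducts
```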

The first integrated circuit was demonstrated by Texas Instruments’ employee Jack Kilby in 1958. This prototype, measuring about 11.1 by 1.6 millimeters, consisted of a strip of germanium and just one transistor. The advent of silicon coupled with the ever diminishing size of integrated circuits and the rapid increase in the number of transistors per millimeter meant that integrated circuits underwent massive proliferation and gave rise to the age of modern computing.

From its inception in the 1950s to the present day, integrated circuit technology has passed through various ‘generations’ that are now commonly referred to as Small Scale Integration (SSI), Medium Scale Integration (MSI), Large Scale Integration (LSI), and Very Large Scale Integration (VLSI). These progressive technological generations describe an arc in the progress of IC design that illustrates the prescience of Intel co-founder Gordon Moore, who coined ‘Moore’s Law’ in the 1960s, asserting that integrated circuits double in complexity roughly every two years.
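
The doubling Moore described can be written as a simple exponential: transistor count ≈ starting count × 2^(years elapsed ÷ 2). The Python sketch below applies that formula to an assumed starting count of a few thousand transistors, roughly the scale of an early microprocessor, purely for illustration:

```python
# Moore's Law as a doubling every two years: count = start * 2 ** (years / 2).
# The starting count and time spans are assumed example values.

def projected_transistors(start_count, years, doubling_period_years=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period_years)

start = 2_300   # assumed starting point, on the order of an early-1970s chip
for years in (0, 10, 20, 30, 40):
    print(f"After {years:2d} years: ~{projected_transistors(start, years):,.0f} transistors")
```

Each additional decade multiplies the count by roughly a factor of 32, which matches the jump from thousands of transistors to millions and then billions described below.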

Integrated circuits have become increasingly complex.

This doubling in complexity is borne out by the generational movement of the technology, which saw SSI’s tens of transistors increase to MSI’s hundreds, then to LSI’s tens of thousands, and finally to VLSI’s millions. The next frontier that integrated circuits promise to breach is that of ULSI, or Ultra-Large Scale Integration, which entails the deployment of billions of microscopic transistors and has already been heralded by the Intel project codenamed Tukwila, which is understood to employ over two billion transistors.

If more proof were needed of the persisting veracity of Moore’s dictum, we have only to look at the modern-day integrated circuit, which is faster, smaller and more ubiquitous than ever. As of 2008, the semiconductor industry produced more than 267 billion chips a year, a figure that was expected to rise to 330 billion by the end of 2012.