Magnetic mirrors enable new technologies by reflecting light in uncanny ways

Artist's impression of a comparison between a magnetic mirror with cube shaped resonators (left) and a standard metallic mirror (right). The incoming and outgoing electric field of light (shown as alternating red and white bands) illustrates that the magnetic mirror retains light's original signature while a standard metallic mirror reverses it upon reflection. Credit: S. Liu et al.

As in Alice’s journey through the looking-glass to Wonderland, mirrors in the real world can sometimes behave in surprising and unexpected ways, including a new class of mirror that works like no other.

As reported today in The Optical Society’s (OSA) new journal Optica, scientists have demonstrated, for the first time, a new type of mirror that forgoes a familiar shiny metallic surface and instead reflects infrared light by using an unusual magnetic property of a non-metallic metamaterial.

By placing nanoscale antennas at or very near the surface of these so-called “magnetic mirrors,” scientists are able to capture and harness electromagnetic radiation in ways that have tantalizing potential in new classes of chemical sensors, solar cells, lasers, and other optoelectronic devices.

“We have achieved a new milestone in magnetic mirror technology by experimentally demonstrating this remarkable behavior of light at infrared wavelengths. Our breakthrough comes from using a specially engineered, non-metallic surface studded with nanoscale resonators,” said Michael Sinclair, co-author on the Optica paper and a scientist at Sandia National Laboratories in Albuquerque, New Mexico, USA, who co-led a research team with fellow author and Sandia scientist Igal Brener.

These nanoscale cube-shaped resonators, based on the element tellurium, are each considerably smaller than the width of a human hair and even tinier than the wavelengths of infrared light, which is essential to achieve magnetic-mirror behavior at these incredibly short wavelengths.

“The size and shape of the resonators are critical,” explained Sinclair, “as are their magnetic and electrical properties, all of which allow them to interact uniquely with light, scattering it across a specific range of wavelengths to produce a magnetic mirror effect.”


Early Magnetic Mirror Designs

Conventional mirrors reflect light by interacting with the electrical component of electromagnetic radiation. Because of this, however, they do more than reverse the image; they also reverse light’s electrical field. Though this has no impact on the human eye, it does have major implications in physics, especially at the point of reflection where the opposite incoming and outgoing electrical fields produce a canceling effect. This temporary squelching of light’s electrical properties prevents components like nanoscale antennas and quantum dots from interacting with light at the mirror’s surface.

A magnetic mirror, in contrast, reflects light by interacting with its magnetic field, preserving its original electrical properties. “A magnetic mirror, therefore, produces a very strong electric field at the mirror surface, enabling maximum absorption of the electromagnetic wave energy and paving the way for exciting new applications,” said Brener.

Unlike silver and other metals, however, there is no natural material that reflects light magnetically. Magnetic fields can reflect and even bottle-up charged particles like electrons and protons. But photons, which have no charge, pass through freely.

“Nature simply doesn’t provide a way to magnetically reflect light,” explained Brener. Scientists, therefore, are developing metamaterials (materials not found in nature, engineered with specific properties) that are able to produce the magnetic-mirror effect.

Initially, this could only be achieved at long microwave wavelengths, which would enable only a few applications, such as microwave antennas.

More recently, other researchers have achieved limited success at shorter wavelengths using “fish-scale” shaped metallic components. These designs, however, experienced considerable loss of signal, as well as an uneven response due to their particular shapes.

Mirrors Without Metals

To overcome these limitations, the team developed a specially engineered two-dimensional array of non-metallic dielectric resonators — nanoscale structures that strongly interact with the magnetic component of incoming light. These resonators have a number of important advantages over the earlier designs. First, the dielectric material they use, tellurium, has much lower signal loss than do metals, making the new design much more reflective at infrared wavelengths and creating a much stronger electrical field at the mirror’s surface. Second, the nanoscale resonators can be manufactured using standard deposition-lithography and etching processes, which are already widely used in industry.

The reflective properties of the resonators emerge because they behave, in some respects, like artificial atoms, absorbing and then reemitting photons. Atoms naturally do this by absorbing photons with their outer electrons and then reemitting the photons in random directions. This is how molecules in the atmosphere scatter specific wavelengths of light, causing the sky to appear blue during the day and red at sunrise and sunset.

The metamaterials in the resonators achieve a similar effect, but absorb and reemit photons without reversing their electric fields.


Proof of the Process

Confirming that the team’s design was actually behaving like a magnetic mirror required exquisite measurements of how the light waves overlap as they pass each other coming in and reflecting off of the mirror surface. Since normal mirrors reverse the phase of light upon reflection, evidence that the phase signature of the wave was not reversed would be the “smoking gun” that the sample was behaving as a true magnetic mirror.

To make this detection, the Sandia team used a technique called time-domain spectroscopy, which has been widely used to measure phase at longer terahertz wavelengths. According to the researchers, only a few groups in the world have demonstrated this technique at shorter wavelengths (less than 10 microns). The power of this technique is that it can map both the amplitude and phase information of light’s electric field.

“Our results clearly indicated that there was no phase reversal of the light,” remarked Sheng Liu, Sandia postdoctoral associate and lead author on the Optica paper. “This was the ultimate demonstration that this patterned surface behaves like an optical magnetic mirror.”

Next steps

Looking to the future, the researchers will investigate other materials to demonstrate magnetic mirror behavior at even shorter, optical wavelengths, where extremely broad applications can be found. “If efficient magnetic mirrors could be scaled to even shorter wavelengths, then they could enable smaller photodetectors, solar cells, and possibly lasers,” Liu concluded.

Story Source:

The above story is based on materials provided by The Optical Society. Note: Materials may be edited for content and length.

Journal Reference:

  1. Sheng Liu, Michael B. Sinclair, Thomas S. Mahony, Young Chul Jun, Salvatore Campione, James Ginn, Daniel A. Bender, Joel R. Wendt, Jon F. Ihlefeld, Paul G. Clem, Jeremy B. Wright, Igal Brener. Optical magnetic mirrors without metals. Optica, 2014; 1 (4): 250. DOI: 10.1364/OPTICA.1.000250

Ultra-fast charging batteries that can be 70% recharged in just two minutes

NTU Assoc Prof Chen holding the ultrafast rechargeable batteries in his right hand, with the battery test station to his left.
Credit: Image courtesy of Nanyang Technological University

Scientists from Nanyang Technological University (NTU Singapore) have developed a new battery that can be recharged up to 70 per cent in only 2 minutes. The battery will also have a longer lifespan of over 20 years.

Expected to be the next big thing in battery technology, this breakthrough has wide-ranging implications for many industries, especially for electric vehicles, which are currently hampered by recharge times of over 4 hours and the limited lifespan of batteries.

This next generation of lithium-ion batteries will enable electric vehicles to charge 20 times faster than the current technology. With it, electric vehicles will also be able to do away with frequent battery replacements. The new battery will be able to endure more than 10,000 charging cycles — 20 times more than the current 500 cycles of today’s batteries.
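For a rough sense of scale, the figures above can be turned into numbers. This is an illustrative back-of-the-envelope sketch, not a calculation from the paper; it assumes a constant charging current:

```python
# Charging 70% of capacity in 2 minutes implies an effective rate of
# about 21C, i.e. a current 21 times the one that would fill the
# cell in exactly one hour (illustrative only).
fraction_charged = 0.70
charge_time_hours = 2 / 60           # 2 minutes expressed in hours

c_rate = fraction_charged / charge_time_hours
print(f"Effective charge rate: {c_rate:.0f}C")

# Cycle-life comparison quoted in the text
new_cycles, old_cycles = 10_000, 500
print(f"Lifespan improvement: {new_cycles // old_cycles}x")
```

A rate above 20C is far beyond what graphite anodes tolerate, which is why replacing the anode material matters here.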

NTU Singapore’s scientists replaced the traditional graphite used for the anode (negative pole) in lithium-ion batteries with a new gel material made from titanium dioxide, an abundant, cheap and safe material found in soil. It is commonly used as a food additive or in sunscreen lotions to absorb harmful ultraviolet rays.

Titanium dioxide particles are naturally spherical; the NTU Singapore team developed a simple method to turn them into tiny nanotubes a thousand times thinner than the diameter of a human hair.

This nanostructure is what helps speed up the chemical reactions taking place in the new battery, allowing for superfast charging.

Invented by Associate Professor Chen Xiaodong from the School of Materials Science and Engineering at NTU Singapore, the science behind the formation of the new titanium dioxide gel was published in the latest issue of Advanced Materials, a leading international scientific journal in materials science.

NTU professor Rachid Yazami, who was the co-inventor of the lithium-graphite anode 34 years ago that is used in most lithium-ion batteries today, said Prof Chen’s invention is the next big leap in battery technology.

“While the cost of lithium-ion batteries has been significantly reduced and its performance improved since Sony commercialised it in 1991, the market is fast expanding towards new applications in electric mobility and energy storage,” said Prof Yazami.

“There is still room for improvement and one such key area is the power density — how much power can be stored in a certain amount of space — which directly relates to the fast charge ability. Ideally, the charge time for batteries in electric vehicles should be less than 15 minutes, which Prof Chen’s nanostructured anode has proven to do.”

Prof Yazami, who is Prof Chen’s colleague at NTU Singapore, is not part of this research project and is currently developing new types of batteries for electric vehicle applications at the Energy Research Institute at NTU (ERI@N).


Commercialisation of technology

Moving forward, Prof Chen’s research team will be applying for a Proof-of-Concept grant to build a large-scale battery prototype. The patented technology has already attracted interest from the industry.

The technology is currently being licensed to a company and Prof Chen expects that the new generation of fast-charging batteries will hit the market in two years’ time. It holds a lot of potential in overcoming the longstanding power issues related to electro-mobility.

“With our nanotechnology, electric cars would be able to increase their range dramatically with just five minutes of charging, which is on par with the time needed to pump petrol for current cars,” added Prof Chen.

“Equally important, we can now drastically cut down the waste generated by disposed batteries, since our batteries last ten times longer than the current generation of lithium-ion batteries.”

The long life of the new battery also means drivers save on the cost of battery replacements, which could cost over US$5,000 each.

Easy to manufacture

According to Frost & Sullivan, a leading growth-consulting firm, the global market of rechargeable lithium-ion batteries is projected to be worth US$23.4 billion in 2016.

Lithium-ion batteries usually use additives to bind the electrode materials together, which affects the speed at which electrons and ions can transfer in and out of the batteries.

However, Prof Chen’s new cross-linked titanium dioxide nanotube-based electrodes eliminate the need for these additives and can pack more energy into the same amount of space.

“Manufacturing this new nanotube gel is very easy,” Prof Chen added. “Titanium dioxide and sodium hydroxide are mixed together and stirred under a certain temperature. Battery manufacturers will find it easy to integrate our new gel into their current production processes.”

This battery research project took the team of four NTU Singapore scientists three years to complete and is funded by Singapore’s National Research Foundation.

Last year, Prof Yazami was awarded the Draper Prize by the National Academy of Engineering for his ground-breaking work in developing the lithium-ion battery with three other scientists.

Story Source:

The above story is based on materials provided by Nanyang Technological University. Note: Materials may be edited for content and length.

Journal Reference:

  1. Yuxin Tang, Yanyan Zhang, Jiyang Deng, Jiaqi Wei, Hong Le Tam, Bevita Kallupalathinkal Chandran, Zhili Dong, Zhong Chen, Xiaodong Chen. Nanotubes: Mechanical Force-Driven Growth of Elongated Bending TiO2-based Nanotubular Materials for Ultrafast Rechargeable Lithium Ion Batteries (Adv. Mater. 35/2014). Advanced Materials, 2014; 26 (35): 6046. DOI: 10.1002/adma.201470238

Smallest Nanoantennas For High-speed Data Networks

Nano dipole antennas under the microscope: The colors reflect the different transmission frequencies.
Credit: Photo by LTI

More than 120 years after the discovery of the electromagnetic character of radio waves by Heinrich Hertz, wireless data transmission dominates information technology. Higher and higher radio frequencies are applied to transmit more data within shorter periods of time. Some years ago, scientists found that light waves might also be used for radio transmission. So far, however, manufacture of the small antennas has required enormous effort. KIT scientists have now succeeded for the first time in reproducibly manufacturing the smallest optical nanoantennas from gold.

In 1887, Heinrich Hertz discovered electromagnetic waves at the former Technical College of Karlsruhe, the predecessor of Universität Karlsruhe (TH). Specific and directed generation of electromagnetic radiation allows for the transmission of information from a point A to a remote point B. The key component in this transmission is a dipole antenna on both the transmitting and the receiving side. Today, this technology is applied in many areas of everyday life, for instance, in mobile radio communication or satellite reception of broadcasting programs. Communication between the transmitter and receiver reaches its highest efficiency if the total length of the dipole antennas corresponds to about half the wavelength of the electromagnetic wave.

Radio transmission by high-frequency electromagnetic light waves in the frequency range of several 100,000 gigahertz (500,000 GHz correspond to yellow light of 600 nm wavelength) requires minute antennas that are not longer than half the wavelength of light, i.e. 350 nm at the maximum (1 nm = 1 millionth of a millimeter). Controlled manufacture of such optical transmission antennas on the nanoscale so far has been very challenging worldwide, because such small structures cannot be produced easily by optical exposure methods for physical reasons, i.e. due to the wave character of the light. To reach the precision required for the manufacture of gold antennas that are smaller than 100 nm, the scientists working in the “Nanoscale Science” DFG-Heisenberg Group at the KIT Light Technology Institute (LTI) used an electron beam process, the so-called electron beam lithography. The results were published recently in the journal Nanotechnology (Nanotechnology 20 (2009) 425203).

These gold antennas act physically like radio antennas. The latter, however, are about 10 million times larger, with a length of about 1 m. Hence, the frequency received by nanoantennas is millions of times higher than radio frequencies, i.e. several 100,000 GHz rather than 100 MHz.
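The half-wavelength rule and the frequency comparison above are easy to check numerically. A quick sketch using the figures quoted in the text:

```python
# Relate wavelength, frequency, and ideal dipole length for the
# 600 nm light mentioned in the text.
c = 3.0e8                        # speed of light, m/s
wavelength = 600e-9              # 600 nm

frequency = c / wavelength       # about 5e14 Hz = 500,000 GHz
dipole_length = wavelength / 2   # ideal half-wave dipole: 300 nm

radio_frequency = 100e6          # a typical 100 MHz radio signal
ratio = frequency / radio_frequency

print(f"{frequency / 1e9:,.0f} GHz")
print(f"Half-wave dipole: {dipole_length * 1e9:.0f} nm")
print(f"{ratio:,.0f} times the radio frequency")
```

The same arithmetic shows why a 1 m radio antenna and a sub-350 nm nanoantenna obey the same physics at vastly different scales.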

These nanoantennas are expected to transmit information at extremely high data rates, because the high frequency of the waves allows for extremely rapid modulation of the signal. For the future of wireless data transmission, this means acceleration by a factor of 10,000 at reduced energy consumption. Hence, nanoantennas are considered a major basis of new optical high-speed data networks. A positive side effect: light in the range of 400 to 1000 nm is not hazardous to people, animals, or plants.

In the future, nanoantennas from Karlsruhe may not only be used for information transmission, but also as tools for optical microscopy: “With the help of these small nano light emitters, we can study individual biomolecules, which has not been possible so far,” says Dr. Hans-Jürgen Eisler, who heads the DFG Heisenberg group at the Light Technology Institute. Moreover, the nanoantennas may serve as tools to characterize nanostructures in semiconductors, sensor structures, and integrated circuits. The reason is that nanoantennas capture light efficiently and can then act as emitters themselves, radiating light quanta (photons).

The LTI scientists are presently also working on the specific and efficient capture of visible light by means of these antennas and on focusing this light onto spots of a few tens of nanometers, the objective being, for example, the optimization of photovoltaic modules.

Story Source:

The above story is based on materials provided by Helmholtz Association of German Research Centres. Note: Materials may be edited for content and length.

What is a Computer Chip?

Computer chips are one of the basic components of most electronic devices.

A computer chip is a small electronic circuit, also known as an integrated circuit, which is one of the basic components of most kinds of electronic devices, especially computers. Computer chips are small and are made of a semiconductor, usually silicon, on which several tiny components, including transistors, are embedded and used to transmit electronic data signals. They became popular in the latter half of the 20th century because of their small size, low cost, high performance, and ease of production.

The modern computer chip saw its beginning in the 1950s with two researchers who were not working together but developed similar chips. The first was developed at Texas Instruments by Jack Kilby in 1958, and the second was developed at Fairchild Semiconductor by Robert Noyce in 1959. These first computer chips used relatively few transistors, usually around ten, and were known as small-scale integration chips. As the century went on, the number of transistors that could be placed on a computer chip increased, as did their power, with the development of medium-scale and large-scale integration computer chips. The latter could contain thousands of tiny transistors and led to the first computer microprocessors.

There are several basic classifications of computer chips, including analog, digital and mixed signal varieties. These different classifications of computer chips determine how they transmit signals and handle power. Their size and efficiency are also dependent upon their classification, and the digital computer chip is the smallest, most efficient, most powerful and most widely used, transmitting data signals as a combination of ones and zeros.

Robert Noyce was one of the first developers of the modern computer chip.

Today, large-scale integration chips can actually contain millions of transistors, which is why computers have become smaller and more powerful than ever. Not only this, but computer chips are used in just about every electronic application including home appliances, cell phones, transportation and just about every aspect of modern living. It has been posited that the invention of the computer chip has been one of the most important events in human history. The future of the computer chip will include smaller, faster and even more powerful integrated circuits capable of doing amazing things, even by today’s standards.

Source / Courtesy : WiseGeek

What Is a Transistor?

A transistor is a semiconductor device, differentiated from a vacuum tube primarily by its use of a solid, non-moving part to pass a charge. Transistors are crucial components in virtually every piece of modern electronics, and are considered by many to be the most important invention of the modern age (as well as a herald of the Information Age).

The development of the transistor grew directly out of huge advances in diode technology during World War II. In 1947, scientists at Bell Laboratories unveiled the first functional model after a number of false starts and technological stumbling blocks.

The first important use of the transistor was in hearing aids, by military contractor Raytheon, inventors of the microwave oven and producer of many widely-used missiles, including the Sidewinder and Patriot missiles.

The first transistor radio, built with Texas Instruments transistors, was released in 1954, and by the beginning of the 1960s, these radios had become a mainstay of the worldwide electronics market. Also in the 1960s, transistors were integrated into silicon chips, laying the groundwork for the technology that would eventually allow personal computers to become a reality. In 1956, William Shockley, Walter Brattain, and John Bardeen won the Nobel Prize in Physics for their development of the transistor.

The first important use of the transistor was in hearing aids.

A common type is the bipolar junction transistor, which consists of three layers of semiconductor material: two that have extra electrons, and one that has gaps in it. The two with extra electrons (N-type) sandwich the one with gaps (P-type). This configuration allows the transistor to act as a switch, closing and opening rapidly like an electronic gate and allowing current to pass at a determined rate. If it is not shielded from light, light may be used to open or close the gate, in which case it is referred to as a phototransistor, functioning as a highly sensitive photodiode.

Another major type is the field-effect transistor, which consists either entirely of N-type or entirely of P-type semiconductor material, with the current through it controlled by the amount of voltage applied to it.

Source / Courtesy : WiseGeek

Here’s why I implanted an NFC chip in my hand

Courtesy: Connectedly | Credit: Robert J Nelson

I have recently taken a deeper step into the connected world. A step that some will describe as interesting, and that some will describe as crazy. To be honest, as happy as I am since taking this step, I have to admit that I fall on both the interesting and crazy sides myself. Before getting any further, I should mention that the “step” I took was implanting an NFC chip in my hand.

The chip was implanted in my left hand and sits between my thumb and pointer finger. Surprisingly — or thanks to the internet not so surprisingly — a kit that included all the necessary gear was easy to purchase. I made the purchase through a company called Dangerous Things, and for those curious — I paid $99 for a 13.56MHz ISO14443A & NFC Type 2 NTAG216 RFID chipset that is encased in a 2x12mm cylindrical biocompatible glass casing. Essentially that means the chip is safe to implant, and that it will work with all NFC compliant reader/writer devices. That includes USB devices as well as NFC capable mobile phones. This chip is pre-loaded in an injection syringe assembly, and while I wouldn’t trust (or suggest you trust) just anyone to do the procedure, I will say the process was quick, easy — and despite the large size of the needle — relatively painless.
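For the technically curious: an NTAG216 stores its data as NDEF records. As an illustrative sketch of my own (not from the Dangerous Things kit or its documentation), here is how the raw bytes of a single short NDEF text record are laid out:

```python
def ndef_text_record(text: str, lang: str = "en") -> bytes:
    """Build a single short NDEF text record (illustrative sketch).

    Header byte 0xD1 sets the MB, ME, and SR flags with TNF=0x01
    (NFC Forum well-known type); record type 'T' marks a text record.
    This only handles short records with payloads under 256 bytes.
    """
    # Payload: status byte (language-code length), language code, UTF-8 text
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    if len(payload) > 255:
        raise ValueError("short record limited to 255 payload bytes")
    header = bytes([0xD1, 0x01, len(payload)]) + b"T"
    return header + payload

record = ndef_text_record("hello")
print(record.hex())  # d101085402656e68656c6c6f
```

Tools like the phone apps mentioned below write structures like this to the chip for you; nobody programs the tag byte-by-byte in practice.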


And for reference, aside from the chip and injection syringe, the remaining items in the kit are medical related (gloves and such). Another personal reason for choosing Dangerous Things was the background on the company. The founder, Amal Graafstra, has been doing this for roughly a decade, and even nicer for those looking to get this done — there is solid documentation offered which may really help make this a reality for some.

On becoming a cyborg

I should make it clear that I am not trying to become a cyborg or anything like that. For me, getting this implant came down to having a strong interest in technology and the connected space, and more to the point is that I am someone who likes seeing technology integrated into life. Or in this case, my body. Along with not considering myself a cyborg, I do not feel comfortable using another common term here, biohacker. I basically think of this implant as another form of wearable, albeit, a semi-permanent form of wearable. Along with my interest in technology, I also have a strong interest in body modification and tattoos.

For me, getting this chip implanted seemed a good way to bridge those interests. And while I realize getting an NFC chip implanted in your hand is not common, I have to mention how hard it was to find people in the piercing and body modification industry (at least locally) that were interested in performing this procedure. Of course, that could also be due to their lack of interest in technology, or maybe due to them not knowing me personally. Putting that aside, how about we get more into my thought process in the lead up to the implant.

Leading up to the implant


It would be hard for me to recommend this procedure to many people. In fact, I would describe this as a procedure that should be given considerable thought. I had been reading about similar procedures for several years, and had been strongly considering it myself for a little more than a year. In the past, I had put it off due to not wanting to do the research. There was also the matter of getting the necessary hardware and the somewhat limited use potential. Well, I finally did the research, and that led me to Dangerous Things — who, as mentioned earlier, offer a kit with everything you need. As for the limited use potential, we’ll get into that more in a bit, but I can say it will come down to how much you are willing to spend, and to how much you are willing and able to build.

Implanting the Chip

I mentioned this isn’t a procedure that just anyone should get done, but actually finding someone willing to implant the chip took some effort. As I found, not everyone is going to be willing to implant an NFC chip in your hand. In my case I ended up getting the procedure done by a friend of a friend. And to clarify, that friend of a friend is a medical professional. They had never implanted an NFC chip, however they have been in the medical profession for many years. Furthermore, said medical professional was also very happy to read the documentation provided by the folks at Dangerous Things. To that point, that documentation also made things much more comfortable for me. Bottom line here, once you come to a decision to get this done, make sure you have someone you can really trust.


Assuming you find someone willing and able, the actual process of implanting the chip is easy. As I said earlier, it was quick, easy, and relatively painless. Setting up the work area and cleaning my hand took much longer than the actual needle stick. The needle is large and the person getting the chip implanted can expect to feel the initial stick, a push (to get the needle in deeper), a bit of a pull back (of the needle), and then the deposit of the chip which is followed by the removal of the needle. And as you may have noticed in some of the images, there was a bit of blood. Overall pretty simple — that is provided you found someone comfortable and capable of doing the procedure.



There really isn’t much to the healing process. The needle is likely the biggest needle you’ve seen or been stuck with, but it is just that — a needle stick. That means there isn’t any cutting or stitches involved. Essentially, you’ll just have a red mark where the needle was inserted. This mark will heal up, and in my case, a month later I can only see a faint mark on my skin. You will also be able to feel the chip under your skin.

Aside from making sure your hand is clean before the stick and the site is kept clean afterward during the healing process, the main thing you want to keep in mind is to take it easy during the first few weeks. The folks at Dangerous Things suggest not messing with the tag, or pressing on the tag, and light use of your hand for the first two weeks. I followed those suggestions myself and things healed nicely and without issue.


That brings another point to consider. I’ve already mentioned putting serious thought in before getting this done, and also making sure you find someone you trust to do the procedure, however you should also consider what happens if things go bad. In my case, having a medical professional (friend of a friend) do the procedure would not have mattered much if it got infected afterward due to my lack of care. Basically, had things gone bad and I needed the chip removed — that could have created a rather awkward situation (with uncomfortable questions) if I had to go to a doctor’s office or emergency room.

Putting the Chip to Use

Up until this point I have been using my chip to secure my phone. Pretty boring and basic, but I had a reason for that. First though, I am currently using a 2013 Moto X, and have the chip programmed as a Motorola Skip. I also tested, and would suggest those without a Moto X use the NFC Secure Unlock app which is available from the Google Play Store.


I decided to keep the use simple and limited to securing my phone initially to keep costs down. This meant I could buy the chip kit (for $99) and spend no additional money until I knew there weren’t going to be any issues. Now that the chip has been in and has fully healed, I am exploring other options. I’ve been considering a few options including opening my garage door, unlocking my front door, or unlocking my car. The catch here is that I am happy having my phone secured using the chip, and will likely implant another chip in my right hand for whatever I want to control (open or unlock) next. The fact that I am willing to get another chip implanted should speak to how easy the procedure was.

Finally…The Why

Actually, the why is the hardest part to answer. And to be perfectly honest I am not sure I have a good answer. My best response would be to satisfy my curiosity. I also wouldn’t suggest anyone get an implant just for this, but showing people how I can unlock my phone has turned into somewhat of a fun trick.


We don’t condone or encourage anyone to get an NFC implant, and take no responsibility if you do. Be sure to think it through and know what you’re getting into should you choose to do so.

Courtesy: Connectedly || Credit: Robert J Nelson

Nanoparticles can act like liquid on the outside, crystal on the inside

A surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

The research team behind the finding, led by MIT professor Ju Li, says the work could have important implications for the design of components in nanotechnology, such as metal contacts for molecular electronic circuits.

The results, published in the journal Nature Materials, come from a combination of laboratory analysis and computer modeling, by an international team that included researchers in China, Japan, and Pittsburgh, as well as at MIT.

The experiments were conducted at room temperature, with particles of pure silver less than 10 nanometers across — less than one-thousandth of the width of a human hair. But the results should apply to many different metals, says Li, senior author of the paper and the BEA Professor of Nuclear Science and Engineering.

Silver has a relatively high melting point — 962 degrees Celsius, or 1763 degrees Fahrenheit — so observation of any liquidlike behavior in its nanoparticles was “quite unexpected,” Li says. Hints of the new phenomenon had been seen in earlier work with tin, which has a much lower melting point, he says.

The use of nanoparticles in applications ranging from electronics to pharmaceuticals is a lively area of research; generally, Li says, these researchers “want to form shapes, and they want these shapes to be stable, in many cases over a period of years.” So the discovery of these deformations reveals a potentially serious barrier to many such applications: For example, if gold or silver nanoligaments are used in electronic circuits, these deformations could quickly cause electrical connections to fail.

Only skin deep

The researchers’ detailed imaging with a transmission electron microscope and atomistic modeling revealed that while the exterior of the metal nanoparticles appears to move like a liquid, only the outermost layers — one or two atoms thick — actually move at any given time. As these outer layers of atoms move across the surface and redeposit elsewhere, they give the impression of much greater movement — but inside each particle, the atoms stay perfectly lined up, like bricks in a wall.

“The interior is crystalline, so the only mobile atoms are the first one or two monolayers,” Li says. “Everywhere except the first two layers is crystalline.”

By contrast, if the droplets were to melt to a liquid state, the orderliness of the crystal structure would be eliminated entirely — like a wall tumbling into a heap of bricks.
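A back-of-envelope estimate (ours, not a figure from the paper) shows why a shell only one or two atoms thick can still look like substantial motion on a sub-10-nanometer particle. Simple sphere geometry gives the volume fraction occupied by the mobile surface layers; the 0.48 nm shell thickness below assumes roughly two silver monolayers of about 0.24 nm each.

```python
def mobile_fraction(radius_nm: float, shell_nm: float) -> float:
    """Fraction of a sphere's volume lying in an outer shell of given thickness."""
    core = max(radius_nm - shell_nm, 0.0)
    return 1.0 - (core / radius_nm) ** 3

# Assumed values: a 10 nm particle (5 nm radius) and a ~0.48 nm shell
# (about two silver monolayers).
frac = mobile_fraction(5.0, 0.48)
print(f"{frac:.0%} of the particle's volume sits in the mobile surface shell")
```

For a 10 nm particle, roughly a quarter of the atoms sit in those two monolayers, which is why the whole particle appears to wobble even though the interior never moves.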

Technically, the particles’ deformation is pseudoelastic, meaning that the material returns to its original shape after the stresses are removed — like a squeezed rubber ball — as opposed to plasticity, as in a deformable lump of clay that retains a new shape.

The phenomenon of plasticity by interfacial diffusion was first proposed by Robert L. Coble, a professor of ceramic engineering at MIT, and is known as “Coble creep.” “What we saw is aptly called Coble pseudoelasticity,” Li says.

Now that the phenomenon has been understood, researchers working on nanocircuits or other nanodevices can quite easily compensate for it, Li says. If the nanoparticles are protected by even a vanishingly thin layer of oxide, the liquidlike behavior is almost completely eliminated, making stable circuits possible.

Possible benefits

On the other hand, for some applications this phenomenon might be useful: For example, in circuits where electrical contacts need to withstand rotational reconfiguration, particles designed to maximize this effect might prove useful, using noble metals or a reducing atmosphere, where the formation of an oxide layer is destabilized, Li says.

The new finding flies in the face of expectations — in part, because of a well-understood relationship, in most materials, in which mechanical strength increases as size is reduced.

“In general, the smaller the size, the higher the strength,” Li says, but “at very small sizes, a material component can get very much weaker. The transition from ‘smaller is stronger’ to ‘smaller is much weaker’ can be very sharp.”

That crossover, he says, takes place at about 10 nanometers at room temperature — a size that microchip manufacturers are approaching as circuits shrink. When this threshold is reached, Li says, it causes “a very precipitous drop” in a nanocomponent’s strength.

The findings could also help explain a number of anomalous results seen in other research on small particles, Li says.

“The … work reported in this paper is first-class,” says Horacio Espinosa, a professor of manufacturing and entrepreneurship at Northwestern University who was not involved in this research. “These are very difficult experiments, which revealed for the first time shape recovery of silver nanocrystals in the absence of dislocation. … Li’s interpretation of the experiments using atomistic modeling illustrates recent progress in comparing experiments and simulations as it relates to spatial and time scales. This has implications to many aspects of mechanics of materials, so I expect this work to be highly cited.”

The research team included Jun Sun, Longbing He, Tao Xu, Hengchang Bi, and Litao Sun, all of Southeast University in Nanjing, China; Yu-Chieh Lo of MIT and Kyoto University; Ze Zhang of Zhejiang University; and Scott Mao of the University of Pittsburgh. It was supported by the National Basic Research Program of China; the National Natural Science Foundation of China; the Chinese Ministry of Education; the National Science Foundation of Jiangsu Province, China; and the U.S. National Science Foundation.

Story Source:

The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by David L. Chandler. Note: Materials may be edited for content and length.

Journal Reference:

  1. Jun Sun, Longbing He, Yu-Chieh Lo, Tao Xu, Hengchang Bi, Litao Sun, Ze Zhang, Scott X. Mao, Ju Li. Liquid-like pseudoelasticity of sub-10-nm crystalline silver particles. Nature Materials, 2014; DOI: 10.1038/nmat4105


New records set for silicon quantum computing

Artist’s impression of an electron wave function (blue), confined in a crystal of nuclear-spin-free silicon-28 atoms (black), controlled by a nanofabricated metal gate (silver).

Two research teams working in the same laboratories at UNSW Australia have found distinct solutions to a critical challenge that has held back the realisation of super powerful quantum computers.

The teams created two types of quantum bits, or “qubits” — the building blocks for quantum computers — that each process quantum data with an accuracy above 99%. The two findings have been published simultaneously today in the journal Nature Nanotechnology.

“For quantum computing to become a reality we need to operate the bits with very low error rates,” says Scientia Professor Andrew Dzurak, who is Director of the Australian National Fabrication Facility at UNSW, where the devices were made.

“We’ve now come up with two parallel pathways for building a quantum computer in silicon, each of which shows this super accuracy,” adds Associate Professor Andrea Morello from UNSW’s School of Electrical Engineering and Telecommunications.

The UNSW teams, which are also affiliated with the ARC Centre of Excellence for Quantum Computation & Communication Technology, were first in the world to demonstrate single-atom spin qubits in silicon, reported in Nature in 2012 and 2013.

Now the team led by Dzurak has discovered a way to create an “artificial atom” qubit with a device remarkably similar to the silicon transistors used in consumer electronics, known as MOSFETs. Post-doctoral researcher Menno Veldhorst, lead author on the paper reporting the artificial atom qubit, says, “It is really amazing that we can make such an accurate qubit using pretty much the same devices as we have in our laptops and phones.”

Meanwhile, Morello’s team has been pushing the “natural” phosphorus atom qubit to the extremes of performance. Dr Juha Muhonen, a post-doctoral researcher and lead author on the natural atom qubit paper, notes: “The phosphorus atom contains in fact two qubits: the electron, and the nucleus. With the nucleus in particular, we have achieved accuracy close to 99.99%. That means only one error for every 10,000 quantum operations.”

Dzurak explains that, “even though methods to correct errors do exist, their effectiveness is only guaranteed if the errors occur less than 1% of the time. Our experiments are among the first in solid-state, and the first-ever in silicon, to fulfill this requirement.”
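To make these fidelity figures concrete, a quick calculation (ours, not from the papers) shows how per-operation error rates compound over a sequence of gates, assuming independent errors at a fixed fidelity:

```python
# Probability that a sequence of quantum operations completes with no error,
# assuming independent errors at a fixed per-operation fidelity.
def success_probability(fidelity: float, n_ops: int) -> float:
    return fidelity ** n_ops

for fidelity in (0.99, 0.9999):
    # Expected number of operations before the first error: 1 / (1 - fidelity)
    ops_to_error = 1 / (1 - fidelity)
    print(f"fidelity {fidelity}: ~{ops_to_error:.0f} ops per error, "
          f"P(1000 ops error-free) = {success_probability(fidelity, 1000):.3f}")
```

At 99% fidelity a thousand-gate sequence almost certainly contains an error, while at 99.99% it usually completes cleanly, which is why the 1% error-correction threshold matters so much.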

The high-accuracy operations for both natural and artificial atom qubits are achieved by placing each inside a thin layer of specially purified silicon containing only the silicon-28 isotope. This isotope is perfectly non-magnetic and, unlike the isotopes in naturally occurring silicon, does not disturb the quantum bit. The purified silicon was provided through a collaboration with Professor Kohei Itoh of Keio University in Japan.

The next step for the researchers is to build pairs of highly accurate quantum bits. Large quantum computers are expected to consist of many thousands or millions of qubits and may integrate both natural and artificial atoms.

Morello’s research team also established a world-record “coherence time” for a single quantum bit held in solid state. “Coherence time is a measure of how long you can preserve quantum information before it’s lost,” Morello says. The longer the coherence time, the easier it becomes to perform long sequences of operations, and therefore more complex calculations.

The team was able to store quantum information in a phosphorus nucleus for more than 30 seconds. “Half a minute is an eternity in the quantum world. Preserving a ‘quantum superposition’ for such a long time, and inside what is basically a modified version of a normal transistor, is something that almost nobody believed possible until today,” Morello says.

“For our two groups to simultaneously obtain these dramatic results with two quite different systems is very special, in particular because we are really great mates,” adds Dzurak.


Story Source:

The above story is based on materials provided by University of New South Wales. Note: Materials may be edited for content and length.

Journal References:

  1. M. Veldhorst, J. C. C. Hwang, C. H. Yang, A. W. Leenstra, B. de Ronde, J. P. Dehollain, J. T. Muhonen, F. E. Hudson, K. M. Itoh, A. Morello, A. S. Dzurak. An addressable quantum dot qubit with fault-tolerant control-fidelity. Nature Nanotechnology, 2014; DOI: 10.1038/nnano.2014.216
  2. Juha T. Muhonen, Juan P. Dehollain, Arne Laucht, Fay E. Hudson, Rachpon Kalra, Takeharu Sekiguchi, Kohei M. Itoh, David N. Jamieson, Jeffrey C. McCallum, Andrew S. Dzurak, Andrea Morello. Storing quantum information for 30 seconds in a nanoelectronic device. Nature Nanotechnology, 2014; DOI: 10.1038/nnano.2014.211


How Do Microprocessors Work?

A microprocessor acts through a series of instructions.

Microprocessors use a number of different processes to function. Their main purpose is to process the sequences of numbers that make up a program. Each sequence gives the microprocessor an instruction, which it carries out by relaying information to other parts of the computer, enabling the actions the program requires. A microprocessor is a type of central processing unit (CPU), essentially the central brain of a computer. Physically, it takes the form of a chip mounted on a motherboard, which acts as the relay center connecting the CPU to the rest of the system.

When a microprocessor is activated, it performs a series of actions, each one defining an exact point of communication. This communication gives instructions in the form of binary code, a series of ones and zeros. The CPU then responds to the instructions by processing the code, taking the necessary actions requested by the code, and relaying to the responsible input section that the action has successfully taken place.

The first step in this process is known as the fetch action. A program supplies a series of ones and zeroes that define an exact action, and part of the sequence tells the microprocessor where to find the necessary code within the program. This is where random access memory (RAM) comes in: RAM holds the instructions long enough for the CPU to use them. When a computer does not have enough RAM, it slows down.

The next step involving the workload of a microprocessor is known as the decoding action. Each set of numbers within the sequence is responsible for a certain action. In order for the CPU to order the correct components to do their jobs, each part of the sequence of numbers must be identified and given the correct operational parameters. For example, if a user is burning a DVD, the CPU needs to communicate certain numerical values to the DVD unit that burns the disk, the hard drive which supplies the information and the video card for display of the status for the user.

Microprocessors work with the computer’s hard drive.

Execution is the next step in the function of microprocessors. Essentially, the CPU tells the computer components to do their jobs. During the execution phase, the microprocessor stays in constant contact with the components, making sure each portion of the activity is successfully completed according to the instructions gathered and sent during the previous two steps.

The final action for microprocessors involves the writeback function. This is simply the CPU writing a copy of an action’s results back to memory, either to its own registers or to the computer’s main memory, so that later instructions can use them. The writeback step is also essential to determining problematic issues when something goes wrong: if a DVD did not burn correctly, for example, the recorded results can reveal which step failed to complete.
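The four stages described above can be sketched as a toy processor loop. The instruction format and opcodes here are invented purely for illustration and do not correspond to any real instruction set:

```python
# A toy processor illustrating the four stages: fetch an instruction
# from program memory, decode it into fields, execute the operation,
# and write the result back to a register.
def run(program, registers):
    pc = 0  # program counter: which instruction to fetch next
    while pc < len(program):
        instruction = program[pc]          # fetch
        op, dest, a, b = instruction       # decode into opcode and operands
        if op == "ADD":                    # execute
            result = registers[a] + registers[b]
        elif op == "MUL":
            result = registers[a] * registers[b]
        else:
            raise ValueError(f"unknown opcode {op}")
        registers[dest] = result           # writeback
        pc += 1
    return registers

regs = run([("ADD", "r2", "r0", "r1"), ("MUL", "r3", "r2", "r2")],
           {"r0": 2, "r1": 3, "r2": 0, "r3": 0})
print(regs)  # r2 = 5, r3 = 25
```

Real CPUs pipeline these stages so several instructions are in flight at once, but the logical cycle is the same.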

What Is a Digital Computer? (Interview Question Explained)

Most computers operate using binary code and could be considered digital.

A digital computer is a machine that stores data in a numerical format and performs operations on that data using mathematical manipulation. This type of computer typically includes some sort of device to store information, some method for input and output of data, and components that allow mathematical operations to be performed on stored data. Digital computers are almost always electronic but do not necessarily need to be so.

There are two main methods of modeling the world with a computing machine. Analog computers use some physical phenomenon, such as electrical voltage, to model a different phenomenon, and perform operations by directly modifying the stored data. A digital computer, however, stores all data as numbers and performs operations on that data arithmetically. Most computers use binary numbers to store data, as the ones and zeros that make up these numbers are easily represented with simple on-off electrical states.
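The binary point can be shown in a few lines: each bit stands for a power of two, so any number reduces to a pattern of on/off states.

```python
# Encode a number as binary (each bit is one on/off electrical state),
# then rebuild it from those bits to show the underlying arithmetic.
value = 13
bits = format(value, "08b")   # eight bits, e.g. "00001101"
print(bits)

# Sum each set bit's power of two: 8 + 4 + 1 = 13.
reconstructed = sum(int(b) << i for i, b in enumerate(reversed(bits)))
print(reconstructed)  # 13
```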

Computers based on analog principles have advantages in some specialized areas, such as their ability to continuously model an equation. A digital computer, however, has the advantage of being easily programmable. This means that they can process many different sets of instructions without being physically reconfigured.

Digital computers store data in a numerical format.

The earliest digital computers date back to the 19th century. An early example is the analytical engine theorized by Charles Babbage. This machine would have stored and processed data mechanically, but the data itself would still have been digital: not a continuously varying quantity, but a series of digits represented by discrete physical states. This computer would also have been programmable, a first in computing.

Digital computing came into widespread use during the 20th century. The pressures of war led to great advances in the field, and electronic computers emerged from the Second World War. This sort of digital computer generally used arrays of vacuum tubes to store information for active use in computation. Paper or punch cards were used for longer-term storage. Keyboard input and monitors emerged later in the century.

In the early 21st century, computers rely on integrated circuits rather than vacuum tubes. They still employ active memory, long-term storage, and central processing units. Input and output devices have multiplied greatly but still serve the same basic functions.

By 2011, computers were beginning to push the limits of conventional circuitry. Circuit pathways in a digital computer can now be printed so close together that effects like electron tunneling must be taken into consideration. Work on digital optical computers, which process and store data using light and lenses, may help in overcoming this limitation.

Nanotechnology may lead to a whole new variety of mechanical computing. Data might be stored and processed digitally at the level of single molecules or small groups of molecules. An astonishing number of molecular computing elements would fit into a comparatively tiny space. This could greatly increase the speed and power of digital computers.

Circuit pathways in a digital computer can now be printed extremely close together.
Early analog computers used to take up entire rooms, as in this example from 1949.
Digital components like processors are typically more versatile than analog ones.
Inventor Charles Babbage conceived the idea of the steam-powered Difference Engine in 1822.

Source / Courtesy: WiseGeek