
Fusion reactor concept could be cheaper than coal

The UW’s current fusion experiment, HIT-SI3. It is about one-tenth the size of the power-producing dynomak concept.
Credit: Image courtesy of University of Washington

[dropcap]F[/dropcap]usion energy almost sounds too good to be true — zero greenhouse gas emissions, no long-lived radioactive waste, a nearly unlimited fuel supply.

Perhaps the biggest roadblock to adopting fusion energy is that the economics haven’t penciled out. Fusion power designs aren’t cheap enough to outperform systems that use fossil fuels such as coal and natural gas.

University of Washington engineers hope to change that. They have designed a concept for a fusion reactor that, when scaled up to the size of a large electrical power plant, would rival costs for a new coal-fired plant with similar electrical output.

The team published its reactor design and cost-analysis findings last spring and will present results Oct. 17 at the International Atomic Energy Agency’s Fusion Energy Conference in St. Petersburg, Russia.

“Right now, this design has the greatest potential of producing economical fusion power of any current concept,” said Thomas Jarboe, a UW professor of aeronautics and astronautics and an adjunct professor in physics.

The UW’s reactor, called the dynomak, started as a class project taught by Jarboe two years ago. After the class ended, Jarboe and doctoral student Derek Sutherland — who previously worked on a reactor design at the Massachusetts Institute of Technology — continued to develop and refine the concept.

The design builds on existing technology and creates a magnetic field within a closed space to hold plasma in place long enough for fusion to occur, allowing the hot plasma to react and burn. The reactor itself would be largely self-sustaining, meaning it would continuously heat the plasma to maintain thermonuclear conditions. Heat generated from the reactor would heat up a coolant that is used to spin a turbine and generate electricity, similar to how a typical power reactor works.

“This is a much more elegant solution because the medium in which you generate fusion is the medium in which you’re also driving all the current required to confine it,” Sutherland said.

There are several ways to create a magnetic field, which is crucial to keeping a fusion reactor going. The UW’s design is known as a spheromak, meaning it generates the majority of magnetic fields by driving electrical currents into the plasma itself. This reduces the amount of required materials and actually allows researchers to shrink the overall size of the reactor.

Other designs, such as the experimental fusion reactor project that’s currently being built in France — called Iter — have to be much larger than the UW’s because they rely on superconducting coils that circle around the outside of the device to provide a similar magnetic field. When compared with the fusion reactor concept in France, the UW’s is much less expensive — roughly one-tenth the cost of Iter — while producing five times the amount of energy.

The UW researchers estimated the cost of building a fusion reactor power plant using their design and compared it with the cost of building a coal power plant. They used a metric called “overnight capital costs,” which includes all costs, particularly startup infrastructure fees. A fusion power plant producing 1 gigawatt (1 billion watts) of power would cost $2.7 billion, while a coal plant of the same output would cost $2.8 billion, according to their analysis.
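Expressed per watt of generating capacity, the comparison is easy to check from the figures above; a minimal back-of-the-envelope sketch in Python:

```python
# Overnight capital cost per watt of electrical output, using the figures
# quoted in the analysis above (illustrative arithmetic only).
fusion_cost, coal_cost, output_watts = 2.7e9, 2.8e9, 1e9
print(f"dynomak plant: ${fusion_cost / output_watts:.2f} per watt")  # $2.70
print(f"coal plant:    ${coal_cost / output_watts:.2f} per watt")    # $2.80
```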

“If we do invest in this type of fusion, we could be rewarded because the commercial reactor unit already looks economical,” Sutherland said. “It’s very exciting.”

Right now, the UW’s concept is about one-tenth the size and power output of a final product, which is still years away. The researchers have successfully tested the prototype’s ability to sustain a plasma efficiently, and as they further develop and expand the size of the device they can ramp up to higher-temperature plasma and get significant fusion power output.

The team has filed patents on the reactor concept with the UW’s Center for Commercialization and plans to continue developing and scaling up its prototypes.

Other members of the UW design team include Kyle Morgan of physics; Eric Lavine, Michal Hughes, George Marklin, Chris Hansen, Brian Victor, Michael Pfaff, and Aaron Hossack of aeronautics and astronautics; Brian Nelson of electrical engineering; and Yu Kamikawa and Phillip Andrist, formerly of the UW.

The research was funded by the U.S. Department of Energy.


Story Source:

The above story is based on materials provided by University of Washington. The original article was written by Michelle Ma. Note: Materials may be edited for content and length.


Journal Reference:

  1. D.A. Sutherland, T.R. Jarboe, K.D. Morgan, M. Pfaff, E.S. Lavine, Y. Kamikawa, M. Hughes, P. Andrist, G. Marklin, B.A. Nelson. The dynomak: An advanced spheromak reactor concept with imposed-dynamo current drive and next-generation nuclear power technologies. Fusion Engineering and Design, 2014; 89 (4): 412 DOI: 10.1016/j.fusengdes.2014.03.072

Nanoparticles can act like liquid on the outside, crystal on the inside

A surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

[dropcap]A[/dropcap] surprising phenomenon has been found in metal nanoparticles: They appear, from the outside, to be liquid droplets, wobbling and readily changing shape, while their interiors retain a perfectly stable crystal configuration.

The research team behind the finding, led by MIT professor Ju Li, says the work could have important implications for the design of components in nanotechnology, such as metal contacts for molecular electronic circuits.

The results, published in the journal Nature Materials, come from a combination of laboratory analysis and computer modeling, by an international team that included researchers in China, Japan, and Pittsburgh, as well as at MIT.

The experiments were conducted at room temperature, with particles of pure silver less than 10 nanometers across — less than one-thousandth of the width of a human hair. But the results should apply to many different metals, says Li, senior author of the paper and the BEA Professor of Nuclear Science and Engineering.

Silver has a relatively high melting point — 962 degrees Celsius, or 1763 degrees Fahrenheit — so observation of any liquidlike behavior in its nanoparticles was “quite unexpected,” Li says. Hints of the new phenomenon had been seen in earlier work with tin, which has a much lower melting point, he says.

The use of nanoparticles in applications ranging from electronics to pharmaceuticals is a lively area of research; generally, Li says, these researchers “want to form shapes, and they want these shapes to be stable, in many cases over a period of years.” So the discovery of these deformations reveals a potentially serious barrier to many such applications: For example, if gold or silver nanoligaments are used in electronic circuits, these deformations could quickly cause electrical connections to fail.

Only skin deep

The researchers’ detailed imaging with a transmission electron microscope and atomistic modeling revealed that while the exterior of the metal nanoparticles appears to move like a liquid, only the outermost layers — one or two atoms thick — actually move at any given time. As these outer layers of atoms move across the surface and redeposit elsewhere, they give the impression of much greater movement — but inside each particle, the atoms stay perfectly lined up, like bricks in a wall.

“The interior is crystalline, so the only mobile atoms are the first one or two monolayers,” Li says. “Everywhere except the first two layers is crystalline.”

By contrast, if the droplets were to melt to a liquid state, the orderliness of the crystal structure would be eliminated entirely — like a wall tumbling into a heap of bricks.

Technically, the particles’ deformation is pseudoelastic, meaning that the material returns to its original shape after the stresses are removed — like a squeezed rubber ball — as opposed to plasticity, as in a deformable lump of clay that retains a new shape.

The phenomenon of plasticity by interfacial diffusion was first proposed by Robert L. Coble, a professor of ceramic engineering at MIT, and is known as “Coble creep.” “What we saw is aptly called Coble pseudoelasticity,” Li says.

Now that the phenomenon has been understood, researchers working on nanocircuits or other nanodevices can quite easily compensate for it, Li says. If the nanoparticles are protected by even a vanishingly thin layer of oxide, the liquidlike behavior is almost completely eliminated, making stable circuits possible.

Possible benefits

On the other hand, for some applications this phenomenon might be useful: For example, in circuits where electrical contacts need to withstand rotational reconfiguration, particles designed to maximize this effect might prove useful, using noble metals or a reducing atmosphere, where the formation of an oxide layer is destabilized, Li says.

The new finding flies in the face of expectations — in part, because of a well-understood relationship, in most materials, in which mechanical strength increases as size is reduced.

“In general, the smaller the size, the higher the strength,” Li says, but “at very small sizes, a material component can get very much weaker. The transition from ‘smaller is stronger’ to ‘smaller is much weaker’ can be very sharp.”

That crossover, he says, takes place at about 10 nanometers at room temperature — a size that microchip manufacturers are approaching as circuits shrink. When this threshold is reached, Li says, it causes “a very precipitous drop” in a nanocomponent’s strength.

The findings could also help explain a number of anomalous results seen in other research on small particles, Li says.

“The … work reported in this paper is first-class,” says Horacio Espinosa, a professor of manufacturing and entrepreneurship at Northwestern University who was not involved in this research. “These are very difficult experiments, which revealed for the first time shape recovery of silver nanocrystals in the absence of dislocation. … Li’s interpretation of the experiments using atomistic modeling illustrates recent progress in comparing experiments and simulations as it relates to spatial and time scales. This has implications to many aspects of mechanics of materials, so I expect this work to be highly cited.”

The research team included Jun Sun, Longbing He, Tao Xu, Hengchang Bi, and Litao Sun, all of Southeast University in Nanjing, China; Yu-Chieh Lo of MIT and Kyoto University; Ze Zhang of Zhejiang University; and Scott Mao of the University of Pittsburgh. It was supported by the National Basic Research Program of China; the National Natural Science Foundation of China; the Chinese Ministry of Education; the National Science Foundation of Jiangsu Province, China; and the U.S. National Science Foundation.


Story Source:

The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by David L. Chandler. Note: Materials may be edited for content and length.


Journal Reference:

  1. Jun Sun, Longbing He, Yu-Chieh Lo, Tao Xu, Hengchang Bi, Litao Sun, Ze Zhang, Scott X. Mao, Ju Li. Liquid-like pseudoelasticity of sub-10-nm crystalline silver particles. Nature Materials, 2014; DOI: 10.1038/nmat4105

 

New records set for silicon quantum computing

Artist impression of an electron wave function (blue), confined in a crystal of nuclear-spin-free 28-silicon atoms (black), controlled by a nanofabricated metal gate (silver). 

[dropcap]T[/dropcap]wo research teams working in the same laboratories at UNSW Australia have found distinct solutions to a critical challenge that has held back the realisation of super powerful quantum computers.

The teams created two types of quantum bits, or “qubits” — the building blocks for quantum computers — that each process quantum data with an accuracy above 99%. The two findings have been published simultaneously today in the journal Nature Nanotechnology.

“For quantum computing to become a reality we need to operate the bits with very low error rates,” says Scientia Professor Andrew Dzurak, who is Director of the Australian National Fabrication Facility at UNSW, where the devices were made.

“We’ve now come up with two parallel pathways for building a quantum computer in silicon, each of which shows this super accuracy,” adds Associate Professor Andrea Morello from UNSW’s School of Electrical Engineering and Telecommunications.

The UNSW teams, which are also affiliated with the ARC Centre of Excellence for Quantum Computation & Communication Technology, were first in the world to demonstrate single-atom spin qubits in silicon, reported in Nature in 2012 and 2013.

Now the team led by Dzurak has discovered a way to create an “artificial atom” qubit with a device remarkably similar to the silicon transistors used in consumer electronics, known as MOSFETs. Post-doctoral researcher Menno Veldhorst, lead author on the paper reporting the artificial atom qubit, says, “It is really amazing that we can make such an accurate qubit using pretty much the same devices as we have in our laptops and phones.”

Meanwhile, Morello’s team has been pushing the “natural” phosphorus atom qubit to the extremes of performance. Dr Juha Muhonen, a post-doctoral researcher and lead author on the natural atom qubit paper, notes: “The phosphorus atom contains in fact two qubits: the electron, and the nucleus. With the nucleus in particular, we have achieved accuracy close to 99.99%. That means only one error for every 10,000 quantum operations.”

Dzurak explains that, “even though methods to correct errors do exist, their effectiveness is only guaranteed if the errors occur less than 1% of the time. Our experiments are among the first in solid-state, and the first-ever in silicon, to fulfill this requirement.”

The high-accuracy operations for both the natural and artificial atom qubits are achieved by placing each inside a thin layer of specially purified silicon that contains only the silicon-28 isotope. This isotope is perfectly non-magnetic and, unlike the other isotopes present in naturally occurring silicon, does not disturb the quantum bit. The purified silicon was provided through collaboration with Professor Kohei Itoh from Keio University in Japan.

The next step for the researchers is to build pairs of highly accurate quantum bits. Large quantum computers are expected to consist of many thousands or millions of qubits and may integrate both natural and artificial atoms.

Morello’s research team also established a world-record “coherence time” for a single quantum bit held in solid state. “Coherence time is a measure of how long you can preserve quantum information before it’s lost,” Morello says. The longer the coherence time, the easier it becomes to perform long sequences of operations, and therefore more complex calculations.

The team was able to store quantum information in a phosphorus nucleus for more than 30 seconds. “Half a minute is an eternity in the quantum world. Preserving a ‘quantum superposition’ for such a long time, and inside what is basically a modified version of a normal transistor, is something that almost nobody believed possible until today,” Morello says.

“For our two groups to simultaneously obtain these dramatic results with two quite different systems is very special, in particular because we are really great mates,” adds Dzurak.

Video: http://www.youtube.com/watch?v=kq2QrTgCZ3U&feature=youtu.be


Story Source:

The above story is based on materials provided by University of New South Wales. Note: Materials may be edited for content and length.


Journal References:

  1. M. Veldhorst, J. C. C. Hwang, C. H. Yang, A. W. Leenstra, B. de Ronde, J. P. Dehollain, J. T. Muhonen, F. E. Hudson, K. M. Itoh, A. Morello, A. S. Dzurak. An addressable quantum dot qubit with fault-tolerant control-fidelity. Nature Nanotechnology, 2014; DOI: 10.1038/nnano.2014.216
  2. Juha T. Muhonen, Juan P. Dehollain, Arne Laucht, Fay E. Hudson, Rachpon Kalra, Takeharu Sekiguchi, Kohei M. Itoh, David N. Jamieson, Jeffrey C. McCallum, Andrew S. Dzurak, Andrea Morello. Storing quantum information for 30 seconds in a nanoelectronic device. Nature Nanotechnology, 2014; DOI: 10.1038/nnano.2014.211

 

How Do Microprocessors Work?

A microprocessor acts through a series of instructions.

A microprocessor’s main job is to work through the sequences of numbers that make up a program. Each sequence gives the microprocessor an instruction, which it carries out and, where needed, relays information to other parts of the computer so the program can do its work. A microprocessor is a type of central processing unit (CPU), essentially the central brain of a computer. It takes the form of a computer chip placed in a motherboard, which operates as the relay center for all the higher functions processed by the CPU.

When a microprocessor is activated, it performs a series of actions, each one defining an exact point of communication. This communication gives instructions in the form of binary code, a series of ones and zeros. The CPU then responds to the instructions by processing the code, taking the necessary actions requested by the code, and relaying to the responsible input section that the action has successfully taken place.

The first step in this process is known as the fetch action. The program supplies a series of ones and zeroes that define an exact action, and part of the sequence tells the microprocessor where in the program the necessary code is located. This is where random access memory (RAM) comes in: RAM holds the instructions so the CPU can keep them available long enough to use. When there is not enough RAM in a computer, the computer slows down.

The next step involving the workload of a microprocessor is known as the decoding action. Each set of numbers within the sequence is responsible for a certain action. In order for the CPU to order the correct components to do their jobs, each part of the sequence of numbers must be identified and given the correct operational parameters. For example, if a user is burning a DVD, the CPU needs to communicate certain numerical values to the DVD unit that burns the disk, the hard drive which supplies the information and the video card for display of the status for the user.

Microprocessors work with the computer’s hard drive.

Execution is the next step in the function of microprocessors. Essentially, the CPU tells the computer components to do their jobs. During the execution phase, the microprocessor stays in constant contact with the components, making sure each portion of the activity is successfully completed according to the instructions gathered and sent during the previous two steps.

The final step for microprocessors involves the writeback function. Here the CPU copies the results of execution back into its registers or into the computer’s main memory so that later instructions, and the program itself, can use them. Writeback also helps when something goes wrong: if an operation such as burning a DVD fails, the results recorded at this stage let error-handling software determine which step did not complete successfully.
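Taken together, the four steps form a loop that a short program can mimic. The sketch below is only illustrative: the three-field instruction format and the tiny instruction set (LOAD, ADD, STORE) are invented for the example and do not correspond to any real processor.

```python
# A toy fetch-decode-execute-writeback loop. The instruction format and the
# opcodes (LOAD, ADD, STORE) are invented purely for illustration.

program = [                          # "RAM" holding the program
    ("LOAD", "r0", 5),               # put the constant 5 into register r0
    ("LOAD", "r1", 7),               # put the constant 7 into register r1
    ("ADD", "r2", ("r0", "r1")),     # r2 = r0 + r1
    ("STORE", 0, "r2"),              # copy r2 into memory address 0
]

registers = {"r0": 0, "r1": 0, "r2": 0}
memory = [0] * 8
pc = 0                               # program counter: which instruction is next

while pc < len(program):
    instruction = program[pc]        # 1. fetch the instruction from memory
    opcode, dest, src = instruction  # 2. decode it into an opcode and operands
    if opcode == "LOAD":             # 3. execute the requested operation
        result = src
    elif opcode == "ADD":
        result = registers[src[0]] + registers[src[1]]
    elif opcode == "STORE":
        result = registers[src]
    if opcode == "STORE":            # 4. writeback: record the result in a
        memory[dest] = result        #    register or in main memory
    else:
        registers[dest] = result
    pc += 1

print(registers, memory[0])          # {'r0': 5, 'r1': 7, 'r2': 12} 12
```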

What Is a Digital Computer? (Interview Question Explained)

Most computers operate using binary code and could be considered digital.

A digital computer is a machine that stores data in a numerical format and performs operations on that data using mathematical manipulation. This type of computer typically includes some sort of device to store information, some method for input and output of data, and components that allow mathematical operations to be performed on stored data. Digital computers are almost always electronic but do not necessarily need to be so.

There are two main methods of modeling the world with a computing machine. Analog computers use some physical phenomenon, such as electrical voltage, to model a different phenomenon, and perform operations by directly modifying the stored data. A digital computer, however, stores all data as numbers and performs operations on that data arithmetically. Most computers use binary numbers to store data, as the ones and zeros that make up these numbers are easily represented with simple on-off electrical states.
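As a quick illustration of that binary encoding, the short Python sketch below shows a few ordinary numbers written as the bit patterns a digital computer would store (the eight-bit width is just an assumption for readability):

```python
# Each decimal number is stored as a pattern of bits, where 1 and 0 stand
# for the on and off electrical states described above.
for n in (5, 12, 255):
    print(n, "->", format(n, "08b"))   # 5 -> 00000101, 12 -> 00001100, 255 -> 11111111

# Arithmetic is carried out on those same bit patterns.
print(format(5 + 12, "08b"))           # 00010001, i.e. decimal 17
```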

Computers based on analog principles have advantages in some specialized areas, such as their ability to continuously model an equation. A digital computer, however, has the advantage of being easily programmable, meaning it can process many different sets of instructions without being physically reconfigured.

Digital computers store data in a numerical format.

The earliest digital computers date back to the 19th century. An early example is the analytical engine theorized by Charles Babbage. This machine would have stored and processed data mechanically, with the data held not in analog form but as a series of digits represented by discrete physical states. This computer would have been programmable, a first in computing.

Digital computing came into widespread use during the 20th century. The pressures of war led to great advances in the field, and electronic computers emerged from the Second World War. This sort of digital computer generally used arrays of vacuum tubes to store information for active use in computation. Paper or punch cards were used for longer-term storage. Keyboard input and monitors emerged later in the century.

In the early 21st century, computers rely on integrated circuits rather than vacuum tubes. They still employ active memory, long-term storage, and central processing units. Input and output devices have multiplied greatly but still serve the same basic functions.

In 2011, computers are beginning to push the limits of conventional circuitry. Circuit pathways in a digital computer can now be printed so close together that effects like electron tunneling must be taken into consideration. Work on digital optical computers, which process and store data using light and lenses, may help in overcoming this limitation.

Nanotechnology may lead to a whole new variety of mechanical computing. Data might be stored and processed digitally at the level of single molecules or small groups of molecules. An astonishing number of molecular computing elements would fit into a comparatively tiny space. This could greatly increase the speed and power of digital computers.

Circuit pathways in a digital computer can now be printed extremely close together.
Early analog computers used to take up entire rooms (photograph from 1949).
Digital components like processors are typically more versatile than analog ones.
Inventor Charles Babbage conceived the idea of the steam-powered Difference Engine in 1822.

Source / Courtesy: WiseGeek

What are Integrated Circuits (ICs)?

 

A central processing unit, a type of integrated circuit.

An integrated circuit (IC), popularly known as a silicon chip, computer chip or microchip, is a miniature electronic circuit rendered on a sliver of semiconducting material, typically silicon, but sometimes sapphire. Owing to their tiny measurements and incredible processing power — modern integrated circuits host millions of transistors on boards as small as 5 millimeters (about 0.2 inches) square and 1 millimeter (0.04 inches) thick — they are to be found in virtually every modern-day appliance and device, from credit cards, computers, and mobile phones to satellite navigation systems, traffic lights and airplanes.

Essentially, an integrated circuit is a composite of various electronic components, namely, transistors, resistors, diodes and capacitors, that are organized and connected in a way that produces a specific effect. Each unit in this ‘team’ of electronic components has a unique function within the integrated circuit. The transistor acts like a switch and determines the ‘on’ or ‘off’ status of the circuit; the resistor controls the flow of electricity; the diode permits the flow of electricity only when some condition on the circuit has been met; and finally the capacitor stores electricity prior to its release in a sustained burst.

The first integrated circuit was demonstrated by Texas Instruments’ employee Jack Kilby in 1958. This prototype, measuring about 11.1 by 1.6 millimeters, consisted of a strip of germanium and just one transistor. The advent of silicon coupled with the ever diminishing size of integrated circuits and the rapid increase in the number of transistors per millimeter meant that integrated circuits underwent massive proliferation and gave rise to the age of modern computing.

From its inception in the 1950s to the present day, integrated circuit technology has gone through various ‘generations’ that are now commonly referred to as Small Scale Integration (SSI), Medium Scale Integration (MSI), Large Scale Integration (LSI), and Very Large Scale Integration (VLSI). These progressive technological generations describe an arc in the progress of IC design that illustrates the prescience of Intel co-founder Gordon Moore, who coined ‘Moore’s Law’ in the 1960s, asserting that integrated circuits double in complexity every two years.

Integrated circuits have become increasingly complex.

This doubling in complexity is borne out by the generational movement of the technology, which saw SSI’s tens of transistors increase to MSI’s hundreds, then to LSI’s tens of thousands, and finally to VLSI’s millions. The next frontier that integrated circuits promise to breach is that of ULSI, or Ultra-Large Scale Integration, which entails the deployment of billions of microscopic transistors and has already been heralded by the Intel project codenamed Tukwila, which is understood to employ over two billion transistors.
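The arithmetic behind that progression is easy to check. The sketch below assumes a starting point of roughly 2,300 transistors in 1971 (the scale of the earliest commercial microprocessors) and simply doubles the count every two years until it reaches a Tukwila-class two billion:

```python
# Moore's Law as plain arithmetic: double the transistor count every two
# years. The 1971 starting figure (~2,300 transistors) is an assumption
# drawn from the earliest commercial microprocessors, used here only to
# show how quickly the doubling compounds.
count, year = 2300, 1971
while count < 2_000_000_000:             # a Tukwila-class chip: ~2 billion transistors
    count *= 2
    year += 2
print(year, f"{count:,} transistors")    # 2011, 2,411,724,800 transistors
```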

If more proof were needed of the persisting veracity of Moore’s dictum, we have only to look at the modern-day integrated circuit, which is faster, smaller and more ubiquitous than ever. As of 2008, the semiconductor industry was producing more than 267 billion chips a year, a figure expected to rise to about 330 billion by the end of 2012.

Easy Lesson on 1-Tier vs 2-Tier vs 3-Tier

Three-tier or multi-tier architecture is often used when describing how clients connect to database servers. But what does it all mean?

Let me try to explain this in non-technical terms (or as close to it as I can get).

 

Software 

Let’s first take a look at how a database software program (the software) works.

There are three major tiers to the software (a small code sketch of this separation follows the list):

  • User Interface (UI). This is what you see when you work with the software. You interact with it. There might be buttons, icons, text boxes, radio buttons, etc. The UI passes on clicks and typed information to the Business Logic tier.
  • Business Logic (BL). The business logic is code that is executed to accomplish something. When a user clicks a button, it will trigger the BL to run some code. The BL can send information back to the UI, so the user can see the result of clicking a button or typing something in a field. For instance, when you enter something in a cell in Excel, the BL will recalculate other cells once you hit Enter, and the UI will present the new information to you. The BL also needs to be able to store and retrieve data, and that is handled in the Database tier.
  • Database (DB). The database is where the data is stored and where the BL can retrieve it again.
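Here is the promised sketch: a minimal Python illustration of the three tiers living inside a single program. The cell names and the recalculation rule are invented for the example (loosely echoing the Excel case above); real software would be far more elaborate.

```python
# Minimal sketch: UI, BL, and DB tiers as separate layers in one program.

class Database:                        # DB tier: stores and retrieves data
    def __init__(self):
        self._cells = {}
    def save(self, cell, value):
        self._cells[cell] = value
    def load(self, cell):
        return self._cells.get(cell, 0)

class BusinessLogic:                   # BL tier: the rules and calculations
    def __init__(self, db):
        self.db = db
    def enter_value(self, cell, value):
        self.db.save(cell, value)
        # Recalculate a dependent cell, as Excel would after you hit Enter.
        self.db.save("TOTAL", self.db.load("A1") + self.db.load("A2"))

class UserInterface:                   # UI tier: what the user sees and types
    def __init__(self, bl):
        self.bl = bl
    def type_into_cell(self, cell, value):
        self.bl.enter_value(cell, value)
        print(f"{cell} = {value}, TOTAL = {self.bl.db.load('TOTAL')}")

ui = UserInterface(BusinessLogic(Database()))
ui.type_into_cell("A1", 2)             # A1 = 2, TOTAL = 2
ui.type_into_cell("A2", 3)             # A2 = 3, TOTAL = 5
```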

 

1-Tier Architecture

This architecture has the UI, the BL, and the DB in one single software package. Software applications like MS Access, MS Excel, QuickBooks, and Peachtree all have one thing in common: the application handles all three tiers (BL, UI, and DB). The data is stored in a file on the local computer or a shared drive. This is the simplest and cheapest of all the architectures, but also the least secure. Since users have direct access to the files, they could move, modify, or, even worse, delete the file, whether by accident or on purpose. There is also usually an issue when multiple users access the same file at the same time: in many cases only one can edit the file while others have read-only access.

Another issue is that 1-tier software packages are not very scalable, and if the amount of data gets too big, the software may become very slow or stop working.

So 1-tier architecture is simple and cheap, but usually insecure, and data can easily be lost if you are not careful.


2-Tier Architecture

This architecture is also called client-server architecture because of its two components: the client that runs the application and the server that handles the database back-end. The client handles the UI and the BL, and the server handles the DB. When the client starts, it establishes a connection to the server and communicates with it as needed while the client is running. The client computer usually can’t see the database directly and can only access the data by starting the client. This means that the data on the server is much more secure: users are unable to change or delete data unless they have specific user rights to do so.

The client-server solution also allows multiple users to access the database at the same time, as long as they are accessing data in different parts of the database. One other huge benefit is that the server handles the data (DB), which allows the client to work on the presentation (UI) and business logic (BL) only. This means that the client and the server share the workload, and by scaling the server to be more powerful than the client, you can usually attach many clients to one server, allowing more users to work on the system at the same time and at a much greater speed.
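As a rough illustration of the 2-tier split, the sketch below uses Python’s standard-library xmlrpc modules: the server process owns the data, while the client runs both the business logic and a bare-bones console “UI”. The item names, port number, and reorder rule are all invented for the example; a real installation would use a proper database server rather than an in-memory dictionary.

```python
# server.py -- the database back-end of a 2-tier (client-server) setup.
# A dict stands in for the real database; only data access lives here.
from xmlrpc.server import SimpleXMLRPCServer

inventory = {"widgets": 42, "gadgets": 7}

def get_quantity(item):
    return inventory.get(item, 0)

def set_quantity(item, qty):
    inventory[item] = qty
    return True

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(get_quantity)
server.register_function(set_quantity)
server.serve_forever()
```

```python
# client.py -- the client runs the business logic (BL) and the UI.
from xmlrpc.client import ServerProxy

db = ServerProxy("http://localhost:8000/")

def needs_reorder(item, threshold=10):      # business logic on the client
    return db.get_quantity(item) < threshold

for item in ("widgets", "gadgets"):         # a bare-bones console "UI"
    flag = "REORDER" if needs_reorder(item) else "ok"
    print(f"{item}: {db.get_quantity(item)} ({flag})")
```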


3-Tier Architecture

In this architecture all three tiers are separated onto different computers. The UI runs on the client (what the user is working with). The BL is running on a separate server, called the business logic tier, middle tier, or service tier. Finally the DB is running on its own database server.

In the client-server solution the client was handling the UI and the BL, which makes the client “thick.” A thick client requires heavy traffic between client and server, making it difficult to use over slower network connections such as the Internet and wireless links (4G, LTE, or Wi-Fi).

By introducing the middle tier, the client handles only the presentation logic (UI). This means that only a little communication is needed between the client and the middle tier (BL), making the client “thin” or “thinner.” An example of a thin client is an Internet browser, which lets you see and provide information quickly and with almost no delay.

As more users access the system, a three-tier solution is more scalable than the other solutions, because you can add as many middle tiers (each running on its own server) as needed to ensure good performance (N-tier or multiple-tier architecture).

Security is also the best in the three-tier architecture because the middle tier protects the database tier.

There is one major drawback to the N-tier architecture: the additional tiers increase the complexity and cost of the installation.
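To make the “thin client” idea concrete, here is a hedged sketch of a middle tier exposed over HTTP using Python’s standard library, with a client that does nothing but fetch and display. The endpoint, port, and data are invented for the example; in a real 3-tier deployment the middle tier would query a separate database server instead of the in-memory stand-in shown here.

```python
# middle_tier.py -- business logic (BL) tier exposed as a small HTTP service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def low_stock_items(threshold=10):
    # Business logic; a real middle tier would ask the database tier here.
    inventory = {"widgets": 42, "gadgets": 7}      # stand-in for the DB tier
    return [item for item, qty in inventory.items() if qty < threshold]

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"low_stock": low_stock_items()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8080), Handler).serve_forever()
```

```python
# thin_client.py -- presentation (UI) only: fetch the answer and display it.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8080/") as resp:
    data = json.load(resp)
print("Items to reorder:", ", ".join(data["low_stock"]) or "none")
```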


1-Tier
Benefits: Very simple; Inexpensive; No server needed
Issues: Poor security; Multi-user issues
Users: Usually 1 (or a few)

2-Tier
Benefits: Good security; More scalable; Faster execution
Issues: More costly; More complex; “Thick” client
Users: 2-100

Multi-Tier
Benefits: Exceptional security; Fastest execution; “Thin” client; Very scalable
Issues: Very costly; Very complex
Users: 50-2000 (+)

 

 

Engineered proteins stick like glue — even in water

This image shows adhesion between the silica tip of an atomic force microscope and adhesive fibers made by fusing mussel foot proteins and curli amyloid fibers.
Credit: Yan Liang

[dropcap]S[/dropcap]hellfish such as mussels and barnacles secrete very sticky proteins that help them cling to rocks or ship hulls, even underwater. Inspired by these natural adhesives, a team of MIT engineers has designed new materials that could be used to repair ships or help heal wounds and surgical incisions.

To create their new waterproof adhesives, the MIT researchers engineered bacteria to produce a hybrid material that incorporates naturally sticky mussel proteins as well as a bacterial protein found in biofilms — slimy layers formed by bacteria growing on a surface. When combined, these proteins form even stronger underwater adhesives than those secreted by mussels.

This project, described in the Sept. 21 issue of the journal Nature Nanotechnology, represents a new type of approach that can be exploited to synthesize biological materials with multiple components, using bacteria as tiny factories.

“The ultimate goal for us is to set up a platform where we can start building materials that combine multiple different functional domains together and to see if that gives us better materials performance,” says Timothy Lu, an associate professor of biological engineering and electrical engineering and computer science (EECS) and the senior author of the paper.

The paper’s lead author is Chao Zhong, a former MIT postdoc who is now at ShanghaiTech University. Other authors are graduate student Thomas Gurry, graduate student Allen Cheng, senior Jordan Downey, postdoc Zhengtao Deng, and Collin Stultz, a professor in EECS.

Complex adhesives

The sticky substance that helps mussels attach to underwater surfaces is made of several proteins known as mussel foot proteins. “A lot of underwater organisms need to be able to stick to things, so they make all sorts of different types of adhesives that you might be able to borrow from,” Lu says.

Scientists have previously engineered E. coli bacteria to produce individual mussel foot proteins, but these materials do not capture the complexity of the natural adhesives, Lu says. In the new study, the MIT team wanted to engineer bacteria to produce two different foot proteins, combined with bacterial proteins called curli fibers — fibrous proteins that can clump together and assemble themselves into much larger and more complex meshes.

Lu’s team engineered bacteria so they would produce proteins consisting of curli fibers bonded to either mussel foot protein 3 or mussel foot protein 5. After purifying these proteins from the bacteria, the researchers let them incubate and form dense, fibrous meshes. The resulting material has a regular yet flexible structure that binds strongly to both dry and wet surfaces.

“The result is a powerful wet adhesive with independently functioning adsorptive and cohesive moieties,” says Herbert Waite, a professor of chemistry and biochemistry at the University of California at Santa Barbara who was not part of the research team. “The work is very creative, rigorous, and thorough.”

The researchers tested the adhesives using atomic force microscopy, a technique that probes the surface of a sample with a tiny tip. They found that the adhesives bound strongly to tips made of three different materials — silica, gold, and polystyrene. Adhesives assembled from equal amounts of mussel foot protein 3 and mussel foot protein 5 formed stronger adhesives than those with a different ratio, or only one of the two proteins on their own.

These adhesives were also stronger than naturally occurring mussel adhesives, and they are the strongest biologically inspired, protein-based underwater adhesives reported to date, the researchers say.

More adhesive strength

Using this technique, the researchers can produce only small amounts of the adhesive, so they are now trying to improve the process and generate larger quantities. They also plan to experiment with adding some of the other mussel foot proteins. “We’re trying to figure out if by adding other mussel foot proteins, we can increase the adhesive strength even more and improve the material’s robustness,” Lu says.

The team also plans to try to create “living glues” consisting of films of bacteria that could sense damage to a surface and then repair it by secreting an adhesive.

The research was funded by the Office of Naval Research, the National Science Foundation, and the National Institutes of Health.


Story Source:

The above story is based on materials provided by Massachusetts Institute of Technology. The original article was written by Anne Trafton. Note: Materials may be edited for content and length.


Journal Reference:

  1. Chao Zhong, Thomas Gurry, Allen A. Cheng, Jordan Downey, Zhengtao Deng, Collin M. Stultz, Timothy K. Lu. Strong underwater adhesives made by self-assembling multi-protein nanofibres. Nature Nanotechnology, 2014; DOI: 10.1038/nnano.2014.199

Interview Question: Difference between 1-tier/2-tier & 3-tier architecture?

“Tier” can be defined as “one of two or more rows, levels, or ranks arranged one above another.”

1-Tier Architecture is the simplest: a single tier serving a single user, the equivalent of running an application on a personal computer. All the components required to run the application are located within it. User interface, business logic, and data storage are all located on the same machine. These applications are the easiest to design, but the least scalable, and because they are not part of a network, they are unsuitable for web applications.

2-Tier Architectures supply a basic network between a client and a server. For example, the basic web model is a 2-tier architecture: a web browser makes a request to a web server, which processes the request and returns the desired response, in this case web pages. This approach improves scalability and separates the user interface from the data layer. However, it does not divide the application layers so they can be used separately, which makes these systems difficult to update and hard to specialize: the entire application must be updated because the layers aren’t separated.

3-Tier Architecture is most commonly used to build web applications. In this model, the browser acts as the client, middleware or an application server contains the business logic, and database servers handle data functions. This approach separates business logic from display and data, so the three layers are commonly known as the Presentation Layer (PL/UI), the Business Logic Layer (BLL), and the Data Access Layer (DAL).


 

Learn more about these architectures at Easy Lesson on 1-Tier vs 2-Tier vs 3-Tier – Winged Post

Smallest possible diamonds form ultra-thin nanothreads

For the first time, scientists have discovered how to produce ultra-thin ‘diamond nanothreads’ that promise extraordinary properties, including strength and stiffness greater than that of today’s strongest nanotubes and polymers. The threads have a structure that has never been seen before. A paper describing this discovery by a research team led by John V. Badding, a professor of chemistry at Penn State University, will be published in the 21 Sept. 2014 issue of the journal Nature Materials. The core of the nanothreads that Badding’s team made is a long, thin strand of carbon atoms arranged just like the fundamental unit of a diamond’s structure — zig-zag ‘cyclohexane’ rings of six carbon atoms bound together, in which each carbon is surrounded by others in the strong triangular-pyramid shape of a tetrahedron.
Credit: Penn State University

[dropcap]F[/dropcap]or the first time, scientists have discovered how to produce ultra-thin “diamond nanothreads” that promise extraordinary properties, including strength and stiffness greater than that of today’s strongest nanotubes and polymers. A paper describing this discovery by a research team led by John V. Badding, a professor of chemistry at Penn State University, will be published in the 21 September 2014 issue of the journal Nature Materials.

“From a fundamental-science point of view, our discovery is intriguing because the threads we formed have a structure that has never been seen before,” Badding said. The core of the nanothreads that Badding’s team made is a long, thin strand of carbon atoms arranged just like the fundamental unit of a diamond’s structure — zig-zag “cyclohexane” rings of six carbon atoms bound together, in which each carbon is surrounded by others in the strong triangular-pyramid shape of a tetrahedron. “It is as if an incredible jeweler has strung together the smallest possible diamonds into a long miniature necklace,” Badding said. “Because this thread is diamond at heart, we expect that it will prove to be extraordinarily stiff, extraordinarily strong, and extraordinarily useful.”

The team’s discovery comes after nearly a century of failed attempts by other labs to compress separate carbon-containing molecules like liquid benzene into an ordered, diamondlike nanomaterial. “We used the large high-pressure Paris-Edinburgh device at Oak Ridge National Laboratory to compress a 6-millimeter-wide amount of benzene — a gigantic amount compared with previous experiments,” said Malcolm Guthrie of the Carnegie Institution for Science, a coauthor of the research paper. “We discovered that slowly releasing the pressure after sufficient compression at normal room temperature gave the carbon atoms the time they needed to react with each other and to link up in a highly ordered chain of single-file carbon tetrahedrons, forming these diamond-core nanothreads.”

Badding’s team is the first to coax molecules containing carbon atoms to form the strong tetrahedron shape, then link each tetrahedron end to end to form a long, thin nanothread. He describes the thread’s width as phenomenally small, only a few atoms across, hundreds of thousands of times smaller than an optical fiber and enormously thinner than an average human hair. “Theory by our co-author Vin Crespi suggests that this is potentially the strongest, stiffest material possible, while also being light in weight,” he said.

The molecule they compressed is benzene — a flat ring containing six carbon atoms and six hydrogen atoms. The resulting diamond-core nanothread is surrounded by a halo of hydrogen atoms. During the compression process, the scientists report, the flat benzene molecules stack together, bend, and break apart. Then, as the researchers slowly release the pressure, the atoms reconnect in an entirely different yet very orderly way. The result is a structure that has carbon in the tetrahedral configuration of diamond with hydrogens hanging out to the side and each tetrahedron bonded with another to form a long, thin, nanothread.

“It really is surprising that this kind of organization happens,” Badding said. “That the atoms of the benzene molecules link themselves together at room temperature to make a thread is shocking to chemists and physicists. Considering earlier experiments, we think that, when the benzene molecule breaks under very high pressure, its atoms want to grab onto something else but they can’t move around because the pressure removes all the space between them. This benzene then becomes highly reactive so that, when we release the pressure very slowly, an orderly polymerization reaction happens that forms the diamond-core nanothread.”

The scientists confirmed the structure of their diamond nanothreads with a number of techniques at Penn State, Oak Ridge, Arizona State University, and the Carnegie Institution for Science, including X-ray diffraction, neutron diffraction, Raman spectroscopy, first-principle calculations, transmission electron microscopy, and solid-state nuclear magnetic resonance (NMR). Parts of these first diamond nanothreads appear to be somewhat less than perfect, so improving their structure is a continuing goal of Badding’s research program. He also wants to discover how to make more of them. “The high pressures that we used to make the first diamond nanothread material limit our production capacity to only a couple of cubic millimeters at a time, so we are not yet making enough of it to be useful on an industrial scale,” Badding said. “One of our science goals is to remove that limitation by figuring out the chemistry necessary to make these diamond nanothreads under more practical conditions.”

The nanothread also may be the first member of a new class of diamond-like nanomaterials based on a strong tetrahedral core. “Our discovery that we can use the natural alignment of the benzene molecules to guide the formation of this new diamond nanothread material is really interesting because it opens the possibility of making many other kinds of molecules based on carbon and hydrogen,” Badding said. “You can attach all kinds of other atoms around a core of carbon and hydrogen. The dream is to be able to add other atoms that would be incorporated into the resulting nanothread. By pressurizing whatever liquid we design, we may be able to make an enormous number of different materials.”

Potential applications that most interest Badding are those that would be vastly improved by having exceedingly strong, stiff, and light materials — especially those that could help to protect the atmosphere, including lighter, more fuel-efficient, and therefore less-polluting vehicles. “One of our wildest dreams for the nanomaterials we are developing is that they could be used to make the super-strong, lightweight cables that would make possible the construction of a ‘space elevator,’ which so far has existed only as a science-fiction idea,” Badding said.

In addition to Badding at Penn State and Guthrie at the Carnegie Institution, other members of the research team include George D. Cody at the Carnegie Institution, Stephen K. Davidowski at Arizona State, and Thomas C. Fitzgibbons, En-shi Xu, Vincent H. Crespi, and Nasim Alem at Penn State. Penn State affiliations include the Department of Chemistry, the Materials Research Institute, the Department of Physics, and the Department of Materials Science and Engineering. This research received financial support as part of the Energy Frontier Research in Extreme Environments (EFree) Center, an Energy Frontier Research Center funded by the U.S. Department of Energy (Office of Science award #DE-SC0001057).


Story Source:

The above story is based on materials provided by Penn State. The original article was written by Barbara K. Kennedy. Note: Materials may be edited for content and length.