
Friday, October 23, 2009

MEMS, or micro-electro-mechanical systems, are miniaturized mechanical devices built with the materials and techniques used to make integrated circuits for computers. These microscopic devices are on the order of a few micrometers in size (a micrometer is one millionth of a meter). Micromachines combine sensors, levers, gears, and electronic elements such as transistors to perform various tasks. Extremely thin layers of silicon, only a few millionths of a meter thick, are used to fabricate micromachines and can be shaped into levers, gears, and other mechanical devices. Spurred by efforts such as Sandia National Laboratories' Intelligent Micromachine initiative, this technology is currently used in imaging systems and motion sensors, and is being developed for applications in biomedicine, computers, and telecommunications.

MEMS technology is currently used in devices such as air bag sensors and certain types of video display systems. It is being adapted for use in many other fields, such as medicine, computing and communications, since it offers advantages over larger machines in sensitivity, speed and energy consumption.
Cold welding, a phenomenon recognized in the 1940s, occurs when two clean, flat surfaces of similar metal strongly adhere if brought into contact under vacuum. It was later found that pressing the metals tightly together, increasing the duration of contact, raising the temperature of the workpieces, or any combination of these strengthens the initial bond. Research shows that, even for very smooth metals, only the high points of each surface, called 'asperities', touch the opposing piece, meaning that as little as a few thousandths of a percent of the total surface is involved in the adhesion; yet these small areas of contact develop strong molecular connections. If the surfaces are sufficiently smooth, the metallic forces between them ultimately draw the two pieces completely together and eliminate even the macroscopic interface. Electron microscope investigations of contact points reveal that an actual welding of the two surfaces takes place, after which it is impossible to distinguish the former asperitic interface. The cold-welding effect is eliminated or reduced when the surfaces are exposed to oxygen or certain other reactive compounds, which produce surface layers, for example a metal oxide, with mechanical properties similar to those of the parent element (or softer); in that case surface deformation does not crack the oxide film.

Applications

Powders in powder metallurgy present large surface areas over which vacuum contact can occur, and so they use cold welding to best advantage. For instance, a 1 cm cube of metal comminuted into 240/100 mesh-sieved particles (60–149 μm) yields approximately 1.25×10⁶ grains having a total surface area of 320 cm². This powder, reassembled as a cube, would occupy about twice its original volume, since roughly half of that volume consists of voids.
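As a rough order-of-magnitude check of these figures, the sketch below estimates grain count and total surface area for a 1 cm³ cube broken into equal spherical grains. The diameters used are illustrative assumptions spanning the quoted sieve cut; the exact numbers depend on the grain shape and on the size distribution within that cut.

```python
# Illustrative estimate: comminuting a 1 cm^3 cube into equal spherical grains.
# The grain diameters below are assumptions spanning the quoted sieve cut.
import math

cube_volume_cm3 = 1.0

for d_um in (60.0, 115.0, 149.0):           # assumed grain diameters, micrometres
    d_cm = d_um * 1e-4                      # convert to centimetres
    grain_volume = math.pi / 6.0 * d_cm**3  # volume of one spherical grain
    n_grains = cube_volume_cm3 / grain_volume
    total_area = n_grains * math.pi * d_cm**2   # equivalently 6 * V / d for spheres
    print(f"d = {d_um:5.0f} um -> {n_grains:.2e} grains, "
          f"total surface ~ {total_area:.0f} cm^2")
```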

It is important to obtain minimum porosity (that is, high starting density) in the initial powder-formed mass if a sturdy final product is desired. Minimum porosity results in less dimensional change upon compression of the workpiece, as well as lower pressures, decreased temperatures, and less time to prepare a given part. Careful vibratory settling reduces porosity in monodiameter powders to less than 40%. Decreasing the average grain size does not reduce porosity, but the large increase in net grain area enhances the contact-welding effect and noticeably improves the 'green strength' of relatively uncompressed powder. In space applications, moulds may not even be required to hold the components for subsequent operations such as sintering, because cold welding in the forming stage is adequate to produce usable hard parts.
Hard monodiameter spheres packed like cannonballs into face-centered arrays give a porosity of about 26%, much lower than the ultimate minimum of about 35% for vibrated collections of monodiameter spheres. (The use of irregularly shaped particles produces even more porous powders.) Porosity can be reduced by using a selected range of grain sizes, typically 3–6 carefully chosen gauges in most terrestrial applications. Such mixtures theoretically permit less than 4% porosity in the starting powder, although 15–20% is more typical in practice for binary or ternary mixtures. Powders containing particles in a wide range of sizes can approach 0% porosity as ever finer grains are introduced. Even vibration or shaking to free particle movement cannot induce the closest configuration in powder mixtures; sizeable theoretical and practical analyses exist to assist in understanding the packing of powders. Gravitational differential settling of the mixture tends to segregate grains in the compress, and some degree of cold welding occurs immediately upon formation of the powder compress, generating internal friction that strongly impedes further compaction. Moderate forces applied to a powder mass immediately cause grain rearrangement and superior packing: pressures of 10⁵ Pa (N/m²) decrease porosity by 1–4%, while increasing the force to 10⁷ Pa adds only a further 1–2%. At still higher pressures, or with the application of heat, distinct physical effects of particle deformation and mass flow become significant, and much greater force is required to mechanically close all remaining voids by plastic flow of the compressed metal.
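For reference, the porosities quoted above for ordered packings follow directly from sphere geometry. The short sketch below computes the void fraction of the simple-cubic, body-centred and face-centred (cannonball) arrangements of monodiameter spheres, using the standard crystallographic packing-fraction formulas.

```python
# Porosity (void fraction) of ideal ordered packings of equal hard spheres.
import math

packing_fraction = {
    "simple cubic":        math.pi / 6,                  # ~0.524
    "body-centred cubic":  math.pi * math.sqrt(3) / 8,   # ~0.680
    "face-centred cubic":  math.pi / (3 * math.sqrt(2)), # ~0.740 (cannonball stacking)
}

for name, phi in packing_fraction.items():
    print(f"{name:22s} porosity = {1 - phi:.1%}")
# FCC gives ~26% porosity, compared with ~35-40% for vibrated random packings.
```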
* Growing demand for electricity
* Limited energy sources
* Unavailability of land due to high population

Floating solar power stations

* Availability of solar energy
* Direct conversion into electrical energy
* Rich water logged areas
* Requires minimal land area
* Reduction in evaporation of water

* Expense can be minimized by using reservoirs of hydel stations
* 6-10 hours of electricity generation per day

Electricity generation by solar cells
Solar cells, which are made of semiconductor material, absorb the light that falls on them. This energy knocks electrons loose so that they can flow freely, and the electric field in the PV cell forces these free electrons to move in a particular direction; this flow is the current, which is tapped by placing metal contacts on the top and bottom of the PV cell. The current multiplied by the cell's voltage gives the power of the solar cell.
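A minimal sketch of the power relation described above (P = V × I), scaled up to an illustrative floating array. The cell voltage, current, panel count and sun hours are assumed values, not figures from the post.

```python
# Power of a PV cell is the product of its voltage and current (P = V * I).
# All numbers below are illustrative assumptions.
cell_voltage = 0.5        # volts per cell (typical silicon cell, assumed)
cell_current = 6.0        # amperes at the operating point (assumed)
cell_power = cell_voltage * cell_current
print(f"One cell: {cell_power:.1f} W")

cells_per_panel = 60
panels_in_array = 1000    # hypothetical floating array
sun_hours_per_day = 8     # within the 6-10 h range quoted above
daily_energy_kwh = cell_power * cells_per_panel * panels_in_array * sun_hours_per_day / 1000
print(f"Array output: ~{daily_energy_kwh:,.0f} kWh per day (ideal, no losses)")
```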
Building an economical and efficient means of energy storage plays a key role in energy conservation, as it can radically reduce the mismatch in time between energy supply and energy demand. This concept, as crucial as locating fresh sources of energy, can enhance the output of the whole energy system, thereby increasing its reliability.

In phase change materials (PCMs) such as salt hydrates, paraffins, non-paraffins and inorganic eutectics, solar heat energy is stored as latent heat, which makes the storage compact and allows heat to be delivered at a nearly steady temperature.
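To illustrate why latent-heat storage in a PCM is compact, the sketch below compares heat stored in a paraffin-type PCM across its melting point with sensible-heat storage in water over the same temperature swing. The material properties are typical textbook values used here as assumptions.

```python
# Latent vs. sensible heat storage, per kilogram (illustrative property values).
mass = 1.0                 # kg

# Paraffin-type PCM (assumed typical values)
latent_heat = 200e3        # J/kg, heat of fusion
cp_pcm = 2.0e3             # J/(kg K), specific heat
delta_T = 10.0             # K swing around the melting point

q_pcm = mass * (cp_pcm * delta_T + latent_heat)

# Water used as a sensible-heat store over the same 10 K swing
cp_water = 4.18e3          # J/(kg K)
q_water = mass * cp_water * delta_T

print(f"PCM stores   : {q_pcm/1e3:.0f} kJ/kg")
print(f"Water stores : {q_water/1e3:.0f} kJ/kg")
print(f"Ratio        : {q_pcm/q_water:.1f}x, most of it delivered near the melting temperature")
```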
The world first came to know about these tiny tubes of rolled-up sheets of carbon hexagons in 1991. Born at the research labs of NEC, these miniature tubes are some 10,000 times thinner than a human hair, yet their size bears no comparison to their potential uses.

By the controlled manipulation of carbon nanotubes, extremely small electronic devices can be designed. These nanotubes can also be used as microscopic wires.

Recent experiments by IBM scientists, using an atomic force microscope (AFM), have been fruitful in developing methods for changing the position, shape and orientation of the tiny tubes, as well as ways of cutting them.
Flywheel energy storage is about converting the energy of motion, kinetic energy, into electricity. Kinetic energy is accumulated in the flywheel energy storage system by a small motor that keeps a rotor spinning in a low-friction environment. When interim back-up power is needed because of a power failure or fluctuations in supply, the accumulated kinetic energy of the spinning rotor is converted back into electricity.

The basic idea behind Active Power's CleanSource flywheel technology is the integration of the motor, the flywheel rotor and the generator into a single built-in system. In operation, the motor draws power from the electric supply to keep the flywheel spinning continuously, maintaining a constant store of kinetic energy. When needed, this kinetic energy is converted into electricity by the generator. The advantage of the all-in-one design is that cost can be cut drastically while product efficiency is improved.
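A back-of-the-envelope sketch of the kinetic energy a flywheel rotor stores, E = ½Iω², and how long it could carry an assumed load as ride-through power. The rotor dimensions, speed and load are hypothetical values, not Active Power specifications.

```python
# Kinetic energy stored in a spinning flywheel: E = 0.5 * I * omega^2
# I (solid cylinder) = 0.5 * m * r^2.  All figures are assumptions.
import math

mass = 250.0                    # kg, rotor mass (assumed)
radius = 0.25                   # m, rotor radius (assumed)
rpm = 7700.0                    # rotor speed (assumed)

inertia = 0.5 * mass * radius**2
omega = rpm * 2 * math.pi / 60.0
energy_j = 0.5 * inertia * omega**2

load_kw = 100.0                 # hypothetical UPS load to ride through
ride_through_s = energy_j / (load_kw * 1e3)

print(f"Stored energy : {energy_j/1e6:.2f} MJ ({energy_j/3.6e6:.2f} kWh)")
print(f"Ride-through  : ~{ride_through_s:.0f} s at {load_kw:.0f} kW (ignoring losses)")
```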

Synthetic polymers, which include polyethylene, nylon and others, are often referred to by the term 'plastics'. They are broadly organized into three classes: thermoplastics, thermosets and elastomers.

These artificial polymers find use in almost every walk of everyday life, whether in food packaging, film, fibre, tube or pipe manufacture. The personal care industry also takes advantage of this plastics revolution in areas such as product texture, binding and moisture retention. The term 'plastics explosion' refers to the large-scale manufacture of polymers such as acrylic and polyethylene.
Propulsion subsystems in a spacecraft help preserve three-axis stability and control spin. In addition, they are used to carry out manoeuvres and to make slight alterations in trajectory. The larger ones, called 'engines', can generate the large torques needed to maintain stability while a solid rocket motor burns. The small delta-V tasks, interplanetary trajectory correction manoeuvres, orbit trim manoeuvres, reaction wheel de-saturation manoeuvres and routine three-axis stabilization or spin control, are handled by smaller thrusters providing forces between about 1 N and 10 N. While the AACS (attitude and articulation control subsystem) initiates many of the routine actions of the propulsion subsystem, all of these actions are regulated by the CDS (command and data subsystem).

The Magellan spacecraft had four rocket engine modules, each housing two 445-N engines, one gold-colored 22-N thruster, and three gold-colored 1-N thrusters.

Each class of engine had different tasks to fulfil. The 445-N engines handled mid-flight course corrections and orbit-trim corrections, and also steered the spacecraft during the burn of the solid rocket motor on entering Venus orbit. The 22-N thrusters kept the spacecraft from rolling during manoeuvres, while the 1-N thrusters provided the small impulses needed for reaction wheel de-saturation and minor manoeuvres.
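As a rough feel for the sizes involved, the sketch below estimates the acceleration and burn time for a small trajectory-correction delta-V using the two 445-N engines of one module. The spacecraft mass and the delta-V target are assumed values, not Magellan mission figures.

```python
# Time required for a small delta-V burn: t = m * dv / F   (constant mass assumed,
# which is reasonable when the propellant used is a small fraction of total mass).
spacecraft_mass = 3000.0       # kg, assumed wet mass
thrust = 2 * 445.0             # N, two 445-N engines firing together
delta_v = 2.0                  # m/s, hypothetical trajectory-correction manoeuvre

acceleration = thrust / spacecraft_mass
burn_time = spacecraft_mass * delta_v / thrust

print(f"Acceleration : {acceleration*1000:.1f} mm/s^2")
print(f"Burn time    : {burn_time:.0f} s for a {delta_v} m/s correction")
```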
It was in Wales, on 21 February 1804, that the world's very first steam locomotive was introduced by Richard Trevithick. The speed of that first locomotive was just 8 km/h (5 mph). Inventions followed, and by 1815 George Stephenson had developed the first workable steam locomotive. In 1825, just 21 years after one of the world's most powerful and revolutionary inventions in transportation, the world's first passenger train was born. Hauled by a locomotive developed by George Stephenson, it ran at a speed of 25 km/h (16 mph).

It is hard to define the term 'high-speed train' in today's technologically sophisticated world, when we have locomotives that can be magnetically lifted off the tracks or race along at 500 km/h (311 mph). For the purposes of this discussion, trains that run at or above 150 km/h are considered high-speed trains.
The concept behind this breakthrough in transportation was put forward by the American rocket scientist Robert Goddard in 1904. His hypothesis, that trains could be lifted off the tracks by the sheer use of electromagnetic rails, gained popularity with many nations of the world. By the 1970s, Japan and Germany had charted out intense investigations and plans based on this notion of flying trains.

The movement of a maglev train is based on the principles of magnetism and magnetic fields created by high-powered electromagnets. Wheels and other moving parts that cause friction are removed from this 'flying train', which allows the train both to be lifted off the track and to move smoothly. The suspension systems that give the maglev train this leverage are electromagnetic suspension (EMS) and electrodynamic suspension (EDS). Another recently developed suspension system, called Inductrack, is still in the research and design phase.


Maglev trains are an ideal choice for both low-speed and high-speed transportation. Between 1984 and 1995, Birmingham in England operated a low-speed maglev for short-distance travel. Engineers have generally preferred to create high-speed maglev trains, which can travel at speeds of around 552 km/h (343 mph).
The rotary engine, also referred to as the Wankel engine or Wankel rotary engine after its inventor Dr. Felix Wankel, is a type of internal combustion engine. It produces power like a regular piston engine but differs completely in its working mechanism. In a regular piston engine, the same cylinder performs the four distinct tasks of intake, compression, combustion and exhaust, one after the other.

In a rotary engine, by contrast, each of those four jobs takes place in its own dedicated part of the housing, as if there were a separate chamber committed to doing only that particular task. The working chamber, formed between the rotor and the housing, simply carries the charge from one region to the next instead of one cylinder repeatedly switching from job to job.
The manufacture of intricate parts has taken on a new dimension with the introduction of innovative processes that combine chemical, electrical and mechanical action. Electrochemical machining (ECM), which removes metal by controlled anodic dissolution, is a groundbreaking technology for producing complicated components.
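ECM removal rates follow Faraday's law of electrolysis: the mass dissolved is proportional to the charge passed. The sketch below estimates the removal rate for iron at an assumed machining current; the current, current efficiency and workpiece material are illustrative assumptions.

```python
# Faraday's law estimate of ECM metal removal: m = (I * t * M) / (z * F)
FARADAY = 96485.0          # C/mol

current = 500.0            # A, assumed machining current
molar_mass = 55.85e-3      # kg/mol, iron
valency = 2                # Fe -> Fe2+ dissolution assumed
efficiency = 0.9           # assumed current efficiency
density = 7870.0           # kg/m^3, iron

mass_rate = efficiency * current * molar_mass / (valency * FARADAY)   # kg/s
volume_rate = mass_rate / density                                     # m^3/s

print(f"Mass removal   : {mass_rate*1000*60:.2f} g/min")
print(f"Volume removal : {volume_rate*1e9*60:.0f} mm^3/min")
```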
Explosives that are potent in nature and large in size, like dynamite, need another explosive device to trigger the actual blast. Moreover, these devices should meet safety standards while still having enough power to set off the explosion. All explosive compounds are sensitive to some degree: if a compound is too sensitive there is always a chance of an accidental explosion, while if it is made safer by lowering its sensitivity there is the opposite risk that it will not ignite when intended.

The blasting cap comes into the picture when both of these issues arise. A blasting cap is a small detonating device that usually comes in three categories: non-electric caps, electric caps and fuse caps. It is safer to use because the blast is triggered through a dynamo device that sends a small current through a wire, in keeping with safety requirements. A blasting cap contains an easy-to-ignite primary explosive that provides the preliminary activation energy for a more stable high explosive, and what makes it more user-friendly and dependable is that it also carries a booster explosive. Compounds such as lead azide (made from sodium azide), mercury fulminate and tetryl are typical of those employed in these caps. Because of their small size and innocuous appearance, blasting caps are often misidentified, which causes accidents.
ABSTRACT

Participant Name: Manickam. AR
Title of Paper: Influence of an iron fuel additive on the performance and emissions of a DI diesel engine.
Project work carried out at the College of Engineering, Anna University, Chennai 600 025, South India, for the course Master of Engineering in Internal Combustion Engineering, May 2007.

This program used a twin-cylinder 0.6-litre DI NA diesel engine to study the influence of ferrocene as a fuel additive on performance characteristics and on particulate and NOx emissions. Diesel fuel additives are used for a wide variety of purposes; however, they can be grouped into four major categories:

* Engine performance
* Fuel stability
* Fuel handling
* Contaminant control

In the engine-performance category is ferrocene, a combustion improver. In the present study ferrocene was used as a fuel additive with diesel at 25 ppm and 250 ppm by weight, and performance and emission characteristics were studied. Among fuel additives there are metal-containing additives, which are most commonly used to improve performance. The metal concentrations used in some earlier tests have been greater than 1000 ppm, which is excessive; such high doses produced an increase in total particulate mass emitted. Another diesel performance improvement technique is heterogeneous in-cylinder catalysis with platinum, which suggested that the simple presence of a catalytic metal is not sufficient to provide combustion modification. Ferrocene is a unique compound, having an iron content of 30% by weight, no oxygen in the molecule, and good thermal and oxidative stability. Studies show that it has good combustion catalytic activity. In the present work ferrocene was used as an additive in various ratios with diesel fuel, and performance and emission studies were carried out in a four-stroke, twin-cylinder, water-cooled, direct-injection diesel engine. The following conclusions are drawn from the experimental results:

* NOx and smoke reduced with increasing percentage of ferrocene as additive, with a simultaneous increase in brake thermal efficiency.
* The use of ferrocene as an additive influences the combustion process.
* At the 250 ppm dose, an increase in particulate emission is observed.
* At the 25 ppm dose, NOx is reduced and performance is improved.
* 25 ppm is the optimum dose of ferrocene with diesel.
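For a feel of how small these doses are, the sketch below converts the 25 ppm and 250 ppm by-weight doses into grams of ferrocene per litre of diesel; the diesel density is an assumed typical value.

```python
# Ferrocene dose per litre of diesel for the two treatment rates studied.
diesel_density = 0.835      # kg/L, assumed typical diesel density

for ppm in (25, 250):
    grams_per_litre = ppm * 1e-6 * diesel_density * 1000.0
    print(f"{ppm:4d} ppm by weight -> {grams_per_litre:.3f} g of ferrocene per litre of diesel")
```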
The starting torque is improved by making the following adjustments:
1. Initially, by changing the rail track, i.e. by cutting light slots (an outer groove) on the track for some distance, e.g. 10 metres.

2. Secondly, by changing the locomotive wheel, i.e. instead of a smooth surface, giving it an inner groove so that it fits exactly into the groove on the rail track, which avoids slipping during starting and stopping. By this means, the starting torque can be improved by at least 25%, and the power lost to wheel slip is reduced.
Nanotechnology is the development and production of artefacts in which a dimension of less than 100 nanometres (nm) is critical to functioning (1 nm = 10⁻⁹ m, about 40 billionths of an inch). Nanotechnology is a hybrid science combining engineering and chemistry. Atoms and molecules stick together because they have complementary shapes that lock together, or charges that attract. As millions of these atoms are pieced together by nanomachines, a specific product begins to take shape. The goal of nanotechnology is to manipulate atoms individually and place them in a pattern to produce a desired structure. Nanotechnology is likely to change the way almost everything, including medicine, computers and cars, is designed and constructed. It holds out the promise of materials of precisely specified composition and properties, which could yield structures of unprecedented strength and computers of extraordinary compactness and power. Nanotechnology may lead to revolutionary methods of atom-by-atom manufacturing and to surgery on the cellular scale. Scientists have made some progress at building devices, including computer components, at nanoscales, but practical nanotechnology is still anywhere from five to fifteen years in the future. Nanotechnology is the new frontier and its potential impact is compelling.
The gas turbine has been an enormously successful power plant for aircraft and marine propulsion and for electric power generation, owing to its light weight, smooth and reliable operation, low emissions, and varied applications. Nevertheless, it is not very efficient at converting fuel energy into useful work, due to fundamental thermodynamic limitations imposed by turbomachinery technology.

In recent years there has been renewed interest in the potentially high efficiency of alternative thermodynamic cycles and unsteady combustion systems for propulsion and gas turbine applications, with pulse detonation engines and wave rotors receiving significant attention. While wave rotors have mostly been used as dynamic pressure exchangers in previous efforts, achieving nearly constant-volume combustion inside the rotor channels is a relatively new goal that has been investigated both experimentally and numerically. Such a pressure-gain combustion device, known as the (internal) combustion wave rotor, should yield a higher cycle thermal efficiency than conventional constant-pressure combustors. This presentation provides an overview of the latest developments in the field of combustion wave rotors, highlighting the ongoing joint research effort between Rolls Royce, IUPUI, and Purdue University to design, test, and commercialize the combustion wave rotor.
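The "fundamental thermodynamic limitation" mentioned above can be illustrated with the ideal air-standard Brayton cycle, whose thermal efficiency depends only on the pressure ratio. The sketch below evaluates η = 1 − r^((1−γ)/γ) for a few pressure ratios; a pressure-gain (near constant-volume) combustor effectively raises the pressure ratio seen by the turbine without additional compressor work, which is the source of the efficiency benefit claimed for the combustion wave rotor. The pressure ratios chosen are arbitrary illustrative values.

```python
# Ideal air-standard Brayton cycle efficiency: eta = 1 - r**((1 - gamma)/gamma)
gamma = 1.4                     # ratio of specific heats for air

for r in (10, 20, 30, 40):      # overall pressure ratios (illustrative)
    eta = 1.0 - r ** ((1.0 - gamma) / gamma)
    print(f"pressure ratio {r:2d} -> ideal thermal efficiency {eta:.1%}")
```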
The safety of future manned missions to the Moon and Mars hinges upon a proper understanding of how fires behave in the zero-gravity/microgravity environment of outer space. Past studies have been limited to low oxygen concentrations and to configurations in which a high-velocity fuel jet enters an oxidizing atmosphere. However, one important fire scenario in space is a high-velocity, high-oxygen oxidizer jet encountering fuel (an inverse flame configuration). This study focuses on the flame shapes of oxygen-enhanced flames in inverse and normal diffusion flame configurations. The objective is to compare results from recent analytical models with experimental data for a number of normal and inverse flame configurations with varying oxygen concentrations, in order to understand the effect of high oxygen concentration and gravity on flame shape. Flame shapes are important to fire safety researchers and for flame radiation calculations, which require flame surface area and volume information. The results show that in highly convective inverse diffusion flames gravity has only a marginal effect on flame shape. The effects of buoyant acceleration and thermal gradients in the flame are studied through an extended form of the Roper model. The results are encouraging considering the limitations of the analytical model.
Important two-phase fluid mechanics problems that are dominated or significantly affected by capillary effects are not oddities but rather daily events in our lives. Seemingly unrelated topics such as the economics of satellite television and pulmonary health and safety research share capillary fluid physics. Current and future miniaturized two-phase heat transfer loops and fuel cells raise similar concerns. This seminar presents research in a number of capillary fluids topics on Earth and in orbit. An energy-method code, Surface Evolver, is exploited to address a variety of engineering and science issues. Details of the modeling are discussed, including advantages of the surface mesh, the excellent high-fidelity contact angle boundary condition, and the broad applicability of the code. Results are presented from work in the control and gauging of liquid rocket propellants in orbit, sealing of pulmonary passages by droplets, infiltration of liquid into porous media, critical contact angle determination, space station fluids experiment design, the 2500-liter Dewar in the recently launched Gravity Probe B satellite, and similar topics. Conclusions drawn from these results demonstrate the great efficiency of applying energy methods to capillary fluids problems in engineering and science.
The Scuderi Split Cycle Engine design, conceived by Carmelo J. Scuderi (1925-2002), is a rethink of the conventional four-stroke Otto cycle internal combustion engine. While as of this writing no working prototype of the engine exists, computer simulations carried out by the Scuderi Group and the Southwest Research Institute showed promising gains in efficiency and reductions in toxic emissions. It also has the innate capacity to power an air hybrid system.

In a conventional Otto-cycle engine, each cylinder performs four strokes per cycle: intake, compression, power, and exhaust. This means that two revolutions of the crankshaft are required for each power stroke. The Scuderi split-cycle engine divides these four strokes between two paired cylinders: one for intake/compression and another for power/exhaust. Compressed air is transferred from the compression cylinder to the power cylinder through a crossover passage. Fuel is then injected and fired to produce the power stroke. Because each cylinder pair produces one power stroke per crankshaft revolution, a Scuderi-cycle engine has about the same total engine size (number of cylinders and displacement) as a comparable Otto-cycle engine.

The power cylinder fires just after the piston has begun its downward motion (after top dead center, or ATC). This is in contrast to engine design convention, which calls for combustion just before top dead center (BTC) in order to allow combustion pressure to build. The Scuderi-cycle engine can get away with firing ATC because its burn rate is faster, and so is able to build pressure more quickly. This property of firing ATC is a key feature of the design, as it enables the engine's higher efficiency and lower emissions.
In engineering, the Miller cycle is a combustion process used in a type of four-stroke internal combustion engine. The Miller cycle was patented by Ralph Miller, an American engineer, in the 1940s.

This type of engine was first used in ships and stationary power-generating plants, but was adapted by Mazda for their KJ-ZEM V6, used in the Millenia sedan. More recently, Subaru has combined a Miller cycle flat-4 with a hybrid driveline for their 'Turbo Parallel Hybrid' car, known as the Subaru B5-TPH.

A traditional Otto cycle engine uses four 'strokes', of which two can be considered 'high power' – the compression stroke (high power consumption) and power stroke (high power production). Much of the internal power loss of an engine is due to the energy needed to compress the charge during the compression stroke, so systems that reduce this power consumption can lead to greater efficiency.

In the Miller cycle, the intake valve is left open longer than it would be in an Otto cycle engine. In effect, the compression stroke becomes two discrete stages: the initial portion, when the intake valve is open, and the final portion, when the intake valve is closed. This two-stage intake/compression behaviour creates the so-called 'fifth' stroke that the Miller cycle introduces. As the piston initially moves upwards in what is traditionally the compression stroke, the charge is pushed back out through the still-open valve. Typically this loss of charge air would result in a loss of power. However, in the Miller cycle, the cylinder is over-fed with charge air from a supercharger, so pushing some of the charge air back out into the intake manifold is entirely planned. The supercharger typically needs to be of the positive displacement type because of its ability to produce boost at relatively low engine speeds; otherwise, low-rpm torque will suffer.

A key aspect of the Miller cycle is that the compression stroke actually starts only after the piston has pushed out this 'extra' charge and the intake valve closes. This happens at around 20% to 30% into the compression stroke. In other words, the actual compression occurs in the latter 70% to 80% of the compression stroke. The piston gets the same resulting compression as it would in a standard Otto cycle engine for 70% of the work.

The Miller cycle results in an advantage as long as the supercharger can compress the charge using less energy than the piston would use to do the same work. Over the entire compression range required by the engine, the supercharger is used to generate the low levels of compression, where it is most efficient; the piston then generates the remaining, higher levels of compression, operating in the range where it is more efficient than a supercharger. Thus the Miller cycle uses the supercharger for the portion of the compression where it is best and the piston for the portion where it is best. In total, this reduces the power needed to run the engine by 10% to 15%. To this end, successful production engines using this cycle have typically used variable valve timing to effectively switch off the Miller cycle in regions of operation where it offers no advantage.

In a typical spark ignition engine, the Miller cycle yields an additional benefit. The intake air is first compressed by the supercharger and then cooled by an intercooler. This lower intake charge temperature, combined with the lower compression of the intake stroke, yields a lower final charge temperature than would be obtained by simply increasing the compression of the piston. This allows ignition timing to be altered to beyond what is normally allowed before the onset of detonation, thus increasing the overall efficiency still further.
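The lower final charge temperature can be illustrated with ideal adiabatic-compression relations. In the sketch below, the same overall volumetric compression is achieved either entirely by the piston or split between a supercharger stage (whose outlet is intercooled back toward ambient) and a shorter effective piston compression; the temperatures, compression split and near-perfect intercooler are simplifying assumptions.

```python
# Final charge temperature: all-piston compression vs. supercharger + intercooler + piston.
# Ideal adiabatic compression of air: T2 = T1 * r**(gamma - 1) for a volume ratio r.
gamma = 1.4
T_ambient = 300.0            # K, intake air temperature (assumed)
r_total = 10.0               # overall effective volumetric compression ratio (assumed)

# Case 1: piston does all the compression.
T_all_piston = T_ambient * r_total ** (gamma - 1)

# Case 2: supercharger provides part of the compression, the charge is then
# intercooled back to near ambient, and the piston does the rest.
r_blower = 1.6               # supercharger volume ratio (assumed)
T_after_intercooler = 310.0  # K, assumes a very effective intercooler
r_piston = r_total / r_blower
T_miller = T_after_intercooler * r_piston ** (gamma - 1)

print(f"All-piston compression       : {T_all_piston:.0f} K")
print(f"Blower + intercooler + piston: {T_miller:.0f} K")
```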

Efficiency is increased by raising the compression ratio. In a typical gasoline engine, the compression ratio is limited due to self-ignition (detonation) of the compressed, and therefore hot, air/fuel mixture. Due to the reduced compression stroke of a Miller cycle engine, a higher overall compression ratio (supercharger compression plus piston compression) is possible, and therefore a Miller cycle engine has a better efficiency.

It should be noted that the benefits of utilizing positive displacement superchargers do not come without a cost. 15% to 20% of the power generated by a supercharged engine is usually required to do the work of driving the supercharger, which compresses the intake charge (also known as boost).

A similar delayed-valve closing method is used in some modern versions of Atkinson cycle engines, but without the supercharging. These engines are generally found on hybrid electric vehicles, where efficiency is the goal, and the power lost compared to the Miller cycle is made up through the use of electric motors.
The Lenoir cycle is an idealised thermodynamic cycle for the pulse jet engine. An ideal gas undergoes

1. constant volume heating,
2. reversible adiabatic expansion,
3. isobaric compression to the volume at the start of the cycle.
The expansion process is isentropic and hence involves no heat interaction. Energy is absorbed as heat during the constant volume process and rejected as heat during the constant pressure process.
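A numerical sketch of the ideal Lenoir cycle built from the three processes listed above, using air-standard ideal-gas relations: heat is added at constant volume, the gas expands isentropically back to the initial pressure, and heat is rejected at constant pressure. The pressure ratio chosen is an arbitrary illustrative value.

```python
# Ideal-gas Lenoir cycle: 1-2 constant-volume heat addition, 2-3 isentropic
# expansion back to p1, 3-1 constant-pressure heat rejection.
gamma = 1.4
cp = 1005.0                    # J/(kg K) for air
cv = cp / gamma                # ~718 J/(kg K)

T1 = 300.0                     # K, state 1 (assumed)
pressure_ratio = 3.0           # p2 / p1 after constant-volume heat addition (assumed)

T2 = T1 * pressure_ratio                                     # constant volume: T ~ p
T3 = T2 * (1.0 / pressure_ratio) ** ((gamma - 1) / gamma)    # isentropic expansion to p1

q_in = cv * (T2 - T1)          # heat added at constant volume
q_out = cp * (T3 - T1)         # heat rejected at constant pressure
eta = 1.0 - q_out / q_in

print(f"T1 = {T1:.0f} K, T2 = {T2:.0f} K, T3 = {T3:.0f} K")
print(f"Ideal Lenoir-cycle efficiency: {eta:.1%}")
```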
The Kalina cycle is a thermodynamic cycle for converting thermal energy into mechanical power which utilizes a working fluid composed of at least two different components; the ratio between those components is varied in different parts of the system to increase thermodynamic reversibility and therefore raise the overall thermodynamic efficiency. There are multiple variants of Kalina cycle systems specifically applicable to different types of heat source. Several proof-of-concept power plants using the Kalina cycle have already been built.

The Kalina cycle was invented by the Russian engineer Aleksandr Kalina.
The Atkinson cycle engine is a type of internal combustion engine invented by James Atkinson in 1882. The Atkinson cycle is designed to provide efficiency at the expense of power.

The Atkinson cycle allows the intake, compression, power, and exhaust strokes of the four-stroke cycle to occur in a single turn of the crankshaft. Owing to the linkage, the expansion ratio is greater than the compression ratio, leading to greater efficiency than with engines using the alternative Otto cycle.

The Atkinson cycle may also refer to a four stroke engine in which the intake valve is held open longer than normal to allow a reverse flow of intake air into the intake manifold. This reduces the effective compression ratio and, when combined with an increased stroke and/or reduced combustion chamber volume, allows the expansion ratio to exceed the compression ratio while retaining a normal compression pressure. This is desirable for improved fuel economy because the compression ratio in a spark ignition engine is limited by the octane rating of the fuel used. A high expansion ratio delivers a longer power stroke, allowing more expansion of the combustion gases and reducing the amount of heat wasted in the exhaust. This makes for a more efficient engine.
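The efficiency benefit of letting the expansion ratio exceed the compression ratio can be shown with an air-standard sketch: both cycles compress from the same state and receive the same heat, but the Atkinson-style (complete-expansion) cycle keeps expanding until the pressure returns to the intake pressure before rejecting heat at constant pressure. The compression ratio, intake state and heat input are illustrative assumptions.

```python
# Air-standard comparison: Otto cycle vs. ideal Atkinson (complete-expansion) cycle.
gamma = 1.4
cp = 1005.0                  # J/(kg K) for air
cv = cp / gamma

T1, p1 = 300.0, 101325.0     # intake state (assumed)
r_c = 8.0                    # compression ratio (assumed)
q_in = 1.2e6                 # J/kg, heat added at constant volume (assumed)

# Common isentropic compression and constant-volume heat addition
T2 = T1 * r_c ** (gamma - 1)
p2 = p1 * r_c ** gamma
T3 = T2 + q_in / cv
p3 = p2 * T3 / T2

# Otto: expand back over the same volume ratio, reject heat at constant volume
T4_otto = T3 / r_c ** (gamma - 1)
eta_otto = 1.0 - cv * (T4_otto - T1) / q_in

# Atkinson: keep expanding isentropically until the pressure returns to p1,
# then reject heat at constant pressure
T4_atk = T3 * (p1 / p3) ** ((gamma - 1) / gamma)
eta_atk = 1.0 - cp * (T4_atk - T1) / q_in
expansion_ratio = (p3 / p1) ** (1.0 / gamma)

print(f"Otto     (r = {r_c:.0f})           : eta = {eta_otto:.1%}")
print(f"Atkinson (r = {r_c:.0f}, e = {expansion_ratio:.1f}): eta = {eta_atk:.1%}")
```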

The disadvantage of the four-stroke Atkinson cycle engine versus the more common Otto cycle engine is reduced power density. Because a smaller portion of the intake stroke is devoted to compressing the intake air, an Atkinson cycle engine does not intake as much air as would a similarly-designed and sized Otto cycle engine.

Four stroke engines of this type with this same type of intake valve motion but with forced induction (supercharging) are known as Miller cycle engines.

Multiple production vehicles use Atkinson cycle engines:

Toyota Prius hybrid electric (front-wheel-drive)

Ford Escape hybrid electric (front- and four-wheel drive)

In all of these vehicles, the lower power level of the Atkinson cycle engine is compensated for through the use of electric motors in a hybrid electric drive train. These electric motors can be used independent of, or in combination with, the Atkinson cycle engine.
In conventional chemical synthesis or chemosynthesis, reactive molecules encounter one another through random thermal motion in a liquid or vapor. In a hypothesized process of mechanosynthesis, reactive molecules would be attached to molecular mechanical systems, and their encounters would result from mechanical motions bringing them together in planned sequences, positions, and orientations. It is envisioned that mechanosynthesis would avoid unwanted reactions by keeping potential reactants apart, and would strongly favor desired reactions by holding reactants together in optimal orientations for many molecular vibration cycles. Mechanosynthetic systems would be designed to resemble some biological mechanisms.
The most fundamental consideration in CFD is how one treats a continuous fluid in a discretized fashion on a computer. One method is to discretize the spatial domain into small cells to form a volume mesh or grid, and then apply a suitable algorithm to solve the equations of motion (the Euler equations for inviscid flow, and the Navier-Stokes equations for viscous flow). In addition, such a mesh can be either irregular (for instance consisting of triangles in 2D, or pyramidal solids in 3D) or regular; the distinguishing characteristic of the former is that each cell must be stored separately in memory. Lastly, if the problem is highly dynamic and occupies a wide range of scales, the grid itself can be dynamically modified in time, as in adaptive mesh refinement methods.

If one chooses not to proceed with a mesh-based method, a number of alternatives exist, notably:

smoothed particle hydrodynamics, a Lagrangian method of solving fluid problems,
Spectral methods, a technique where the equations are projected onto basis functions like the spherical harmonics and Chebyshev polynomials
Lattice Boltzmann methods, which simulate an equivalent mesoscopic system on a Cartesian grid, instead of solving the macroscopic system (or the real microscopic physics).
Methodology
In all of these approaches the same basic procedure is followed.

The geometry (physical bounds) of the problem is defined.
The volume occupied by the fluid is divided into discrete cells (the mesh).
The physical modelling is defined - for example, the equations of motion + enthalpy + species conservation
Boundary conditions are defined. This involves specifying the fluid behaviour and properties at the boundaries of the problem. For transient problems, the initial conditions are also defined.
The equations are solved iteratively as a steady-state or transient.
Analysis and visualization of the resulting solution.
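A minimal sketch of the procedure listed above, applied to about the simplest possible case: steady 1-D heat conduction in a bar, discretised into cells and solved iteratively (Jacobi sweeps). Everything here, the geometry, boundary temperatures and cell count, is an illustrative assumption, not a general CFD solver.

```python
# Steps 1-6 above for steady 1-D heat conduction (d2T/dx2 = 0) in a bar.
import numpy as np

# 1-2. Geometry and mesh: a bar split into N cells
N = 50
T = np.zeros(N)

# 3-4. Physics (pure conduction) and boundary conditions (fixed end temperatures)
T_left, T_right = 100.0, 0.0

# 5. Iterative solution (Jacobi iteration of the discretised equation)
for _ in range(20000):
    T_new = T.copy()
    T_new[0] = 0.5 * (T_left + T[1])
    T_new[-1] = 0.5 * (T[-2] + T_right)
    T_new[1:-1] = 0.5 * (T[:-2] + T[2:])
    if np.max(np.abs(T_new - T)) < 1e-6:
        T = T_new
        break
    T = T_new

# 6. Analysis of the result: the converged profile should be (nearly) linear
print("T at 1/4, 1/2, 3/4 of the bar:", T[N // 4], T[N // 2], T[3 * N // 4])
```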
Computational Fluid Dynamics (CFD) is an extremely effective tool for solving problems involving mass and heat transfer. Problems with varying degrees of complexity can now be solved with the help of appropriately written CFD codes. A CFD analysis is performed to:
1. understand the nature of fluid flow around different geometries, such as an aerofoil or any other shape;
2. find the lift and drag on an aerodynamic structure;
3. analyse heat transfer by fluids in devices ranging from radiators to furnaces to gas turbine combustors to rocket propulsion;
4. analyse fluid flow in complex piping networks;
5. perform boundary layer analysis for many different applications;
6. optimize circuit board design for efficient cooling; and so on.
ANSYS FLOTRAN is a capable package that can be used to analyse fluid flow problems ranging from steady-state to transient, compressible to incompressible, and laminar to turbulent flow. This presentation gives the procedural steps for performing a simple CFD analysis of a convergent flow using the ANSYS 10.0 software.
The recent renewed interest in fluid-fuel nuclear systems has stimulated the development of models and methods for the physics of these systems, for both steady-state and transient analysis. The European research project MOST, within the 5th Framework Programme, has devoted a considerable effort to the assessment of models and computational tools for molten-salt systems. The seminar addresses the activities carried out at Politecnico di Torino on the neutronics of fluid-fuel systems. The basic physical phenomena connected to the motion of the fuel in a nuclear reactor are illustrated. Suitable mathematical models for the proper description of the neutronics are presented, showing the peculiar effects of the velocity field on the steady-state distribution and the time-dependent response of a molten salt reactor, both critical and subcritical. The second part of the seminar is devoted to the derivation of the quasi-static technique for fluid-fuel systems, extending the classic factorization methods. Results obtained in the recent benchmark exercise performed within the MOST project are presented.
Titanium matrix composites (TMCs) have been extensively evaluated for their potential to replace conventional superalloys in high-temperature structural applications, as they provide significant weight savings while maintaining comparable mechanical properties. Gamma titanium aluminide alloys combined with an appropriate fiber could offer an improved TMC for use in intermediate-temperature applications (400-800°C). This investigation is aimed at evaluating the potential of a gamma titanium aluminide alloy with nominal composition Ti-46.5Al-4(Cr,Nb,Ta,B) at.% as a matrix material in future aerospace transportation systems, where very lightweight structures are necessary to meet the goals of advanced aerospace programs. Monotonic tests were performed on thin rolled sheet product to evaluate basic mechanical properties and the stress-strain behavior of the gamma titanium alloy. Coupons of SCS-6/gamma TiAl were manufactured at NASA LaRC. Analytical predictions were made of the optimal composite stress-strain response using AGLPLY. A [0]4 composite lay-up was modeled to estimate residual stresses after consolidation and the potential of these composites as structural materials. The analysis considered various fiber volume ratios and two potential reinforcing fibers: Ultra-SCS and Nextel 610. High residual stresses were observed due to the CTE mismatch in the materials. Laminates with Nextel 610 fibers were found to offer the best potential for a composite in this comparison. The manufactured laminate coupons cracked during cooling due to the large thermal mismatch between the silicon carbide fibers and the matrix material.
In recent years, subcritical multiplying structures driven by an external neutron source have been proposed for the transmutation of long-lived products and for safe and acceptable energy production. The study of the dynamics of these systems requires the adaptation of current methods in order to properly reproduce the peculiar features of source-driven systems. The seminar addresses the recent activities carried out by the Reactor Physics Group of Politecnico di Torino on the neutron kinetics of source-driven systems. The basic physical aspects of the time-dependent behavior of subcritical systems and their consequences for computational methods are illustrated. The importance of transport effects on the dynamic response of a subcritical system to an external source is addressed. Recent work on the adaptation of the quasi-static technique is presented. The problem of the choice of the weighting function and its influence on the effectiveness of quasi-static schemes is highlighted. Furthermore, the principles of the multipoint scheme and the possibility of its inclusion into a quasi-static framework are discussed. Some numerical results show the advantages of the technique when dealing with highly decoupled systems.
Meeting customer requirements has always been the driving principle in product development. Now, more than ever, customers are demanding individually tailored options. Manufacturers that satisfy these demands for customization quickly and at the appropriate price will remain globally competitive. This poses a serious challenge, however. In current industry practice, a rise in product customization requests means increased product complexity, variety, inventory costs, additional outsourcing, and information overload. An industry survey has shown that '20% of the parts initially thought to require new designs actually need them; 40% could be built from an existing design and 40% could be created by modifying an existing design.' In addition, specialization in functions leads to designed parts and systems that undergo many iterations with suppliers, resulting in higher lead times and costs. Because design knowledge and context are intimately related to the 3D geometry, a significant amount of knowledge generated during design and manufacturing is associated with the 3D model. Text-based search is inadequate for supporting the new requirements imposed on the entire product life cycle.
In metallic alloys at room temperature, atomic diffusion is typically very sluggish, and equilibrium or metastable phases do not undergo transformations. However, during plastic deformation, significant atomic mobility is possible. In addition, deformation may lead to mixing, possibly favoring the formation of nonequilibrium phases. In this talk, two examples will be presented, in which the rate of transformation to equilibrium is enhanced by mechanical deformation. In an unstable, nanocrystalline Fe50Cu50 solid solution, ball milling leads to decomposition into the equilibrium, nearly elemental, phases. Non-monotonic effects are observed, which are attributed to the evolving mechanical properties. In amorphous Al90Fe5Gd5, bending at room temperature leads to the precipitation of Al-rich nanocrystals at shear bands. The mechanisms of this transformation are investigated using high-resolution transmission electron microscopy, nanoindentation and bending experiments. Nanovoids are observed in shear bands created under tension, when nanocrystals do not form. It is concluded that the nucleation and growth rate is significantly enhanced by the excess free volume generated in the shear bands, unless the free volume coalesces into nanovoids. The kinetics of free-volume annihilation are analyzed by observing the effect of strain rate on the resulting nanocrystallite-size distribution.
The photoacoustic effect - production of sound from light - may be exploited for detection and localization of gas leaks on the surface of otherwise sealed components. The technique involves filling the test component with a photoactive tracer gas, and irradiating the component to produce photoacoustic sound from any leak site where a tracer gas cloud forms. This presentation describes experiments utilizing 10.6-micron radiation from a carbon-dioxide laser with sulfur hexafluoride as a tracer gas. Here, photoacoustic sounds from a laminar plume of sulfur hexafluoride and several NIST-traceable calibrated leak sources with leak rates between 1 cubic centimeter in 4.6 hrs and 1 cubic centimeter in 6.3 years were recorded with four or twelve microphones in a bandwidth from 3 kHz to ~100 kHz. The measured photoacoustic waveforms compare well with those from an approximate theoretical development based on the forced wave equation when the unmeasured size of the photoactive gas cloud is adjusted within the likely gas-cloud diffusion zone. However, for small gas clouds, the photoacoustic sound amplitudes predicted by the approximate theory fall far below the experimental observations and several potential reasons for this mismatch will be offered. Interestingly, the higher measured signal amplitudes imply that the sensitivity of photoacoustic leak testing may reach or even exceed the capabilities of the most sensitive commercial leak test systems based on helium mass-spectrometers.
Stable tearing crack growth (slow ductile growth of a macroscopic crack in a load-bearing structure) is an important fracture failure process in ductile materials (e.g. metals) and usually precedes the final catastrophic failure of a structure. In recent years, there has been a growing demand for a simulation-based structural design and evaluation methodology that takes into account the stable tearing crack growth process. Within such a methodology, two tools must be available: (1) an efficient computer simulation code that can handle the kinematics of curvilinear crack growth and perform conventional stress analysis, and (2) a practical mixed-mode fracture criterion that can predict both the instant and direction of crack growth. This presentation will describe research efforts at the University of South Carolina aimed at understanding the stable tearing crack growth process and developing a simulation based prediction methodology for the process. Topics include the mixed-mode CTOD criterion, development of the 3D simulation software CRACK3D, effects of stress constraint, modeling of crack tunneling, and experimental validation studies.
While the number of advanced devices designed for the wireless communication industry increases significantly, their sophistication in terms of technology, level of integration, and miniaturization increases as well. Concurrently, cost, size, and performance expectations become more and more stringent, necessitating advanced system architectures, 'new' materials and versatile design optimization procedures. The capability to manipulate the distribution of properties within the dielectric materials in an automated way is critical to enable dramatic improvements in antenna performance, and to overcome traditional design trade-offs between efficiency, bandwidth, and miniaturization. These concepts have not been addressed earlier in the electromagnetic (EM) community due to a plethora of barriers that have made these concepts unfeasible in the past. In this research, the challenges of system design are addressed from an interdisciplinary engineering perspective, with a focus on using automated design tools (such as topology optimization) and artificially engineered composite materials. Specifically, a design framework was developed using the concepts of topology optimization, rigorous analysis models, high-contrast dielectric materials and sophisticated millimeter-and micro-scale fabrication of ceramics, polymers, and other materials to create 'novel' EM devices. This allowed us, for the first time, to develop full three-dimensional volumetric material textures and printed conductor topologies to enhance the performance of various RF components such as filters and patch antennas. Design and fabrication technologies, presented in this research, based on basic capabilities of using engineered materials and systems, when applied correctly, will dramatically shift the face of multidisciplinary engineering design.
The role of nuclear energy in the production of electricity continues to grow in importance as environmental issues put increased pressure on the generation of electricity from other sources. The presentation will give an overview of the present status of the domestic nuclear power industry and the outlook for the future. Issues such as performance of the current fleet, operating license extension as a bridge to the future generation of plants, strategy for the future of the industry, security, political and used fuel storage will be addressed.
Because of their excellent high temperature characteristics, Ni-based single-crystal alloys are used in applications where operating temperatures exceed 900ºC. The initiation of cracks under these conditions is generally associated with micro-scale porosities (typically between 10 and 20 microns). A rate-dependent crystallographic constitutive model in conjunction with a mass diffusion model has been used to study crack initiation in single crystal nickel-base superalloys, exposed to an oxidizing environment and subjected to mechanical loading. The time to crack initiation under creep and fatigue loading conditions has been predicted using a strain-based failure criterion. The problem has been solved using a two dimensional finite element analysis. A notched compact tension specimen has been studied with a casting defect, idealized as a cylindrical void, close to the notch surface. The analysis predicts that, due to the strong localization of inelastic strain at the void, a microcrack will initiate in the vicinity of the void rather than at the notch. The numerical results have shown that the time or number of cycles to crack initiation depends strongly on the applied load level and the void location. The coupled diffusion-deformation finite element studies have shown that environmental effects (i.e. oxidation) reduce the time or number of cycles to crack initiation, due to the oxidation-induced material softening in the vicinity of the notch and void.
In this topic we present a general theory of anharmonic lattice statics for the analysis of defective complex lattices. This theory differs from classical treatments of lattice statics in that it does not rely on knowledge of force constants for a limited number of nearest-neighbor interactions. Instead, the only input needed is an interatomic potential that models the interaction of atoms; this theory takes into account the fact that close to defects the force constants are different from those in the bulk crystal. This formulation of lattice statics reduces the analysis of defective crystals to solving discrete boundary-value problems, which consist of systems of difference equations with some boundary conditions. To be able to solve the governing equations analytically, the discrete governing equations are linearized about a reference configuration that resembles a nominal defect. Fully nonlinear solutions are obtained by modified Newton-Raphson iterations of the harmonic solutions. In this theory, defective crystals are classified into three groups: defective crystals with 1-D symmetry reduction, defective crystals with 2-D symmetry reduction, and defective crystals with no symmetry reduction. Our theory systematically reduces the discrete governing equations for defective crystals with 1-D and 2-D symmetry reductions to ordinary difference equations and partial difference equations in two independent variables, respectively. Solution techniques for the discrete governing equations are demonstrated through some examples for ferroelectric domain walls. This formulation of lattice statics is very similar to continuum mechanics, and we hope that developing this theory will be one step forward in doing lattice-scale calculations analytically.
One of the central objectives of multiscale materials modeling is to develop and apply simulation techniques to uncover atomic-level mechanisms without the significant limitations on system size and simulation time inherent to purely atomistic methods. In this context, I have made two contributions. I developed an interatomic potential finite element method (IPFEM) to study homogeneous dislocation nucleation by nano-indentation. The implementation of IPFEM facilitates simulations at length scales that are large compared to atomic dimensions, while remaining faithful to the nonlinear interatomic interactions. Aided by a shear localization criterion, which was also calculated from the interatomic potential, I was able to provide atomically accurate predictions about when, where and how a dislocation nucleates beneath a nanoindenter. My second contribution was to extend the time-scale of atomistic simulation of fracture by adopting several reaction pathway sampling schemes. I studied the thermally activated processes at a crack tip that control the brittle to ductile transitions in solids. Using the sampling scheme of the nudged elastic band method, atomistic pathways were identified that characterize dislocation loop emission in Cu, cleavage crack extension in Si, and water-assisted bond ruptures in a silica nanorod. The associated energetics and atomistic geometries were quantified, thus making contact with previous continuum analyses and experimental observations.
The uniaxial extensional viscosity is a fundamental material property of a fluid which characterizes the resistance of a material to stretching deformations. For microstructured fluids, this extensional viscosity is a function of both the rate of deformation and the total strain accumulated. Some of the most common manifestations of extensional viscosity effects in complex fluids are the dramatic changes they have on the lifetime of a fluid thread undergoing capillary breakup. In a pinching thread, viscous, inertial and elastic forces can all resist the effects of surface tension and control the 'necking' that develops during the pinch-off process. The dominant balance of forces depends on the relative magnitudes of each physical effect and can be rationalized by dimensional analysis. The high strains and very large molecular deformations that are obtained near breakup can result in a sharp transition from a visco-capillary or inertio-capillary balance to an elasto-capillary balance. As a result of the absence of external forcing the dynamics of the necking process are often self-similar and observations of this 'self-thinning' can be used to extract the transient extensional viscosity of the material. It can also lead to iterated dynamical processes that result in self-similar spatial structures such as a 'beads on a string' morphology. The intimate connection between the degree of strain-hardening that develops during free extensional flow and the dynamical evolution in the profile of a thin fluid thread is important in many industrial processing operations and is also manifested in heuristic concepts such as 'spinnability', 'tackiness' and 'stringiness'. Common examples encountered in every-day life include the spinning of ultra-thin filaments of silk by orb-weaving spiders, the stringiness of cheese, the drying of liquid adhesives, splatter-resistance of paints and the unexpectedly long life-time of strands of saliva.
The science-based methodologies we introduce students to first should be those that provide for tackling the broadest possible classes of problems. In this seminar I will review some of the historical antecedents that underlie most of our current undergraduate science-based curricula and then, as an example, introduce a means of introducing basic mechanics in a way that gives students access to a very broad class of problems of professional interest. As a demonstration that these methods can be used to solve problems of significant professional interest, I will show an application to the dynamics of a spinning satellite with long radial wire appendages.

Rough surface contact plasticity at microscale and nanoscale is of crucial importance in many new applications and technologies, such as nano-imprinting and nano-welding. The multiscale nature of surface roughness, the structural and size-sensitive material deformation behavior, and the importance of surface forces and other physical interactions give rise to very complex surface phenomena at small scales. We first show the pathological behaviors of contact models based on fractal roughness and continuum plasticity theory. A micromechanical model of surface steps under adhesive contact examines dislocation nucleation from surface sources and dislocation interaction underneath. The dislocation nucleation process is studied by both atomistic simulations and the Rice-Thomson model. Depending on interface adhesion, roughness features and slip planes, we have a variety of surface deformation behaviors, such as anisotropic hardening and latent softening. As a consequence, the rough surface contact at mesoscale leads to the formation of a dislocation double layer, which cannot be predicted by existing continuum and nonlocal plasticity theories. The micromechanical analysis of surface plasticity could serve as the connection between microscale bulk dislocation plasticity and nanoscale atomistic simulations.
0 comments
Environmental noise is a growing priority. Concerns about aviation noise have essentially halted airport expansion in large metropolitan areas, which has increased congestion, reduced safety margins, and impacted economic development. Community response to highway traffic noise is beginning to create the same conflict between economic development, safety, and environmental concerns for ground transportation systems. Environmental noise is a complex systems problem with significant sociological, physiological, and policy elements as well as engineering issues. Thus, environmental noise is not exclusively an engineering problem. However, engineering research is essential to the mitigation aspect of this issue and in some cases to the effects aspect as well. Engineering researchers make good partners and leaders of the multi-disciplinary teams that are required to find balanced solutions. Examples of the evolution of multi-disciplinary research teams for traffic noise and aviation noise will be described, with particular emphasis on the engineering research components of these problems.
0 comments
There are several growing needs for hydrogen: in the near-term for the upgrading of heavy petroleum, as a chemical feedstock, and for the production of fertilizer; in the mid-term for the production of synthetic fuels for transportation; and in the long-term as a transportation fuel through the use of fuel cells.



High-temperature electrolytic water-splitting supported by nuclear process heat and electricity has the potential to produce hydrogen with an overall system efficiency of 45 to 50 %, while avoiding the challenging corrosive conditions of the thermochemical processes.



A program is under way at INEEL to develop materials and techniques for high-temperature electrolytic production of hydrogen using solid-oxide cells. Solid-oxide fuel cells have been developed primarily for power production using hydrogen or hydrocarbons as a fuel.
0 comments
The wide deployment and application of automatic sensing devices and computer systems have resulted in temporally and spatially dense, data-rich environments, which bring new challenges in quality engineering. Data fusion, through the integration of engineering domain knowledge with data analysis techniques from advanced statistics, signal processing, decision making, and control, represents one of the frontiers in quality improvement research for complex systems. In this presentation, an overview of ongoing research activities in this emerging area will be presented. Examples of methodological developments and their applications will be discussed to demonstrate the characteristics of data fusion research and the need for multidisciplinary efforts. A detailed discussion will be given of a model-free multiscale process monitoring method for autocorrelated processes, which is demonstrated in solar cell manufacturing processes.
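The abstract does not spell out the monitoring method itself, so the sketch below is only a generic illustration of the multiscale idea (decompose the signal with a Haar wavelet transform and apply simple three-sigma limits at each scale); the function names, number of levels and limits are assumptions for illustration, not the speaker's algorithm.

import numpy as np

def haar_levels(x, levels=3):
    """Haar detail coefficients of a 1-D signal at several scales."""
    details, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        approx = approx[: len(approx) // 2 * 2]                    # keep even length
        pairs = approx.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))   # detail at this scale
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)          # coarser approximation
    return details

def scale_limits(in_control, levels=3, k=3.0):
    """Mean +/- k*sigma limits per scale, estimated from in-control data."""
    return [(d.mean() - k * d.std(), d.mean() + k * d.std())
            for d in haar_levels(in_control, levels)]

def out_of_control(window, limits, levels=3):
    """True for each scale whose detail coefficients violate the limits."""
    return [bool(np.any((d < lo) | (d > hi)))
            for d, (lo, hi) in zip(haar_levels(window, levels), limits)]

# Usage sketch: limits = scale_limits(historical_signal); flags = out_of_control(new_window, limits)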

0 comments
Due to innovations in sensor technology and the rising complexity of manufacturing processes, increasingly more sensors are distributed throughout manufacturing and production systems to maintain production performance, to ensure the life-cycle quality of products, and to improve the quality of management and service. This system-wide deployment of sensing devices is referred to as a distributed sensor system. Distributed sensor systems have resulted in a data-rich manufacturing environment and provide unprecedented opportunities for quality and productivity improvement. In order to realize the full potential of a distributed sensor system, several fundamental research issues need to be addressed, including quantification of a sensor system, information estimation using multi-sensor measurements, and information system optimization. In the context of a multi-station assembly process, this talk will discuss some recent work on diagnosability analysis for sensor system evaluation (information quantification), in-process variation estimators (information estimation), and sensor distribution in a multi-station process (information system optimization).
0 comments
This topic is devoted to the analysis of the mechanical behavior of filament-wound pipes. The filament winding process makes it possible to produce parts of revolution made of composite material, namely polymers reinforced with long fibers. Most of the time, this process consists of winding a fiber tow coated with a thermosetting polymeric matrix around a mandrel, covering the entire mandrel after successive passes. Following a polymerization phase, the mandrel can be removed, leaving only the composite structure. In certain cases, the mandrel can be left in place and is then used as a liner.

Filament winding is used for the manufacture of gas and fluid vessels, but one of its main uses is to produce pipes for fluid transport in the nuclear and oil industries, or for marine applications. Pipes made of composite materials offer good resistance to adverse environmental conditions. In fact, glass fibers are generally used for these applications. The loading of pipe structures is often complex and involves studying the mechanical behavior under multiaxial loading.
0 comments
A fast multilevel multipole (FMM) algorithm is derived for the Helmholtz equation and adapted to the symmetric Galerkin boundary element method (BEM) for acoustics. The FMM allows a matrix-vector product of the BEM to be evaluated with a computational cost of O(N log² N), thus leading to a significant reduction of computation time and memory requirements compared to standard BEM formulations with a computational cost of O(N²). This allows the simulation of large-scale acoustic models. The performance of the algorithm is demonstrated on the example of sound radiation from an L-shaped domain with BE discretizations of up to 10^5 elements. A coupling algorithm based on Lagrange multipliers is proposed for the simulation of structure-acoustic field interaction. Finite plate elements are coupled to the Galerkin boundary element formulation of the acoustic domain. The interface pressure is interpolated as a Lagrange multiplier, thus allowing the coupling of non-matching grids. The resulting saddle-point problem is solved by an approximate Uzawa-type scheme in which the matrix-vector products of the boundary element operators are evaluated efficiently by the fast multipole boundary element method. The algorithm is demonstrated on the example of a cavity-backed elastic panel.
0 comments
Ultrasonic methods offer great capabilities for solving various problems associated with structural engineering and engineering materials. In this seminar, a host of research works and studies in this field carried out by the speaker will be presented to demonstrate the broad spectrum of their applications in aerospace engineering. The focus will be on nondestructive evaluation (NDE) and characterization of aerospace materials such as composites. Topics to be presented in the seminar include (1) 3-D imaging of damage in composites, (2) echo detection using homomorphic deconvolution, (3) mechanical characterization of fiber-reinforced composites, (4) a ray tracing model for characterizing fiber waviness in thick composites, (5) real-time monitoring of damage evolution in ceramic-matrix composites, (6) monitoring of matrix cracking, (7) ultrasonic phased arrays and processing of array data, (8) synthetic phase tuning of Lamb waves, and (9) spectrotemporal analysis of dispersive waves.
0 comments
On the macro scale, feedback control is routinely applied to improve performance and enable new tasks in complex and uncertain systems operating in noisy environments. Our lab has focused on applying feedback control ideas to systems on the micro scale. Here we show how control methods can improve existing performance in the UCLA lab-on-a-chip electrowetting system and can create entirely new capabilities in our 'micro fluidic tweezers' cell steering devices. In the Electro-Wetting-On-Dielectric (EWOD) system developed at UCLA by CJ Kim, a grid of electrodes is used to locally change surface tension forces on liquid droplets. By choosing the electrode firing sequence it is possible to move, split, join, and mix liquids in the droplets. We present an experimentally validated 2-phase fluid flow model of the liquid dynamics, and then show the development of control algorithms validated on this model. Control ideas and real-time image algorithms are presented for controlling material points on the liquid/gas boundaries, for precision splitting of droplets, for steering of particles inside the droplets, and for dealing with external noise sources. For the second example, we show how feedback flow control can enable particle steering in cheap, handheld micro-fluidic systems using real-time vision feedback and routine electro-osmotic actuation. Here we create temporally and spatially varying flow fields that carry all the particles along their desired trajectories. We have demonstrated flow steering of many particles at once, both in simulations and in experiments. This system is being further developed to enable sample preparation (removing all but the interesting objects from the sample) and cell loading for a 'cell clinics' olfactory and bio-chemical sensing system at the University of Maryland.
0 comments
The Radiation Safety Information Computational Center (RSICC) is focused on collecting, organizing, and disseminating computational codes and nuclear data associated with radiation transport and safety. Established in 1963 as the Radiation Shielding Information Center, RSICC currently has a library of approximately 1700 code and data packages used for radiation source characterization, dosimetry, neutral- and charged-particle shielding, criticality safety, radiation dispersion modeling, and reactor physics. Although a large number of these software packages represent an archiving of historical information, approximately 2000 software packages are distributed each year because they represent current state-of-the-art software that is valuable for general- and special-purpose nuclear analyses. These software packages are widely distributed worldwide, especially to nuclear engineering students and professors.
0 comments
Nondestructive testing has become an essential tool for failure prevention, condition-based maintenance, and structural health monitoring, as well as for quality control during production processes. Ultrasound and acousto-optics are applied to examine the mechanical properties of isotropic and anisotropic materials, and also to detect material defects. Fiber reinforced composites are ubiquitous in many industries, yet the interaction of ultrasound with such materials is particularly challenging because of effects such as surface roughness, the fact that materials may in principle be triclinic instead of orthotropic, the presence of a pre-stress, piezoelectric effects, the existence of a coating, the finite dimensions of material parts, etc. Some important recent results will be presented. The presentation will focus on the interaction of ultrasound with multi-layered fiber reinforced composites and crystals, the propagation of ultrasound in piezoelectric materials that are subject to a pre-stress, and diffraction phenomena on 1D and 2D corrugated surfaces.
0 comments
As quality and Six Sigma excellence has become a decisive factor in global market competition, quality control techniques such as Statistical Process Control (SPC) and Engineering Process Control (EPC) are becoming popular in industries. With advances in information, sensing, and data capture technology, large volumes of data are being routinely collected and shared over multiple-stage processes, which has a growing impact on the existing SPC and EPC methods. This talk will discuss several technical challenges and present some recent extensions in this area such as profile monitoring, engineering-controlled process monitoring, and multistage SPC.
0 comments
This talk introduces the concept of the AMOEBA ORGANIZATION, based on adaptability, which is the key to business success today; yet many organizations are too rigidly organized to adapt to constant change and seize new opportunities. Modern organizations are lengthening their life spans by reshaping internal systems for flexibility, modernizing their cultures, and monitoring the ever-changing environments in which they operate.
0 comments
It is shown that several (single-mode) models (PTT, XPP) are basically special cases of the General Network model. The XPP model of Verbeeten et al. (2001) is shown to give an identical response to a PTT model at high elongational rates, but the models differ in shear flow at high shear rates. The Giesekus term in the XPP model actually has no effect at any deformation rate except for generating a second normal stress difference at small shear rates. The possibility of tailoring the shear flow response at high shear rates is explored. Results for the models are also given to show the effect of branching on the response in both transient and steady flows: elongation, shear, planar elongation and biaxial deformation. Finally, the fitting of data with multi-mode models is investigated.
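For orientation only, a hedged reminder of the standard single-mode linear PTT form (written here with the upper-convected derivative; the full model allows a Gordon-Schowalter derivative, and the specific variants compared in the talk may differ):

\[
f(\mathrm{tr}\,\boldsymbol{\tau})\,\boldsymbol{\tau} + \lambda\,\overset{\nabla}{\boldsymbol{\tau}} = 2\,\eta_p\,\mathbf{D},
\qquad
f(\mathrm{tr}\,\boldsymbol{\tau}) = 1 + \frac{\varepsilon\,\lambda}{\eta_p}\,\mathrm{tr}\,\boldsymbol{\tau},
\]

where \(\mathbf{D}\) is the rate-of-deformation tensor, \(\eta_p\) the polymer viscosity, \(\lambda\) the relaxation time and \(\varepsilon\) the extensibility parameter.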
0 comments
WPI has a project-based curriculum that is well suited to the teaching of design. Open-ended design projects are assigned in courses from freshman to senior level, including a year-long senior thesis. A large fraction of these projects are sponsored by industry and thus are real engineering problems. In some cases the students' designs have been prototyped and delivered to the sponsor to be put into use in their laboratories or production facilities. This talk will describe the design education program at WPI and present examples of the types of projects that we assign to students at various levels. It will conclude with some general recommendations for the teaching of mechanical engineering design.
0 comments
Mechanical cutting of visco-elastic polymers is experimentally investigated using sharp knives. The knife is aligned orthogonally to the substrate's surface and is forced into the substrate. Two cutting scenarios are investigated: (i) plunge cutting, where the knife edge moves directly into the substrate, and (ii) slicing, where the knife shears tangentially as it moves into the substrate. For slicing, fracture lines emanating from the cut are observed, and the threshold forces for cutting are considerably smaller, indicating that the two cutting processes are very different.
0 comments
With future threats emerging and the overall needs of the military changing, the armor mechanics community is always exploiting new materials and technologies to stay at the forefront of defense capabilities. Along these lines, this presentation will cover some of the research conducted at the Army Research Laboratory (ARL) and will be given in two parts. The first part will cover recent work completed on the ballistic testing of two silicon carbides with varying porosity distributions. The second part will give a brief introduction to the work conducted by the Armor Mechanics Branch at ARL.
0 comments
A new research thrust in the analysis of complex systems has been created, and an undergraduate curriculum has been reinvigorated. This has been accomplished by implementing a model of scholarship put forth by Boyer in his work 'Scholarship Reconsidered'. Information technology is the proving ground for the scholarship of integration and, by consequence, a catalyst for new teaching, research, and service efforts. While the standardization of legacy software places boundaries on disciplines, the utilization of information technologies at a fundamental level enriches the investigations of the scholar. This presentation will connect Boyer's four models of scholarship to the development of a new software approach to analyzing complex systems using information technologies.
0 comments
Although grinding is considered one of the more established manufacturing processes, it is still relatively poorly understood. In particular, the interaction of the wheel with the workpiece at times seems more like an art than a science. In this presentation, a 3-D topographic measurement approach which encompasses both the wheel and the workpiece will be presented. The topographic information obtained from the wheel will be applied to a model to predict the workpiece surface generated, which will then be verified by experimentation. The preliminary results demonstrate a good correlation between the predicted and actual surfaces generated. Further research is now underway to test the robustness of the model for a variety of wheel types and materials.
0 comments
Novel shape memory effect (SME) and pseudoelastic behavior are discovered in single-crystalline Au, Cu, and Ni nanowires with lateral dimensions of 1.5-10 nm. Under tensile loading and unloading, these wires can recover elongations of up to 50%, well beyond the recoverable strains of 5-8% typical for most bulk shape memory alloys (SMAs). Results of atomistic simulations and evidence from experiments show that this phenomenon exists only at the nanometer scale and is associated with a reversible crystallographic lattice reorientation driven by the high surface-stress-induced internal stresses at the nanoscale. This understanding also explains why these metals do not show an SME at macroscopic scales.
0 comments
Rheology is the study of the deformation and flow of matter under the influence of an applied stress. The term was coined by Eugene Bingham, a professor at Lehigh University, in 1920, from a suggestion by a colleague, Markus Reiner. The term was inspired by Heraclitus's famous expression panta rhei, 'everything flows'. Rheology unites the seemingly unrelated fields of plasticity and non-Newtonian fluids by recognising that both these types of materials are unable to support a shear stress in static equilibrium. In this sense, a plastic solid is a fluid. Granular rheology refers to the continuum mechanical description of granular materials. One of the tasks of rheology is to empirically establish the relationships between deformations and stresses, respectively their derivatives, by adequate measurements. These experimental techniques are known as rheometry. Such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics. Rheology has important applications in engineering, geophysics and physiology. In particular, hemorheology, the study of blood flow, has enormous medical significance. In geology, solid Earth materials that exhibit viscous flow over long time scales are known as rheids. In engineering, rheology has had its predominant application in the development and use of polymeric materials (plasticity theory has been similarly important for the design of metal forming processes, but in the engineering community is often not considered a part of rheology).
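As a minimal illustration of the kind of constitutive relation that rheometry seeks to establish (a textbook example, not tied to any particular material mentioned here), the power-law model for steady shear reads

\[
\tau = K\,\dot{\gamma}^{\,n},
\qquad
\eta(\dot{\gamma}) = \frac{\tau}{\dot{\gamma}} = K\,\dot{\gamma}^{\,n-1},
\]

with \(n<1\) for shear-thinning fluids, \(n>1\) for shear-thickening fluids, and \(n=1\) (with \(K=\eta\)) recovering a Newtonian fluid.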
0 comments
Diamond is the hardest material known to mankind. When used on tools, diamond grinds away material at the micro (nano) level. Diamond is the hardest substance known and is given a value of 10 on the Mohs hardness scale, devised by the German mineralogist Friedrich Mohs to indicate the relative hardness of substances on a rating scale from 1 to 10. In every diamond, hardness varies with the crystallographic direction. Moreover, hardness on the same face or surface varies with the direction of the cut.

Diamond crystallizes in different forms. Eight and twelve sided crystal forms are most commonly found. Cubical, rounded, and paired crystals are also common. Crystalline diamonds always separate cleanly along planes parallel to the faces. The specific gravity for pure diamond crystals is almost always 3.52. Other properties of the diamond are frequently useful in differentiating between true diamonds and imitations: Because diamonds are excellent conductors of heat, they are cold to the touch; Most diamonds are not good electrical conductors and become charged with positive electricity when rubbed; Diamond is resistant to attack by acids or bases; Transparent diamond crystals heated in oxygen burn at about 1470° F, forming carbon dioxide.
0 comments
The future of fuel-cell vehicles is already happening in an unlikely proving ground: forklifts used in warehouses. Several manufacturers are testing forklifts powered by a combination of fuel cells and batteries -- and finding that these hybrids perform far better than the lead-acid battery systems now typically used. In some situations, in fact, they could pay for themselves in cost savings and added productivity within two or three years.

The adoption of the technology points to a promising hybrid strategy for finally making fuel cells economically practical for all sorts of vehicles. While researchers have speculated for years that hydrogen fuel cells could power clean, electric vehicles, cutting emissions and decreasing our dependence on oil, manufacturing fuel cells big enough to power a car is prohibitively expensive -- one of the main reasons they are not yet in widespread use. But by relying on batteries or ultracapacitors to deliver peak power loads, such as for acceleration, fuel cells can be sized as much as four times smaller, slashing manufacturing costs and helping to bring fuel cell-powered vehicles to market.

The forklift hybrids use ultracapacitors, devices similar to batteries but able to deliver higher bursts of power. The fuel cell powers the forklift as it drives through a warehouse, while at the same time the cell charges the ultracapacitors. The ultracapacitors kick in to lift a pallet.

'If you had to do that with just fuel-cell power, you'd need a fuel cell about four times as large, which would be too big,' says Michael Sund, spokesperson for Maxwell Technologies, an ultracapacitor manufacturer. 'It would dwarf the forklift, and it would also be very expensive. Being able to downsize the fuel cell makes it smaller, lighter, and cheaper.'

The use of the fuel-cell hybrids in forklifts could bode well for the auto industry. Cars and SUVs, like forklifts, have peak power demands. When cruising, they use less than one-quarter of an engine's maximum power, which is sized to provide acceleration and sustained power up long hills, says Brian Wicke, who's developing fuel-cell systems at GM.

Batteries and ultracapacitors could provide at least some of the accelerating power, allowing the fuel cell to be smaller. Last year, GM rolled out a concept car featuring a hybrid system, although it will be after the end of the decade before such a vehicle is available. Other major automakers are also pursuing the hybrid technology.

In addition to supplying peak power, ultracapacitors and batteries give fuel-cell vehicles the ability to recapture energy from braking, as happens now with commercial gasoline-battery hybrid vehicles. This can make the system much more efficient, especially in applications such as city driving. A vehicle powered by a fuel cell alone would not have this ability.

'You can't take energy into a fuel cell. You've got to have a battery,' says Brian Barnett at Tiax in Cambridge, MA, a company that has provided analyses of fuel cells for the U.S. Department of Energy. 'Why you would put an electric drive train system on the road, and not have the ability to accept regenerative braking is beyond me.'
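A back-of-the-envelope sketch of the sizing argument made in the article: if the fuel cell only has to cover the sustained (cruising) load while ultracapacitors supply the difference during peaks, the cell can be several times smaller than one sized for the peak. The peak power figure below is an arbitrary placeholder; only the one-quarter cruising fraction comes from the article.

# Rough hybrid sizing sketch (illustrative numbers only).
peak_power_kw = 80.0                             # assumed peak demand (acceleration / lifting)
cruise_fraction = 0.25                           # article: cruising uses under 1/4 of maximum power
fuel_cell_kw = peak_power_kw * cruise_fraction   # cell sized for the sustained load
ultracap_kw = peak_power_kw - fuel_cell_kw       # peaks covered by ultracapacitors

print(f"fuel cell: {fuel_cell_kw:.0f} kW, ultracaps cover {ultracap_kw:.0f} kW of peak")
print(f"fuel cell is {peak_power_kw / fuel_cell_kw:.0f}x smaller than a peak-sized cell")

With these assumptions the cell comes out four times smaller, consistent with the factor quoted above.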


0 comments
This seminar will review the state of the art of active control of gas turbine combustor processes. The seminar will first discuss recently developed approaches for active control of detrimental combustion instabilities by use of 'fast' injectors that modulate the fuel injection rate at the frequency of the instability with appropriate phase and gain. Next, it will discuss two additional approaches for damping combustion instabilities: active modification of the combustion process characteristics, and open-loop modulation of the fuel injection rate at frequencies that differ from the instability frequency. The second part of the seminar will discuss active control of lean blowout in combustors that burn fuel in a lean premixed mode of combustion to reduce NOx emissions. This discussion will describe recent developments in optical and acoustic sensing techniques that employ sophisticated data analysis approaches to detect the presence of lean blowout precursors in the measured data. It will be shown that this approach can be used to determine in advance the onset of lean blowout and that the problem can be prevented by active control of the relative amounts of fuel supplied to the main, premixed, combustion region and a premixed pilot flame. The seminar will close with a discussion of research needs, with emphasis on the integration of active control and health monitoring and prognostication systems into a single combustor control system.
0 comments
The typical future-tech scenario calls for millions of low-powered radio frequency devices scattered throughout our environment -- from factory-floor sensor arrays to medical implants to smart devices for battlefields.





Because of the short and unpredictable lifespans of chemical batteries, however, regular replacements would be required to keep these devices humming. Fuel cells and solar cells require little maintenance, but the former are too expensive for such modest, low-power applications, and the latter need plenty of sun.


A third option, though, may provide a powerful -- and safe -- alternative. It's called the Direct Energy Conversion (DEC) Cell, a betavoltaics-based 'nuclear' battery that can run for over a decade on the electrons generated by the natural decay of the radioactive isotope tritium. It's developed by researchers at the University of Rochester and a startup, BetaBatt, in a project described in the May 13 issue of Advanced Materials and funded in part by the National Science Foundation.


Because tritium's half-life is 12.3 years (the time in which half of its radioactive energy has been emitted), the DEC Cell could provide a decade's worth of power for many applications. Clearly, that would be an economic boon -- especially for applications in which the replacement of batteries is highly inconvenient, such as in medicine and oil and mining industries, which often place sensors in dangerous or hard-to-reach locations.
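A quick sanity check of the lifetime claim, using only the 12.3-year half-life quoted above; the absolute power level is irrelevant here, only the remaining fraction matters.

# Fraction of tritium (and hence, roughly, of beta power) remaining after t years.
HALF_LIFE_YEARS = 12.3

def remaining_fraction(t_years):
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

for t in (1, 5, 10, 12.3, 20):
    print(f"after {t} years: {remaining_fraction(t):.0%} of the initial output")

After ten years roughly 57% of the original output is still available, which is why a decade of useful service is plausible for low-power sensors.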


'One of our main markets is for remote, very difficult to replace sensors,' says Larry Gadeken, chief inventor and president of BetaBatt. 'You could place this [battery] once and leave it alone.'


Betavoltaic devices use radioisotopes that emit relatively harmless beta particles, rather than more dangerous gamma photons. They've actually been tested in labs for 50 years -- but they generate so little power that a larger commercial role for them has yet to be found. So far, tritium-powered betavoltaics, which require minimal shielding because tritium's beta particles cannot penetrate human skin, have been used to light exit signs and glow-in-the-dark watches. A commercial version of the DEC Cell will likely not have enough juice to power a cell phone -- but plenty for a sensor or pacemaker.


The key to making the DEC Cell more viable is increasing the efficiency with which it creates power. In the past, betavoltaics researchers have used a design similar to a solar cell: a flat wafer is coated with a diode material that creates electric current when bombarded by emitted electrons. However, all of the emitted electrons except those that shoot down toward the diode are lost in that design, says University of Rochester professor of electrical and computer engineering Phillipe Fauchet, who developed the more-efficient design based on Gadeken's concept.


The solution was to expose more of the reactive surface to the particles by creating a porous silicon diode wafer sprinkled with one-micron wide, 40 micron-deep pits. When the radioactive gas occupies these pits, it creates the maximum opportunity for harnessing the reaction.
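To see why the pitted wafer helps, here is a hedged geometric estimate: each cylindrical pit of the stated size (one micron wide, 40 microns deep) trades a tiny disc of flat surface for a much larger side-wall area. The pit density is an assumption chosen for illustration; the article does not give one.

import math

pit_diameter_um = 1.0       # from the article
pit_depth_um = 40.0         # from the article
pits_per_mm2 = 200_000      # assumed density, illustration only

disc_area = math.pi * (pit_diameter_um / 2) ** 2                 # flat disc removed per pit
pit_area = math.pi * pit_diameter_um * pit_depth_um + disc_area  # pit wall plus bottom

flat_mm2_in_um2 = 1e6
total_gain = (flat_mm2_in_um2 + pits_per_mm2 * (pit_area - disc_area)) / flat_mm2_in_um2
print(f"each pit exposes ~{pit_area / disc_area:.0f}x the area of the disc it replaces")
print(f"at the assumed density the reactive area grows ~{total_gain:.0f}x overall")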


As importantly, the process is easily reproducible and cheap, says Fauchet -- a necessity if the DEC Cell is to be commercially viable.


The fabrication techniques may be affordable, but the tritium itself -- a byproduct of nuclear power production -- is still more expensive than the lithium in your cell-phone battery. The cost is less of an issue, however, for devices designed specifically to collect hard-to-get data.



Cost is only one reason why Gadeken says he will not pursue the battery-hungry consumer electronics market. Other issues include the regulatory and marketing obstacles posed by powering mass-market devices with radioactive materials and the large battery size that would be required to generate sufficient power. Still, he says, the technology might some day be used as a trickle-recharging device for lithium-ion batteries.





Instead, his company is targeting market sectors that need long-term battery power and have a comfortable familiarity with nuclear materials.


'We're targeting applications such as medical technology, which are already using radioactivity,' says Gadeken.


For instance, many implant patients continue to outlive their batteries and require costly and risky replacement surgery.


Eventually, Gadeken hopes to serve NASA as well, if the company can find a way to extract enough energy from tritium to power a space-faring object. Space agencies are interested in safer and lighter power sources than the plutonium-powered Radioisotope Thermal Generators (RTG) used in robotic missions, such as Voyager, which has an RTG power source that is intended to run until around 2020.


Furthermore, a betavoltaics power source would likely alleviate environmental concerns, such as those voiced at the launch of the Cassini satellite mission to Saturn, when protestors feared that an explosion might lead to fallout over Florida.


For now, though, Gadeken hopes to interest the medical field and a variety of niche markets in sub-sea, sub-surface, and polar sensor applications, with a focus on the oil industry.


And the next step is to adapt the technology for use in very tiny batteries that could power micro-electro-mechanical Systems (MEMS) devices, such as those used in optical switches or the free-floating 'smart dust' sensors being developed by the military.


In fact, another betavoltaics device, under development at Cornell University, is also targeting the MEMS power market. The Radioisotope-Powered Piezoelectric Generator, due in prototype form in a few years, will combine a betavoltaics cell with a tritium-powered electromechanical cantilever device first demonstrated in 2002.


Amit Lal, one of the Cornell researchers, offers both praise and cautious skepticism about the DEC Cell. While he's impressed with the power output from the DEC Cell, he said that there are still issues with power leakage. To avoid those potential leakage problems, Cornell is using a slightly larger-scale wafer design. They're also planning to move to a porous design and either solid or liquid tritium to improve efficiency.


Lal also notes that the market for either Cornell's device or the DEC Cell might be squeezed by newer, longer-lasting lithium batteries. Still, there's a niche for very small devices, he believes, especially those that must run longer than ten years.
0 comments
Most metal parts are manufactured by either fully-liquid (e.g., casting) or fully-solid (e.g., forging) processes. Semi-Solid Metalworking (SSM) incorporates elements of both casting and forging for the manufacture of near-net-shape discrete parts. Applications include fuel rails, suspension arms, engine brackets, steering knuckles, rear axle components, and motorcycle upper fork plates.





SSM casting was selected for each of these applications for different reasons: high integrity, pressure tightness, and design simplification. In each case, SSM processing provides several unique advantages over other candidate processes.





The process capitalises on thixotropy, a physical state wherein a solid material behaves like a fluid when a shear force is applied. The SSM process requires a nondendritic feedstock that can be produced by applying mechanical or electromechanical stirring during alloy solidification at a controlled rate, or from fine-grained materials produced by powder metallurgy or spray forming methods. This feedstock, usually in billet form, is then heated to a temperature between its solidus and liquidus and formed in dies to make near-net-shape parts.
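As a very rough sketch of the 'between solidus and liquidus' step, a linear lever-rule estimate of the solid fraction at the forming temperature is shown below. This is an idealization (real alloys call for measured phase-diagram or Scheil-type data), and the temperatures are placeholder values rather than data for any particular alloy.

def solid_fraction_lever(T, T_solidus, T_liquidus):
    """Linear lever-rule estimate of the solid fraction between solidus and liquidus."""
    if T <= T_solidus:
        return 1.0
    if T >= T_liquidus:
        return 0.0
    return (T_liquidus - T) / (T_liquidus - T_solidus)

# Placeholder temperatures in deg C, purely for illustration.
print(solid_fraction_lever(T=595.0, T_solidus=555.0, T_liquidus=615.0))   # ~0.33 solid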
0 comments
Previously, all ventures into space were achieved with giant rockets which, after a certain amount of time, were directed back into the earth's atmosphere to be reduced to a cinder by the enormous heat of re-entry. After the crew and their capsule had been ejected, virtually all of that tremendously expensive equipment was destroyed after only one use.





Following are the main supporting systems of a space shuttle:

1. Propulsion system
2. External fuel tank
3. Space shuttle orbiter
0 comments
Environmental priorities are forcing chlorofluorocarbons out of the marketplace, thereby creating a new demand for cooling technologies. This paper presents the outcome of research done to design, construct, and evaluate a CFC-free refrigeration system. A prototype thermoacoustic cooling system utilizing high-amplitude sound waves in an inert gas is discussed.





Thermo Acoustic Engines


A thermoacoustic engine converts some heat from a high-temperature heat source into acoustic power, rejecting waste heat to a low-temperature heat sink. The arrangement is very similar in appearance to a heat engine. The apparatus absorbs heat per unit time Qh from the heat source at high temperature Th, rejects heat per unit time Qc to a heat sink at low temperature Tc, and produces acoustic power W. According to the first law of thermodynamics, W + Qc = Qh. The second law of thermodynamics shows that the efficiency W/Qh is bounded by the Carnot efficiency (Th - Tc)/Th.
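A small numeric check of the two statements above; the temperatures and heat input are illustrative placeholders, not values for the prototype.

# First-law bookkeeping and the Carnot bound for a thermoacoustic engine.
Th, Tc = 700.0, 300.0        # assumed source / sink temperatures in kelvin
Qh = 100.0                   # assumed heat input per unit time, in watts

eta_carnot = (Th - Tc) / Th  # second-law limit on W / Qh
W_max = eta_carnot * Qh      # best possible acoustic power
Qc_min = Qh - W_max          # first law: W + Qc = Qh

print(f"Carnot bound on efficiency: {eta_carnot:.0%}")
print(f"at that limit: W = {W_max:.0f} W, Qc = {Qc_min:.0f} W")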


One of the most important scales in a thermoacoustic device is the length of its resonator, which determines the operating frequency, just as the length of an organ pipe determines its pitch. The gas sound speed is also an important parameter. With both ends of the resonator closed, the lowest resonant mode is the one that fits a half wavelength within the resonator.
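A hedged illustration of the half-wavelength statement: for a resonator closed at both ends, the lowest mode fits half a wavelength, so the operating frequency is the sound speed divided by twice the resonator length. Both numbers below are assumptions for illustration.

# Lowest resonant frequency of a closed-closed resonator: f = c / (2 * L).
sound_speed = 1000.0   # m/s, roughly helium near room temperature (assumption)
length = 0.5           # m, assumed resonator length

print(f"lowest resonant mode: {sound_speed / (2.0 * length):.0f} Hz")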
0 comments
Nanotechnology needs nanorobotic manipulation systems for nano-instrumentation, fabrication, and assembly, since it requires 3D manipulation in the nanoscale world. Nanorobotic manipulation requires dexterous manipulation under SEM and TEM environments. This lecture will give an overview of such systems for handling carbon nanotubes (CNTs) and show how to create nano-sensors and nano-actuators from CNTs and a nano servo system.