Thursday, September 30, 2010

Project for BE/MBA Students - Development of HR-E Induction Program

To develop an HR-E Induction application with the functions below (a hedged data-model sketch follows the list).

1) The administrator should be able to maintain the user master, add new employees, schedule a timeframe for taking the program, and reset passwords
2) Send an automated welcome message to all new users
3) Track each employee's completion of the program
4) Generate completion/status reports with assessment details
5) Send reminders to the employee/administrator about incomplete/pending induction
6) Sessions
7) The program administrator should be able to contact all/selected employees through
8) Email query submissions to be received by both the HR administrator and the subject area expert, i.e. the employee who created the relevant PPT
9) In order to move to the successive session, the new joiner must score 100% on
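A hedged sketch of how requirements 1, 3 and 5 might hang together is given below in Python. All class names, fields and the pending_reminders helper are illustrative assumptions rather than part of the specification.

from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical records for the HR-E induction tracker (names are illustrative only).
@dataclass
class Employee:
    emp_id: str
    email: str
    joined_on: date

@dataclass
class InductionRecord:
    employee: Employee
    deadline: date                      # timeframe scheduled by the administrator
    completed_sessions: List[str] = field(default_factory=list)
    required_sessions: List[str] = field(default_factory=lambda: ["S1", "S2", "S3"])

    def is_complete(self) -> bool:
        return set(self.required_sessions) <= set(self.completed_sessions)

def pending_reminders(records: List[InductionRecord], today: date) -> List[str]:
    """Return reminder messages for employees whose induction is incomplete."""
    messages = []
    for rec in records:
        if not rec.is_complete() and today <= rec.deadline:
            missing = sorted(set(rec.required_sessions) - set(rec.completed_sessions))
            messages.append(
                f"Reminder to {rec.employee.email}: sessions pending {missing} "
                f"(deadline {rec.deadline.isoformat()})"
            )
    return messages

if __name__ == "__main__":
    emp = Employee("E001", "new.joiner@example.com", date(2010, 9, 1))
    rec = InductionRecord(emp, deadline=date(2010, 10, 15), completed_sessions=["S1"])
    for msg in pending_reminders([rec], date(2010, 9, 30)):
        print(msg)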

Wednesday, September 29, 2010

Seminar Topics for Engineering Students

1. Micro Fabrication Techniques Of Microelectromechanical Systems
2. Virtual Manufacturing
3. Robotics Application
4. Fiber Reinforced Plastic
5. Smart Material
6. Diesel Locomotive
7. Recent Metrological Machines
8. Global Warming Its Effects On Climate And Mitigation Techniques.
9. Reverse Engineering
10. Rapid Prototyping And Rapid Tooling
11. Laser Machining
12. Artificial Intelligence
13. Biomedical Materials
14. Nanotechnology
15. Biodiesel
16. Rapid Prototyping & Rapid Tooling
17. Fuel Cells
18. Hybrid Vehicles
19. Microwave Processing Of Materials
20. Robotic Applications In Arc Welding
21. Recent Advancements In Metrology - Mangesh Landage
22. Nuclear Power - Promises - Shirish Atri
23. Molten Carbonate Fuel Cells & Micro Fuel Cells - Mahesh
24. Global Warming
25. Gliding And Landing
26. Design Optimization

Tuesday, September 28, 2010

Why Virtual Manufacturing Now?

Why virtual manufacturing now? Perhaps the best answer to this question is that the very nature of simulation is the search for more information. Every simulation acts as a point from which one can better view the possibilities and then ask the next question. That question generally requires a finer simulation, or more of them, and as soon as those are available, someone will ask for the "optimum" solution. The primary limitation today in reaching this optimum solution is problem size. The needs of companies for faster solutions, for better, more refined and more accurate simulations, and now for virtual manufacturing simulations, lead to an unquenchable demand for more computational power, and the computer industry is delivering on that demand.

Recently, the rapid diffusion of electronic commerce has encouraged the emergence of new types of enterprises and new value chains, and most existing companies are trying to adjust promptly to this EC environment. However, small and medium manufacturers cannot adjust properly to the new environment because they are short of money, personnel and technology. To cope with this problem and to strengthen the sales power of small and medium manufacturers, we have focused on virtual manufacturing systems.

Computer Industry Maturing
In the past, simulations such as these were limited to the largest of companies possessing the largest of computers. That is no longer the case. Today, all of our analysis and graphical products operate on workstations that are readily available from a number of manufacturers running any of the popular operating systems. Increasingly, the single most important factor in determining which computer you choose is simply "How fast do you want your answers?"
Parallel Processing
Parallel processing combines the resources of many CPUs, or of entire machines, and applies them to a single virtual manufacturing simulation so that large problems can be solved in a practical amount of time.
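As a hedged illustration (not taken from the original article), the Python sketch below uses the standard multiprocessing module to spread many independent simulation cases across the available CPUs; simulate_case is an invented stand-in for a real manufacturing simulation.

from multiprocessing import Pool

def simulate_case(params):
    """Stand-in for one virtual-manufacturing simulation run."""
    feed_rate, depth_of_cut = params
    # A toy cost model; a real simulation would be far more detailed.
    return feed_rate * depth_of_cut ** 2

if __name__ == "__main__":
    cases = [(f, d) for f in (0.1, 0.2, 0.3) for d in (1.0, 1.5, 2.0)]
    with Pool() as pool:                      # one worker per available CPU by default
        results = pool.map(simulate_case, cases)
    best = min(zip(results, cases))
    print("lowest-cost case:", best[1], "cost:", best[0])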

Wednesday, September 22, 2010

ECO-FRIENDLY REFRIGERATION SYSTEMS AND CRYOGENICS

ABSTRACT

The earth's ozone layer in the upper atmosphere is needed for the absorption of harmful ultraviolet rays from the sun; these rays can cause skin cancer. CFCs have been linked to the depletion of this ozone layer. They have varying degrees of Ozone Depletion Potential (ODP). In addition, they act as greenhouse gases and hence have Global Warming Potential (GWP) as well. Under the Montreal Protocol, fully halogenated CFCs (those with no hydrogen at all in the molecule), which are considered to have high ODP, viz. R11, R12, R113, R114 and R502, were phased out by the year 2000 AD. R22, which is an HCFC, was not covered under the original Montreal Protocol as its ODP is only 5% of that of R12, but because of its GWP it is to be phased out by the year 2030 AD.
Thus, for the sake of the environment, efforts are now being directed at developing eco-friendly alternative refrigeration systems, and the search for alternative refrigerants has become necessary. Some alternatives are R123, R125, R-134a, R407a, R408, R143a, R152a and R32. Among these, R-134a is the best alternative to R-12; however, the application of R-134a is not without problems. Another attractive alternative is R152a: its GWP is lower than that of R-134a and it also has lower energy consumption. Hence the Environment Protection Agency of Europe prefers R152a over R-134a, to the extent that broader use of conventional refrigerants, including ammonia and carbon dioxide, is considered an additional alternative.
Liquefied gases (e.g. nitrogen) can also be used as eco-friendly alternative refrigerants, because they have neither ODP nor GWP. Stirling cryocooler refrigerators, used for small units, are also to be considered.

INTRODUCTION

Refrigeration and air conditioning play an important role in modern human life. The accelerated technical development and economic growth of most countries during the last century have produced severe environmental problems. We now recognize that man-made products contributing to human comfort can, as a side effect, harm the environment through ozone depletion and global warming, and these concerns bear directly on the field of refrigeration and air conditioning.
Chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) are widely used as refrigerants. They possess most of the desirable characteristics, such as thermal and chemical stability, thermodynamic suitability, non-toxicity, non-flammability, material compatibility and low cost. CFCs and HCFCs are now being regulated because of ozone depletion. Refrigerants such as R11, R12, R113, R114 and R502 have to be phased out by the year 2030 AD.

WHY ALTERNATE REFRIGERANTS ?
CFCs have dominated the refrigeration industry because of their favorable thermodynamic and transport properties. There is no denying that these are still the best refrigerants from the performance point of view; still, we are talking of replacing them. Recent studies have established that CFCs are instrumental in depleting the ozone layer in the stratosphere, which lies about 20 km above the earth, and that they also contribute to global warming. The ozone layer is needed for the absorption of harmful ultraviolet rays from the sun. These rays can cause health hazards to humans and other living creatures, and are linked to other ecological and environmental problems such as global warming, sea level rise, increased floods, droughts and storms. A hole in the ozone layer was detected for the first time in 1956. Recent studies show that this hole has expanded.

Mechanism of ozone layer depletion:
As CFC molecules reach the stratosphere they are dissociated by sunlight into active chlorine compounds which attack ozone, generating a chain reaction. A single chlorine atom released from a CFC molecule can destroy on the order of 100,000 molecules of ozone.
The reactions are as follows:
CCl2F2 → CClF2 + Cl
Cl + O3 → ClO + O2
ClO + O → Cl + O2
O + O3 → 2 O2 (net)

CFCs AND GLOBAL WARMING
The greenhouse effect refers to the trapping of infrared radiation by the atmosphere and the consequent warming of the earth. Although the greenhouse effect is primarily due to carbon dioxide, and the concentration of CFCs is very low compared to CO2, CFCs absorb strongly in the infrared region, particularly at wavelengths between 7 and 13 microns, where the atmosphere is largely transparent. This absorption is due to the C-Cl and C-F bonds present in CFCs. If greenhouse gases continue to be emitted at the present rate, the average global temperature will increase. This would melt the large polar ice caps, submerging coastal areas, and would also lead to ecological imbalance.

MONTREAL PROTOCOL:
The Montreal Protocol (MP) on substances that deplete the ozone layer was established to phase out the consumption and production of ozone depleting substances (ODS) within a specified time frame for both developed and developing countries, and it is reviewed from time to time based on advice from the Scientific, Technology and Economic Assessment Panels. The Montreal Protocol does not address non-ODS substances. The original control schedule called for a 50% reduction in the emission of CFCs by the year 1998 compared to 1986 levels, with a grace period of 10 years for developing countries. The protocol has since been adjusted, and many more ozone depleting chemicals were added in the period 1990-1995.

KYOTO PROTOCOL:
The Kyoto Protocol (KP), adopted at the third Conference of the Parties to the Framework Convention on Climate Change in Kyoto in 1997, places HFCs together with CO2, N2O, CH4, PFCs and SF6 in the basket of controlled substances. The Kyoto Protocol aims at the reduction and control of greenhouse gas (GHG) emissions, and its commitments are obligatory only for developed countries; developing countries have no such commitments.
Although CFCs and HCFCs also contribute to global warming, the Kyoto Protocol does not address these substances, since they are already controlled under the Montreal Protocol. The Montreal and Kyoto Protocols are therefore interconnected.

ALTERNATIVES TO CFCS:
All around the world there is a tremendous effort to develop CFC alternatives. Any alternative has to possess all the usual desirable characteristics of a refrigerant. In addition, it should have zero Ozone Depletion Potential (ODP), relatively low Global Warming Potential (GWP), and it should not be a volatile organic compound.
Hydrocarbons (HCs) and hydrofluorocarbons (HFCs) provide alternatives to fully halogenated CFC refrigerants. They have zero ODP. However, HFCs still have appreciable GWP, so they too will ultimately have to be phased out; until then, they can be used as transitional refrigerants.

SUBSTITUTES FOR CFC 12:
R-134a is considered the most preferred substitute for R12 in the USA. Its normal boiling point (N.B.P.) of -26.15 °C is quite close to R12's N.B.P. of -29.8 °C. One big disadvantage, however, is its relatively high GWP. The use of oil in R-134a systems requires very stringent quality control: R-134a is not soluble in mineral oil, and the polyester-based synthetic oil used with it must be totally dry. Moreover, it is non-reactive with copper-based materials of construction, winding enamel, etc.

HYDROCARBONS:
Hydrocarbons have zero ODP and negligible GWP.
Earlier searches for alternatives had excluded hydrocarbons because of their flammability. However, hydrocarbons are readily available, much cheaper and thermodynamically very suitable. The use of hydrocarbons in domestic refrigerators does increase the risk compared to CFCs, so possible means of minimizing that risk must be pursued.
A hydrocarbon refrigerator was tested and a safety mark certificate was issued stating that the product meets the safety requirements of the applicable Equipment Safety Law. The published reports on the performance of HC refrigerators have been encouraging, and no adverse effects with respect to safety have been reported so far.

R152a (DIFLUOROETHANE)
Another attractive alternative is R152a. Its GWP is low and it has zero ozone depletion potential; it also has lower energy consumption. Hence the Environmental Protection Agency of Europe prefers R152a over R134a. It is, however, flammable. Inconsistencies exist among the efficiency results reported by various investigators, but the results appear comparable to CFC-12 and HFC-134a. Comprehensive thermal stability information is not available for HFC-152a, but no significant issues have been identified to date.

ALTERNATIVE REFRIGERANTS TO R-502
R-502 is very widely used for low-temperature applications. R-502 is an azeotropic mixture with CFC-115 as one of its constituents, and thus it is to be phased out along with the other CFCs. The leading alternatives to R-502 are R-507, R-404A and R-407; these are HFC-based mixtures, mostly near-azeotropic, with a small temperature glide.
The performance of various alternatives to R-502 has been studied, and their relative performance is given in the table below.

Test refrigerant   Condensing Temp (°C)   Evaporating Temp (°C)   System Capacity (kW)   COP (relative to R-502)
R-502              40.6                   -33.4                   31.2                   1.00
R-507              40.6                   -35.8                   31.2                   0.92
R-404              40.6                   -36.8                   28.9                   0.90
R-407              40.6                   -43.0                   18.5                   0.80

ALTERNATIVE REFRIGERANTS TO HCFC-22
HCFC-22 is a very widely used refrigerant for both refrigeration and air conditioning, and it is also considered an alternative to CFCs for many applications. However, HCFCs are also controlled substances because of their ozone depletion potential. The alternative refrigerants to HCFC-22 are summarized in the following table.

Refrigerant   N.B.P. (°C)   Characteristics
R-23          -82           Critical temperature is too low
R-32          -51.7         Flammable
R-125         -48.1         High GWP, low theoretical efficiency
R-143a        -47.2         Flammable, high GWP
R-134a        -26.1         Low volumetric capacity
R-152a        -24           Flammable

As shown in the table, R-32 has good properties as a refrigerant, such as high latent heat and high thermal conductivity, but it is flammable and has a high vapour pressure.

AMMONIA AS A REFRIGERANT:
In future, the environmental impact and higher cost of CFCs or their alternatives will favor the use of ammonia. The COP of ammonia is 3% better than that of R-22 and 7% better than that of R-50. Ammonia's sensible heat capacity and conductivity are 4 to 5 times better than those of R-22 and R-12, and it is marginally less viscous. Its thermodynamic and thermophysical properties are nearly those of a perfect refrigerant. Ammonia's physical and transport properties are much better than those of R-134a, and it is recognized as a heat transfer fluid of a very high order.
Advantages:
Because of its low molecular weight, ammonia has a very high latent heat; thus a much lower mass flow is required to provide a given refrigeration effect. Ammonia has zero global warming potential and offers superior thermodynamic properties.
Disadvantages:
The low molecular weight of ammonia affects the pumping capacity required of the compression equipment.
Ammonia has a relatively high specific heat ratio, so the heat of compression results in very high vapour discharge temperatures, particularly at high pressure ratios.
One of its most serious drawbacks is ammonia's toxicity: it is dangerous in excessively high concentrations and is considered a hazardous chemical. Ammonia is also moderately flammable.


CRYOCOOLERS:
As CFC and HCFC compounds have to be phased out, a need has been created for replacement refrigerants or new cooling devices. The Stirling cycle, though originally invented as an effective prime mover, has become widely used in the cryogenic industry to obtain low temperatures and to produce liquefied gases. Since the cycle uses only an inert gas as its working fluid, the environmental issues concerning refrigerants are avoided altogether. It is well known that at low temperatures the Stirling cycle is superior to the vapour compression cycle from an efficiency standpoint.
The Stirling cycle was invented by the Scottish minister Robert Stirling for use in a hot-air engine. Attempts have been made by investigators to revive the Stirling cycle as a refrigeration device. The best overall performance achieved so far is about 35% of Carnot; with continued development it is expected that at least 40% of Carnot is easily within reach, and more than 50% may also be attained.
Efforts towards the development of long-life Stirling-cycle coolers for domestic refrigerators are necessary, and such coolers have tremendous potential as replacements for CFC-based refrigeration systems. Long-life coolers will also find a variety of applications, such as in hospitals, refrigerated transportation and electronics.

CONCLUSION:
As CFCs and HCFCs are to be phased out, the most suitable alternative refrigerants should be made available as early as possible. Various tests and investigations are being carried out, and property information is becoming readily available for the new refrigerants.
Further research is needed to find the best alternative refrigerants. The future refrigerant should be low in cost, available early, high in COP, non-flammable, non-toxic and, most importantly, have low GWP and zero ODP.

GENERALITY IN ARTIFICIAL INTELLIGENCE

John McCarthy
Computer Science Department
Stanford University
Stanford, CA 94305
jmc@cs.stanford.edu
http://www-formal.stanford.edu/jmc/
1971-1987
Abstract
My 1971 Turing Award Lecture was entitled "Generality in Artificial Intelligence". The topic turned out to have been overambitious in that I discovered that I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed previous work rather than attempt something new, but such wasn't my custom at that time.

I am grateful to the ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in AI is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1986 survey of approaches for achieving generality. Ideas are discussed at a length proportional to my familiarity with them rather than according to some objective criterion.
It was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious, and now there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.

Another symptom is that no-one knows how to make a general database of common sense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This doesn't depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express common sense knowledge are too restricted in their applicability for a general common sense database. In my opinion, getting a language for expressing general common sense knowledge for inclusion in a general database is the key problem of generality in AI.

Here are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.
1 REPRESENTING BEHAVIOR BY PROGRAM
Friedberg (1958 and 1959) discussed a completely general way of representing behavior and provided a way of learning to improve it. Namely, the behavior is represented by a computer program and learning is accomplished by making random modifications to the program and testing the modified program. The Friedberg approach was successful in learning only how to move a single bit from one memory cell to another, and its scheme of rewarding instructions involved in successful runs by reducing their probability of modification was shown by Herbert Simon (a now substantiated rumor from a 1987 personal communication) to be inferior to testing each program thoroughly and completely scrapping any program that wasn't perfect. No-one seems to have attempted to follow up the idea of learning by modifying whole programs.

The defect of the Friedberg approach is that while representing behaviors by programs is entirely general, modifying behaviors by small modifications to the programs is very special. A small conceptual modification to a behavior is usually not represented by a small modification to the program, especially if machine language programs are used and any one small modification to the text of a program is considered as likely as any other.

It might be worth trying something more analogous to genetic evolution; duplicates of subroutines would be made, some copies would be modified and others left unchanged. The learning system would then experiment whether it was advantageous to change certain calls of the original subroutine to calls of the modified subroutine. Most likely even this wouldn't work unless the relevant small modifications of behavior were obtainable by calls to slightly modified subroutines. It would probably be necessary to provide for modifications to the number of arguments of subroutines.

While Friedberg's problem was learning from experience, all schemes for representing knowledge by program suffer from similar difficulties when the object is to combine disparate knowledge or to make programs that modify knowledge.
2 THE GPS AND ITS SUCCESSORS
One kind of generality in AI comprises methods for finding solutions that are independent of the problem domain. Allen Newell, Herbert Simon and their colleagues and students pioneered this approach and continue to pursue it. Newell and Simon first proposed the General Problem Solver GPS in their (1957) (also see Ernst and Newell 1969). The initial idea was to represent problems of some general class as problems of transforming one expression into another by means of a set of allowed rules. It was even suggested in their (1960) that improving GPS could be thought of as a problem of this kind.

In my opinion, GPS was unsuccessful as a general problem solver, because problems don't take this form in general and because most of the knowledge about the common sense needed for problem solving and achieving goals is not simply representable in the form of rules for transforming expressions. However, GPS was the first system to separate the problem solving structure of goals and subgoals from the particular domain.

If GPS had worked out to be really general, perhaps the Newell and Simon predictions about rapid success for AI would have been realized. Newell's current candidate for general problem representation is SOAR (Laird, Newell and Rosenbloom 1987), which, as I understand it, is concerned with transforming one state to another, where the states need not be represented by expressions.
3 PRODUCTION SYSTEMS
The first production systems were done by Newell and Simon in the 1950s, and the idea was written up in their (1972). A kind of generality is achieved by using the same goal seeking mechanism for all kinds of problems, changing only the particular productions. The early production systems have grown into the current proliferation of expert system shells.

Production systems represent knowledge in the form of facts and rules, and there is almost always a sharp syntactic distinction between the two. The facts usually correspond to ground instances of logical formulas, i.e. they correspond to predicate symbols applied to constant expressions. Unlike logic-based systems, these facts contain no variables or quantifiers. New facts are produced by inference, observation and user input. Variables are reserved for rules, which usually take a pattern-action form. Rules are put in the system by the programmer or "knowledge engineer" and in most systems cannot arise via the action of the system. In exchange for accepting these limitations, the production system programmer gets a relatively fast program.

Production system programs rarely use fundamental knowledge of the domain. For example, MYCIN (Buchanan and Shortliffe 1974) has many rules about how to infer which bacterium is causing an illness based on symptoms and the result of laboratory tests. However, its formalism has no way of expressing the fact that bacteria are organisms that grow within the body. In fact MYCIN has no way of representing processes occurring in time, although other production systems can represent processes at about the level of the situation calculus to be described in the next section.

The result of a production system pattern match is a substitution of constants for variables in the pattern part of the rule. Consequently production systems do not infer general propositions. For example, consider the definition that a container is sterile if it is sealed against entry by bacteria, and all the bacteria in it are dead. A production system (or a logic program) can only use this fact by substituting particular bacteria for the variables. Thus it cannot reason that heating a sealed container will sterilize it given that a heated bacterium dies, because it cannot reason about the unenumerated set of bacteria in the container. These matters are discussed further in (McCarthy 1984).
4 REPRESENTING KNOWLEDGE IN LOGIC
It seemed to me in 1958 that small modifications in behavior are most often representable as small modifications in beliefs about the world, and this requires a system that represents beliefs explicitly.

"If one wants a machine to be able to discover an abstraction, it seems most likely that the machine must be able to represent this abstraction in some relatively simple way" (McCarthy 1959).

The 1958 idea for increasing generality was to use logic to express facts in a way independent of the way the facts might subsequently be used. It seemed then and still seems that humans communicate mainly in declarative sentences rather than in programming languages for good objective reasons that will apply whether the communicator is a human, a creature from Alpha Centauri or a computer program. Moreover, the advantages of declarative information also apply to internal representation. The advantage of declarative information is one of generality. The fact that when two objects collide they make a noise may be used in particular situations to make a noise, to avoid making noise, to explain a noise or to explain the absence of noise. (I guess those cars didn't collide, because while I heard the squeal of brakes, I didn't hear a crash.)
Once one decides to build an AI system that represents information declaratively, one still has to decide what kind of declarative language to allow. The simplest systems allow only constant predicates applied to constant symbols, e.g. on(Block1, Block2). Next one can allow arbitrary constant terms, built from function symbols, constants and predicate symbols, e.g. location(Block1) = top(Block2). Prolog databases allow arbitrary Horn clauses that include free variables, e.g. P(x,y) ∧ Q(y,z) ⊃ R(x,z), expressing the Prolog in standard logical notation. Beyond that lies full first order logic including both existential and universal quantifiers and arbitrary first order formulas. Within first order logic, the expressive power of a theory depends on what domains the variables are allowed to range over. Important expressive power comes from using set theory, which contains expressions for sets of any objects in the theory.

Every increase in expressive power carries a price in the required complexity of the reasoning and problem solving programs. To put it another way, accepting limitations on the expressiveness of one's declarative information allows simplification of the search procedures. Prolog represents a local optimum in this continuum, because Horn clauses are medium expressive but can be interpreted directly by a logical problem solver.

One major limitation that is usually accepted is to limit the derivation of new facts to formulas without variables, i.e. to substitute constants for variables and then do propositional reasoning. It appears that most human daily activity involves only such reasoning. In principle, Prolog goes slightly beyond this, because the expressions found as values of variables by Prolog programs can themselves involve free variables. However, this facility is rarely used except for intermediate results.
What can't be done without more of predicate calculus than Prolog allows is universal generalization. Consider the rationale of canning. We say that a container is sterile if it is sealed and all the bacteria in it are dead. This can be expressed as a fragment of a Prolog program as follows.

sterile(X) :- sealed(X), not alive-bacterium(Y, X).
alive-bacterium(Y, X) :- in(Y, X), bacterium(Y), alive(Y).

However, a Prolog program incorporating this fragment directly can sterilize a container only by killing each bacterium individually and would require that some other part of the program successively generate the names of the bacteria. It cannot be used to discover or rationalize canning: sealing the container and then heating it to kill all the bacteria at once. The reasoning rationalizing canning involves the use of quantifiers in an essential way.
My own opinion is that reasoning and problem solving programs will eventually have to allow the full use of quantifiers and sets and have strong enough control methods to use them without combinatorial explosion.

While the 1958 idea was well received, very few attempts were made to embody it in programs in the immediately following years, the main one being F. Black's Harvard PhD thesis of 1964. I spent most of my time on what I regarded as preliminary projects, mainly LISP. My main reason for not attempting an implementation was that I wanted to learn how to express common sense knowledge in logic first. This is still my goal. I might be discouraged from continuing to pursue it if people pursuing nonlogical approaches were having significant success in achieving generality.
(McCarthy and Hayes 1969) made the distinction between epistemological and heuristic aspects of the AI problem and asserted that generality is more easily studied epistemologically. The distinction is that the epistemology is completed when the facts available have as a consequence that a certain strategy is appropriate to achieve the goal, while the heuristic problem involves the search that finds the appropriate strategy.

Implicit in (McCarthy 1959) was the idea of a general purpose common sense database. The common sense information possessed by humans would be written as logical sentences and included in the database. Any goal-seeking program could consult the database for the facts needed to decide how to achieve its goal. Especially prominent in the database would be facts about the effects of actions. The much studied example is the set of facts about the effects of a robot trying to move objects from one location to another. This led in the 1960s to the situation calculus (McCarthy and Hayes 1969), which was intended to provide a way of expressing the consequences of actions independent of the problem.
The basic formalism of the situation calculus is

s' = result(e, s),

which asserts that s' is the situation that results when event e occurs in situation s. Here are some situation calculus axioms for moving and painting blocks.

Qualified Result-of-Action Axioms

∀x l s. clear(top(x), s) ∧ clear(l, s) ∧ ¬tooheavy(x) ⊃ loc(x, result(move(x, l), s)) = l

∀x c s. color(x, result(paint(x, c), s)) = c

Frame Axioms

∀x y l s. color(y, result(move(x, l), s)) = color(y, s)

∀x y l s. y ≠ x ⊃ loc(y, result(move(x, l), s)) = loc(y, s)

∀x y c s. loc(x, result(paint(y, c), s)) = loc(x, s)

∀x y c s. y ≠ x ⊃ color(x, result(paint(y, c), s)) = color(x, s)
Notice that all qualifications to the performance of the actions are explicit in the premisses and that statements (called frame axioms) about what doesn't change when an action is performed are explicitly included. Without those statements it wouldn't be possible to infer much about result(e2, result(e1, s)), since we wouldn't know whether the premisses for the event e2 to have its expected result were fulfilled in result(e1, s).

Furthermore, it should be noticed that the situation calculus applies only when it is reasonable to reason about discrete events, each of which results in a new total situation. Continuous events and concurrent events are not covered.
Unfortunately, it wasn't very feasible to use the situation calculus in the manner proposed, even for problems meeting its restrictions. In the first place, using general purpose theorem provers made the programs run too slowly, since the theorem provers of 1969 (Green 1969) had no way of controlling the search. This led to STRIPS (Fikes and Nilsson 1971), which reduced the use of logic to reasoning within a situation. Unfortunately, the STRIPS formalizations were much more special than full situation calculus. The facts that were included in the axioms had to be delicately chosen in order to avoid the introduction of contradictions arising from the failure to delete a sentence that wouldn't be true in the situation that resulted from an action.
5 NONMONOTONICITY
The second problem with the situation calculus axioms is that they were again not general enough. This was the qualification problem, and a possible way around it wasn't discovered until the late 1970s. Consider putting an axiom in a common sense database asserting that birds can fly. Clearly the axiom must be qualified in some way, since penguins, dead birds and birds whose feet are encased in concrete can't fly. A careful construction of the axiom might succeed in including the exceptions of penguins and dead birds, but clearly we can think up as many additional exceptions, like birds with their feet encased in concrete, as we like. Formalized nonmonotonic reasoning (see (McCarthy 1980, 1986), (Doyle 1977), (McDermott and Doyle 1980) and (Reiter 1980)) provides a formal way of saying that a bird can fly unless there is an abnormal circumstance, and of reasoning that only the abnormal circumstances whose existence follows from the facts being taken into account will be considered.
Non-monotonicity has considerably increased the possibility of expressing general knowledge about the effects of events in the situation calculus. It has also provided a way of solving the frame problem, which constituted another obstacle to generality that was already noted in (McCarthy and Hayes 1969). The frame problem (the term has been variously used, but I had it first) occurs when there are several actions available, each of which changes certain features of the situation. Somehow it is necessary to say that an action changes only the features of the situation to which it directly refers. When there is a fixed set of actions and features, it can be explicitly stated which features are unchanged by an action, even though it may take a lot of axioms. However, if we imagine that additional features of situations and additional actions may be added to the database, we face the problem that the axiomatization of an action is never completed. (McCarthy 1986) indicates how to handle this using circumscription, but Lifschitz (1985) has shown that circumscription needs to be improved and has made proposals for this.
Here are some situation calculus axioms for moving and painting blocks taken from (McCarthy 1986).
Axioms about Locations and the Effects of Moving Objects

∀x e s. ¬ab(aspect1(x, e, s)) ⊃ loc(x, result(e, s)) = loc(x, s)

asserts that objects normally do not change their locations. More specifically, an object does not change its location unless the triple consisting of the object, the event that occurs, and the situation in which it occurs is abnormal in aspect1.

∀x l s. ab(aspect1(x, move(x, l), s))

However, moving an object to a location in a situation is abnormal in aspect1.

∀x l s. ¬ab(aspect3(x, l, s)) ⊃ loc(x, result(move(x, l), s)) = l

Unless the relevant triple is abnormal in aspect3, the action of moving an object to a location l results in its being at l.

Axioms about Colors and Painting

∀x e s. ¬ab(aspect2(x, e, s)) ⊃ color(x, result(e, s)) = color(x, s)

∀x c s. ab(aspect2(x, paint(x, c), s))

∀x c s. ¬ab(aspect4(x, c, s)) ⊃ color(x, result(paint(x, c), s)) = c

These three axioms give the corresponding facts about what changes the color of an object.
This treats the qualification problem, because any number of conditions that may be imagined as preventing moving or painting can be added later and asserted to imply the corresponding ab aspect... It treats the frame problem in that we don't have to say that moving doesn't affect colors and painting doesn't affect locations.
Even with formalized nonmonotonic reasoning, the general commonsense database still seems elusive. The problem is writing axioms that satisfy our notions of incorporating the general facts about a phenomenon. Whenever we tentatively decide on some axioms, we are able to think of situations in which they don't apply and a generalization is called for. Moreover, the difficulties that are thought of are often ad hoc, like that of the bird with its feet encased in concrete.
6 REIFICATION
Reasoning about knowledge, belief or goals requires extensions of the domain of objects reasoned about. For example, a program that does backward chaining on goals uses them directly as sentences, e.g. on(Block1, Block2), i.e. the symbol on is used as a predicate constant of the language. However, a program that wants to say directly that on(Block1, Block2) should be postponed until on(Block2, Block3) has been achieved needs a sentence like precedes(on(Block2, Block3), on(Block1, Block2)), and if this is to be a sentence of first-order logic, then the symbol on must be taken as a function symbol, and on(Block1, Block2) regarded as an object in the first order language.

This process of making objects out of sentences and other entities is called reification. It is necessary for expressive power but again leads to complications in reasoning. It is discussed in (McCarthy 1979).
7 FORMALIZING THE NOTION OF CONTEXT
Whenever we write an axiom, a critic can say that the axiom is true only in a certain context. With a little ingenuity the critic can usually devise a more general context in which the precise form of the axiom doesn't hold. Looking at human reasoning as reflected in language emphasizes this point. Consider axiomatizing "on" so as to draw appropriate consequences from the information expressed in the sentence, "The book is on the table". The critic may propose to haggle about the precise meaning of "on", inventing difficulties about what can be between the book and the table, or about how much gravity there has to be in a spacecraft in order to use the word "on" and whether centrifugal force counts. Thus we encounter Socratic puzzles over what the concepts mean in complete generality and encounter examples that never arise in life. There simply isn't a most general context.

Conversely, if we axiomatize at a fairly high level of generality, the axioms are often longer than is convenient in special situations. Thus humans find it useful to say, "The book is on the table", omitting reference to time and precise identifications of what book and what table. This problem of how general to be arises whether the general common sense knowledge is expressed in logic, in program or in some other formalism. (Some people propose that the knowledge is internally expressed in the form of examples only, but strong mechanisms using analogy and similarity permit their more general use. I wish them good fortune in formulating precise proposals about what these mechanisms are.)
A possible way out involves formalizing the notion of context and combining it with the circumscription method of nonmonotonic reasoning. We add a context parameter to the functions and predicates in our axioms. Each axiom makes its assertion about a certain context. Further axioms tell us that facts are inherited by more restricted contexts unless exceptions are asserted. Each assertion is also nonmonotonically assumed to apply in any particular more general context, but there again are exceptions. For example, the rules about birds flying implicitly assume that there is an atmosphere to fly in. In a more general context this might not be assumed. It remains to determine how inheritance to more general contexts differs from inheritance to more specific contexts.

Suppose that whenever a sentence p is present in the memory of a computer, we consider it as in a particular context and as an abbreviation for the sentence holds(p, C), where C is the name of a context. Some contexts are very specific, so that Watson is a doctor in the context of Sherlock Holmes stories and a baritone psychologist in a tragic opera about the history of psychology.
There is a relation c1 ≤ c2 meaning that context c2 is more general than context c1. We allow sentences like holds(c1 ≤ c2, c0), so that even statements relating contexts can have contexts. The theory would not provide for any "most general context" any more than Zermelo-Fraenkel set theory provides for a most general set.

A logical system using contexts might provide operations of entering and leaving a context, yielding what we might call ultra-natural deduction, allowing a sequence of reasoning like

holds(p, C)
ENTER C
p
...
q
LEAVE C
holds(q, C).

This resembles the usual logical natural deduction systems, but for reasons beyond the scope of this lecture, it is probably not correct to regard contexts as equivalent to sets of assumptions, not even infinite sets of assumptions.

All this is unpleasantly vague, but it's a lot more than could be said in 1971.
References
Black, F. (1964). A Deductive Question Answering System, Doctoral Dissertation, Harvard University.

Buchanan, B.G. and Shortliffe, E.H., eds. (1984). Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project.

Davis, Randall; Buchanan, Bruce; and Shortliffe, Edward (1977). Production Rules as a Representation for a Knowledge-Based Consultation Program, Artificial Intelligence, Volume 8, Number 1, February.

Doyle, J. (1977). Truth Maintenance Systems for Problem Solving, Proc. 5th IJCAI, p. 247.

Ernst, George W. and Allen Newell (1969). GPS: A Case Study in Generality and Problem Solving, Academic Press.

Fikes, R. and Nils Nilsson (1971). STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving, Artificial Intelligence, Volume 2, Numbers 3-4, January, pp. 189-208.

Friedberg, R.M. (1958). A Learning Machine, IBM Journal of Research, Volume 2, Number 1, January, pp. 2-13.

Friedberg, R.M., B. Dunham and J.H. North (1959). A Learning Machine, Part II, IBM Journal of Research, Volume 3, Number 3, July, pp. 282-287.

Green, C. (1969). Theorem-Proving by Resolution as a Basis for Question Answering Systems, in Machine Intelligence 4, pp. 183-205 (eds. Meltzer, B. and Michie, D.), Edinburgh: Edinburgh University Press.

Laird, John E., Allen Newell and Paul S. Rosenbloom (1987). Soar: An Architecture for General Intelligence, Artificial Intelligence 33, pp. 1-64.

Lifschitz, Vladimir (1985). Computing Circumscription, in Proceedings of the 9th International Joint Conference on Artificial Intelligence, Volume 1, pp. 121-127.

McCarthy, John (1959). Programs with Common Sense, in Proceedings of the Teddington Conference on the Mechanization of Thought Processes, London: Her Majesty's Stationery Office. (Reprinted in McCarthy 1990.)

McCarthy, John and Patrick Hayes (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence, in B. Meltzer and D. Michie (eds.), Machine Intelligence 4, Edinburgh University Press. (Reprinted in B. L. Webber and N. J. Nilsson (eds.), Readings in Artificial Intelligence, Tioga, 1981, pp. 431-450; also in M. J. Ginsberg (ed.), Readings in Nonmonotonic Reasoning, Morgan Kaufmann, 1987, pp. 26-45; also in McCarthy 1990.)

McCarthy, John (1979). First Order Theories of Individual Concepts and Propositions, in Michie, Donald (ed.), Machine Intelligence 9, Ellis Horwood. (Reprinted in McCarthy 1990.)

McCarthy, John (1984). Some Expert Systems Need Common Sense, in Computer Culture: The Scientific, Intellectual and Social Impact of the Computer, Heinz Pagels, ed., vol. 426, Annals of the New York Academy of Sciences. (Reprinted in McCarthy 1990.)

McCarthy, John (1980). Circumscription - A Form of Nonmonotonic Reasoning, Artificial Intelligence, Volume 13, Numbers 1-2. (Reprinted in B. L. Webber and N. J. Nilsson (eds.), Readings in Artificial Intelligence, Tioga, 1981, pp. 466-472; also in M. J. Ginsberg (ed.), Readings in Nonmonotonic Reasoning, Morgan Kaufmann, 1987, pp. 145-152; also in McCarthy 1990.)

McCarthy, John (1986). Applications of Circumscription to Formalizing Common Sense Knowledge, Artificial Intelligence 28, pp. 89-116. (Reprinted in M. J. Ginsberg (ed.), Readings in Nonmonotonic Reasoning, Morgan Kaufmann, 1987, pp. 153-166; also in McCarthy 1990.)

McCarthy, John (1990). Formalizing Common Sense, Ablex.

McDermott, D. and Doyle, J. (1980). Nonmonotonic Logic I, Artificial Intelligence, Volume 13, Numbers 1-2, pp. 41-72.

Newell, A., J. C. Shaw and H. A. Simon (1957). Preliminary Description of General Problem Solving Program-I (GPS-I), CIP Working Paper #7, Carnegie Institute of Technology, December.

Newell, A., J. C. Shaw and H. A. Simon (1960). A Variety of Intelligent Learning in a General Problem Solver, in Self-Organizing Systems, Yovits, M.C., and Cameron, S., eds., Pergamon Press, Elmsford, N.Y., pp. 153-189.

Newell, A. and H. A. Simon (1972). Human Problem Solving, Prentice-Hall.

Reiter, Raymond (1980). A Logic for Default Reasoning, Artificial Intelligence, Volume 13, Numbers 1-2, April.

ROBOT APPLICATIONS IN MANUFACTURING

Current robot applications include a wide variety of production operations. These operations can be classified into the following categories:

1. Material transfer
2. Machine loading and unloading
3. Processing operations
4. Assembly
5. Inspection

MATERIAL TRANSFER APPLICATIONS:

Material transfer applications are defined as operations in which the primary objective is to move a part from one location to another. This category includes applications in which the robot transfers parts into and out of a production machine.
A typical application involves the transfer of a part from one machine to another, such as moving a die casting to a conveyor belt or transferring sand castings from a casting machine to a shake-out conveyor. Due to the simplicity of the motions involved, this was one of the earliest robot applications.

General considerations in material handling robot applications include:
1. Parts must be presented to the robot in a known position and orientation.
2. Special end effectors must be designed for the robot to grasp and hold the work part during the handling operation.
3. Material handling operations should be planned so as to minimize the distances the parts must be moved.
4. The load capacity of the robot must not be exceeded.
5. Many parts transfer operations are simple enough that they can be accomplished by a robot with two or four joints of motion; machine-loading applications often require more degrees of freedom.

The following input data are required for the selection of a robot in material handling applications (a hedged selection sketch follows this list):
1. The environment in which the robot is placed.
2. The weight, geometry and temperature of the component.
3. The location of the picking and placement areas for the component.
4. The available loading and unloading time.
These applications usually require a relatively unsophisticated robot, and the interlocking requirements with other equipment are typically uncomplicated.
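A hedged sketch of how these inputs might drive a first-pass robot shortlist is shown below in Python; the candidate robots, payload, reach and temperature figures are invented for illustration only.

# Hypothetical first-pass filter for material-handling robot selection.
candidates = [
    {"model": "R-A", "payload_kg": 5,  "reach_mm": 700,  "max_ambient_c": 45},
    {"model": "R-B", "payload_kg": 20, "reach_mm": 1600, "max_ambient_c": 55},
    {"model": "R-C", "payload_kg": 50, "reach_mm": 2000, "max_ambient_c": 50},
]

def shortlist(part_mass_kg, required_reach_mm, ambient_c, margin=1.25):
    """Keep robots whose payload (with a safety margin), reach and
    environmental rating cover the application data."""
    return [
        r["model"] for r in candidates
        if r["payload_kg"] >= margin * part_mass_kg
        and r["reach_mm"] >= required_reach_mm
        and r["max_ambient_c"] >= ambient_c
    ]

print(shortlist(part_mass_kg=12, required_reach_mm=1200, ambient_c=40))  # ['R-B', 'R-C']

Cycle-time and end-effector requirements would be checked in later, more detailed stages of selection.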

Robotic Technology

Robotics is the science of designing and building robots suitable for real-life applications in automated manufacturing and other non-manufacturing environments. Robots are a means of performing multifarious activities for human welfare in a planned and integrated manner, while maintaining the flexibility to do any work, enhancing productivity, guaranteeing quality, assuring reliability and ensuring the safety of the worker.

Definition: -
The definition that has been accepted as reasonable in the present state of the art was given by the Robotics Industrial Association in November 1979, which defined an industrial robot as:
“…A reprogrammable multifunctional manipulator designed to move material, parts, tools or specialized devices through various programmed motions for the performance of a variety of tasks.”

Isaac Asimov framed three basic laws of robotics, which the field still respects. The laws are philosophical in nature. They are as follows:
First law: A robot must not harm a human being or, through inaction, allow one to come to harm.
Second law: A robot must always obey human beings, unless that is in conflict with the first law.
Third law: A robot must protect itself from harm, unless that is in conflict with the first or second law.

History:
The word 'robot' was coined by the Czech playwright Karel Capek (pronounced "chop'ek") from the Czech word for forced labour or serf. The term 'robotics' refers to the study and use of robots and was coined and first used by the Russian-born American scientist and writer Isaac Asimov.
The first modern industrial robots were the Unimates, developed by George Devol and Joseph Engelberger in the late 1950s and early 1960s. The first patents were obtained by Devol for parts-transfer machines. Engelberger formed Unimation and was the first to market robots; as a result, he has been called the 'father of robotics.'

Robot Applications

Robotics has rapidly moved from theory to application over the last decade primarily due to the need for improved productivity and quality.
One of the key features of robots is their versatility. A programmable robot used in conjunction with a variety of end-effectors can be programmed to perform specific tasks, then later reprogrammed and refitted to adapt to process or production line variations or changes.
The robot offers an excellent means of utilizing high technology to make a given manufacturing operation more profitable and competitive. However, robot technology is relatively new to the industrial scene, and the prospective buyer of robot technology who is accustomed to buying more conventional items will find robot applications a highly complex subject.
Robots are used today primarily for welding, painting, assembly, machine loading, and foundry activities. The sharp visibility given to the automotive industry's robotics applications and its declared intention to even more aggressively increase the installation rates have made that industry a major focus for robot builders.

Monday, September 20, 2010

Microcontroller Based Instrumentation in Automotive Engineering

Abstract:
A microcontroller is a standalone device used to control the operation of a piece of equipment; this involves sensing simple parameters and controlling events. This article presents an overview of how microcontrollers, 'computers on a chip', are used in automotive applications, i.e. in intelligent vehicles. The approach is first to provide a framework for why microcontrollers are used in instrumentation, how such systems work and why a microcontroller is preferred over a microprocessor, and then to cover current market trends, the diagnosis process and several other applications.
1.1 Introduction:
Instrumentation is the technology of measurement, analysis and control, which serves not only science but all branches of engineering, medicine and almost every human endeavour. The basic purpose of instrumentation in a process is to obtain the information required for the fruitful completion of the process. In industrial technology, fruitful completion means that process efficiency is at a maximum for the required product quality at minimum cost. A microcontroller can be found at the heart of almost every such electronic control module in process industries today. One of the most important advantages of using a microcontroller is that the system can be designed to eliminate the human factor in processing data. The prime use of a microcontroller is to control the operation of a machine using a fixed program that is stored in EEPROM and does not change over the lifetime of the system.
1.2 Reason behind Choosing Microcontroller in Instrumentation:
The major reasons for choosing a digital, microcontroller-based control system are:

A. Stability and accuracy of control
B. Lower cost per function
C. Flexibility
D. Greater reliability and equipment life
E. Human factors favouring a digital interface

In process control using analogue electronic controllers there is a lot of hardware, as shown in Fig. 1. Process performance depends entirely on the characteristics of that hardware; accuracy is lower and operation is not very fast. Even a simple modification requires changing the whole hardware design, which is costly and time-consuming.

Fig. 1. Closed-loop system using an op-amp.
In microcontroller-based systems, the hardware of the control system is largely replaced by the microcontroller unit. With a small change in the program we can change the system's operation, and less hardware is required, as shown in Fig. 2.

Fig. 2. Closed-loop system using a microcontroller unit.
Market Trends:

Fig. 3. Market trends for microcontroller-based systems.

1.3 Basic Scheme:

Fig. 4. Basic scheme.
Measurement and control of physical quantities such as temperature, pressure, speed, displacement, level and flow is done with transducers, which convert the physical quantities into electrical signals. The electrical voltages proportional to the physical quantities are digitized by an ADC before being applied to the microcontroller. Being very fast, a microcontroller can measure, process and control many signals one after another in a very short time according to the program fed into it. The output from the microcontroller is then used to control different processes accurately by driving devices such as relays, motors and actuators.
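As a hedged illustration of this scheme (the read_adc_temperature and drive_fan_relay functions are hypothetical placeholders rather than real drivers), a minimal closed-loop program in Python might look like this:

import time

SET_POINT_C = 90.0          # desired coolant temperature, for example
HYSTERESIS_C = 2.0

def read_adc_temperature():
    """Placeholder for reading the ADC channel wired to a temperature transducer."""
    return 87.5              # a fixed dummy value for illustration

def drive_fan_relay(on: bool):
    """Placeholder for switching an actuator (relay, motor, ...)."""
    print("fan relay", "ON" if on else "OFF")

def control_loop(cycles=3):
    for _ in range(cycles):
        temperature = read_adc_temperature()
        # Simple on/off control with hysteresis; a production ECU would use
        # lookup tables or PID control instead.
        if temperature > SET_POINT_C + HYSTERESIS_C:
            drive_fan_relay(True)
        elif temperature < SET_POINT_C - HYSTERESIS_C:
            drive_fan_relay(False)
        time.sleep(0.1)      # loop period

if __name__ == "__main__":
    control_loop()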

1.4 Why Microcontroller, Not Microprocessor:
A microprocessor is a general-purpose device. It requires a lot of extra peripherals, which make the system complicated, large and bulky, whereas a microcontroller is dedicated to a specific operation. For small systems there is no need to use a microprocessor, as it becomes expensive and bulky. In a microcontroller, all the peripherals required around a microprocessor are contained in a single chip. For example, in a simple smart washing machine it is not necessary to use a full computer; a single 4-bit microcontroller can easily do all the work required. A microcontroller can function as a computer with the addition of few or no external digital parts.

1.5 Applications of Microcontroller Based Instrumentation in Automotive Engineering: A microcontroller can be found at the heart of almost any automotive electronic control module (ECU) in production today. Automotive systems such as the anti-lock braking system (ABS), cruise control, engine control, navigation and vehicle dynamics control all incorporate at least one microcontroller within the ECU to perform the necessary control functions.
1.5.1 Engine Control:
The electronic engine control system consists of sensing devices which continuously measure the operating conditions of the engine, and an electronic control unit (ECU) which evaluates the sensor inputs using data tables and calculations and determines the outputs to the actuating devices, which are commanded by the ECU to perform actions in response to the sensor inputs. The motive for using an electronic engine control system is to provide the accuracy and adaptability needed in fuel metering and ignition control in order to minimize exhaust emissions and fuel consumption. The ECU also reports information on failures so that the problem can be corrected. The ECU (microcontroller) performs the following operations in this section:
1. Air/fuel ratio control.
2. Ignition timing control.
3. Exhaust gas recirculation control and monitoring.
4. Idle speed control.
5. Fuel system monitoring.
6. Diagnosis.
1.5.2 Cruise Control:
A sophisticated digital controller constantly maintains a set speed under varying driving conditions, relieving the vehicle operator from constant foot-throttle manipulation and improving fuel efficiency. By using the power and speed of a microcontroller together with fuzzy-logic software design, an excellent cruise control system can be built. The MCU for cruise control applications requires high functionality and would include the following:
1. A precise internal time base for the measurement calculations.
2. A/D inputs.
3. PWM outputs.
4. An internal watchdog.
5. EEPROM.
A crash-avoidance system could be interconnected with the cruise control system to avoid collisions and warn the driver.

Fig. 5. Engine and cruise control parameters.
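The following Python sketch is a hedged, simplified illustration (not production ECU code) of the calculation such an MCU might run each time its internal time base fires: a PID loop that turns the speed error into a throttle command, exercised here against a toy vehicle model. The gains, period and vehicle response are all invented for illustration.

# Toy discrete PID cruise controller; gains and the vehicle model are invented
# purely for illustration.
KP, KI, KD = 0.8, 0.1, 0.05
DT = 0.1                      # control period in seconds (from the MCU time base)

def pid_step(set_speed, measured_speed, state):
    error = set_speed - measured_speed
    state["integral"] += error * DT
    derivative = (error - state["prev_error"]) / DT
    state["prev_error"] = error
    throttle = KP * error + KI * state["integral"] + KD * derivative
    return max(0.0, min(1.0, throttle))   # clamp to the 0..100 % throttle range

def simulate(set_speed=100.0, steps=50):
    speed = 80.0
    state = {"integral": 0.0, "prev_error": 0.0}
    for _ in range(steps):
        throttle = pid_step(set_speed, speed, state)
        # Crude vehicle response: throttle accelerates, drag decelerates.
        speed += (30.0 * throttle - 0.2 * (speed - 80.0)) * DT
    return speed

if __name__ == "__main__":
    print(f"speed after 5 s: {simulate():.1f} km/h")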
1.5.3 Anti-Lock Braking System:
ABS consists of wheel speed sensors, a hydraulic modulator, an electric motor/pump and an ECU (microcontroller). Control of the hydraulic modulator and the electric motor/pump is performed by the electronic unit. The microcontroller-based ECU performs the braking operation most accurately.
1.5.4 Intelligent Safety System:
Electronically controlled passenger and car safety systems, such as rollover sensor systems, air bags and seat-belt tensioning systems, help to avoid injuries or to reduce injury severity in an accident. In an ISS, the vehicle has collision avoidance that takes the necessary actions, such as reducing speed or applying the brakes, and also warns the driver to take action. The sequence of crash-relevant events, such as closure of the discriminating sensors, arming sensors, battery voltage level, energy reserve voltage and turn-on of the power stages, can be stored in the EEPROM, i.e. a record of the condition of the vehicle just before the crash.
1.5.5 Vehicle Antitheft System:
To avoid vehicle theft, the system must do three things: sense unauthorized entry into the vehicle, detect unauthorized attempts to start the vehicle, and activate the alarm system. Fixed or rolling codes are stored in the EEPROM of the microcontroller. When a code matches the user code, the system allows the vehicle to be used; otherwise the alarm system is activated.

Fig 6: Method for Arming and Disarming an Antitheft System
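
The core of such a system is the comparison of a stored code with the code presented by the user. The Python sketch below shows that logic only; the stored value and the key-entry function are hypothetical stand-ins for the MCU's EEPROM and receiver, and real systems use cryptographic rolling codes rather than a single fixed comparison.

# Minimal antitheft arming sketch (placeholder EEPROM/key-entry values).
import hmac

STORED_CODE = b"4F2A91"        # stands in for the code held in EEPROM

def read_presented_code():
    """Placeholder for the code received from the key fob or keypad."""
    return b"4F2A91"

def try_disarm():
    presented = read_presented_code()
    # constant-time comparison avoids leaking how many bytes matched
    if hmac.compare_digest(STORED_CODE, presented):
        print("codes match: immobilizer released, vehicle may be started")
        return True
    print("code mismatch: alarm activated")
    return False

if __name__ == "__main__":
    try_disarm()
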
1.5.6 Voice Alarm:
Inputs from the sensors are fed to the microcontroller unit. The MCU continuously compares the input values from the sensors with the set-point value and the maximum allowable value stored in EEPROM. If an input exceeds the maximum limit, the microcontroller sends a signal to the voice module, which plays a recorded message asking the driver to take the necessary action.
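
The comparison itself is simple, as the Python sketch below shows; the sensor value and voice-module call are hypothetical placeholders for the MCU's ADC input and the speech-chip interface.

# Minimal voice-alarm threshold check (placeholder sensor/voice-module calls).
LIMITS = {"coolant_temp_C": (90.0, 110.0)}   # (set point, maximum allowable)

def read_sensor(name):
    """Placeholder for an ADC channel read."""
    return 114.0

def play_voice_message(text):
    """Placeholder for triggering the recorded message on the voice module."""
    print("VOICE:", text)

def check(name):
    set_point, max_allowed = LIMITS[name]
    value = read_sensor(name)
    if value > max_allowed:
        play_voice_message(f"{name} is {value:.0f}, above the limit - stop the engine")
    elif value > set_point:
        print(f"warning: {name} above set point ({value:.0f})")

if __name__ == "__main__":
    check("coolant_temp_C")
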
1.5.7 Navigation:
Military, as well as civilian, vehicles need to be guided, located, or to navigate and position themselves independently. Navigation pertains to the actions involved in driving a vehicle from point to point. Aerospace/military guidance and navigation applications require advanced inertial sensors with high performance (e.g., very high intrinsic stability and low rectification error), advanced functionality (digital output, correction for temperature effects) and shock survivability, as well as sensors that are lighter, smaller and cheaper than typical existing technologies. Moreover, parameters such as mission time, expected precision, dynamic range and harsh environmental conditions (temperature, shock, etc.) are important in inertial navigation systems. By combining modern communication systems with instrumentation technology, the navigation process can also report on-line traffic and road conditions.
1.6 Diagnosis Process:
In order to minimize the number of defects, or even to avoid them completely, a vehicle requires regular checks. In case of an unavoidable defect, a clear and directed diagnosis is required, and it has to be followed by prompt, reliable and inexpensive repair. For an effective and successful diagnosis the following tasks are involved (the first is illustrated after the list):
1. Fault storage with boundary conditions
2. Fault localization
3. Data correction and recognition of imminent faults
4. Parameter substitution
5. Providing guidelines
6. External diagnosis

Fig 8. Operational Sequence in Microcontroller
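
For the first task, fault storage with boundary conditions, a common approach is to store each fault code together with a "freeze frame" of the operating conditions at the moment it occurred. The Python sketch below illustrates that idea only; the structure, field names and capacity are assumptions, not a real on-board diagnostics implementation.

# Minimal fault-storage sketch: each fault keeps a freeze frame of conditions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FaultRecord:
    code: str                 # e.g. a diagnostic trouble code
    freeze_frame: dict        # boundary conditions when the fault was detected

@dataclass
class FaultMemory:
    records: List[FaultRecord] = field(default_factory=list)
    capacity: int = 16        # assumed EEPROM space for fault records

    def store(self, code, **conditions):
        if len(self.records) >= self.capacity:
            self.records.pop(0)               # drop the oldest record
        self.records.append(FaultRecord(code, conditions))

if __name__ == "__main__":
    mem = FaultMemory()
    mem.store("P0301", rpm=2400, coolant_temp_C=96, vehicle_speed_kmh=72)
    for r in mem.records:
        print(r.code, r.freeze_frame)
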
1.7 Other most widely used applications:
1. MEMS accelerometers and flow meters use a microcontroller.
2. Analytical instrumentation:
• UV and X-ray silicon pixel detectors, micro-fluidics, micro-electrode arrays in gold, platinum or titanium, micro-coils and micro-magnets, arrays of micro-holes and micro-heaters.
• Higher-power deep-UV (ultraviolet) excimer lasers that utilize beam shaping and coherent power-combining techniques; X-ray, chromatography, NMR and spectrometer instruments.
3. Medical instrumentation:
• TFT laser annealing and ophthalmic surgery.
• In the life sciences arena, Colibrys has expertise in combining micromechanics, micro-fluidics, micro-magnetics, micro-optics and single-photon counting detectors.
4. Aerospace/defense applications:
• Radar control; guided missile control.
• Satellite launchers.
1.8 Pros and cons:
Pros:
• Stability and accuracy of the control system increase.
• Features integrated into the microcontroller allow the system designer to reduce cost elsewhere in the system.
• Very compact circuitry, since a single chip contains all the peripherals; system flexibility is very high.
• The high-speed (24-32 MHz) operation of the microcontroller allows more code to be executed, so system performance improves.
• System security is very high, as each manufacturer uses its own fixed program stored in ROM, which other manufacturers cannot use.
Cons:
• Noise in the system can affect performance.
• Servicing by the general user is not possible; replacement of the chip is the only solution.
1.9 Conclusion:
The desire for greater safety, comfort and environmental compatibility is leading to a rapid increase in electronic control units and sensors. The smart vehicle is one of the most common applications in which microcontroller-based instrumentation is widely used. With microcontroller-based instrumentation the process becomes more accurate and easier to design: with only small modifications to the program we can add features to the control system without changing any hardware. As hardware technology develops rapidly, more and more features are included in the microcontroller, giving the process designer the flexibility to design a system in a simple, user-friendly manner, and making large systems ever smaller.

Reference:
1. Ronald Jurgen, “Automotive Electronics Handbook”, chapters 11-15 and 22-24.
2. Automotive Engineering International (SAE), September 2000: Delphi’s Integrated Safety System.
3. Kenneth J. Ayala, “The 8051 Microcontroller”.
4. D. Patranabis, “Principles of Industrial Instrumentation”.
5. Sensor Business Digest, April 2003.
6. Integrated Circuit Engineering Corporation, “Microcontroller Market Trends”, SEC03, page 15.

Hybrid Vehicle

Applications
• Hybrid vehicles (fuel powered; electric drive)
• Remote generators (fuel powered)
• Power generation substations (steam or fuel powered)
• Locomotives (diesel powered)
8.5 The Advantages
The MOTOR/GENERATOR produces electric current.
o Powerful (high energy output); lightweight; minimal moving parts; compact; no mechanical output.
o Eliminates "rotary" components (cost + weight savings) like Flywheel, Block, Main Bearings, Camshaft, Armature & Shaft, Crankshaft, Timing Gears.
o At 3600 RPM, provides standard 60 hertz current (see the note after this list).
o Can be tuned to a single speed; Minimum number of cylinders can be tuned for fuel efficiency.
o The circular coils are more efficient and inexpensive than the "oblong" coils in
standard generators.
o Engine may be started by reversing coil current.
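
Why 3600 RPM corresponds to 60 Hz: the standard alternator relation is f = p x N / 120, where p is the number of poles and N is the shaft speed in RPM. Assuming a two-pole machine (an assumption, since the pole count is not stated above), f = 2 x 3600 / 120 = 60 Hz.
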
VEHICLE ADVANTAGES
o Small efficient power plant could replace engine/transmission/differential in automobiles to create an electric/fuel (hybrid) vehicle.
o Allow for highly flexible vehicle design configuration.
o Vehicle would be lightweight and very fuel efficient.
o Auxiliary devices may be electric motor driven (air conditioner, power steering, water pump, lube system).

References for mechanical engineering

www.nrel.gov/international/china/pdfs/annex5/introduction_of_hybrid_electric_vehicles.pdf
http://www.howstuffworks.com/hybrid-car1.htm
http://www.howstuffworks.com/hybrid-car2.htm
http://www.ott.doe.gov/pdfs/drivehev_factsheet.pdf
http://www.ott.doe.gov/pdfs/puthev_factsheet.pdf
http://www.fueleconomy.gov/feg/hybridtech.shtml
http://www.ott.doe.gov/hev/what.html
http://www.ott.doe.gov/hev/components.html
http://www.ott.doe.gov/hev/related.html
http://www.fleets.doe.gov/fleet_tool.cgi?$$,benefits,1
http://www.ott.doe.gov/hev/faqs.html
http://www.ott.doe.gov/pdfs/techhev_factsheet.pdf
http://www.ott.doe.gov/pdfs/gmhev_factsheet.pdf
http://www.fueleconomy.gov/feg/hybrid_sbs.shtml

CRANKSHAFT BALANCE

Most of the crankshaft balancing is done during manufacture. Holes are drilled in the counterweights to lighten them for balance. Sometimes these holes are drilled after the crankshaft is installed in the engine. Some manufacturers are able to control their casting quality so closely that counterweight machining for balancing is not necessary. Engines with cast crankshafts usually have some external balancing. External balancing of these engines is accomplished by adding weights to the damper hub and to the flywheel or automatic-transmission drive plate.
After the basic dimensions of the crankshaft have evolved, including length, engine stroke, number and size of bearings, bearing journal diameters, oil holes, and so on, the crankshaft balance then receives attention. This involves the determination of the size, weight and location of the crankshaft counterweights.
The counterweights are required to balance the reciprocating and rotating motions of the piston and connecting-rod assemblies and cranks. Thus, the weights of these assemblies, as well as the stroke and the crank radius, must be established at this time. The designer must know what these weights are as well as what they will be doing during the rotation of the crankshaft.
Once these factors are established, vector analysis can then be used to determine the resultant of the inertial and centrifugal forces from the reciprocating and rotating masses. Determining the position, shape and weight of these counterweights is called design balancing the crankshaft.
A limiting dimension is the radius of the counterweights. If the radius is too large, the counterweights strike other engine parts (the piston skirt, for example). In many engines, the piston skirts are cut away to provide room for the counterweights to swing around them at BDC as the crankshaft rotates. The counterweights cannot be too thick from front to back either. There must be clearance between the counterweights and the connecting rods. Also, there must be clearance between the counterweights and the cylinder-block webs supporting the crankshaft. These dimensions and clearances determine the maximum radius and thickness of the counterweights.
Two basic factors in designing a counterweight are the amount of weight (it must balance the piston and rod weight) and the placement and distribution of the weight (it must be so placed as to cancel out the opposing piston and rod weight). One procedure is to divide the counterweight into three parts for separate analysis: the arm, the left-hand half, and the right-hand half. Each of the three parts is then subjected to an analytical routine that determines its volume (weight), center of gravity, and polar moment of inertia. The weight distribution is then determined. In effect, the distribution should be such that the unbalancing force of the piston and rod motion is countered exactly at any instant by a balancing mass from the counterweight, pulling in the opposite direction.
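
As a highly simplified numerical illustration of the vector analysis mentioned above, the Python sketch below sizes a pair of counterweights to cancel only the rotating mass referred to one crank throw. Real balancing must also account for the reciprocating masses and for the crankshaft as a whole, and every numerical input here is an assumption for illustration.

# Simplified counterweight sizing for one crank throw (illustrative numbers).
import math

crank_radius_m    = 0.045   # crank radius R (assumed)
m_rotating_kg     = 1.6     # rotating mass referred to the crankpin (assumed)
r_counterweight_m = 0.065   # radius to counterweight centre of gravity (assumed)
omega = 2 * math.pi * 5000 / 60.0   # shaft speed of 5000 rpm in rad/s

# Centrifugal force of the rotating mass that the counterweights must cancel:
F_rot = m_rotating_kg * crank_radius_m * omega ** 2

# Two counterweights (one per web) share the balancing duty equally:
m_cw_each = m_rotating_kg * crank_radius_m / (2 * r_counterweight_m)

print(f"rotating-mass centrifugal force: {F_rot/1000:.1f} kN")
print(f"required mass of each counterweight: {m_cw_each:.2f} kg "
      f"at r = {r_counterweight_m*1000:.0f} mm")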

INTRODUCTION TO FINITE ELEMENT ANALYSIS

The limitations of the human mind are such that it cannot grasp the behaviour of its complex surroundings and creations in one operation. Thus the process of subdividing all systems into individual components or elements, whose behaviour is readily understood, and then rebuilding the original system from such components to study its behaviour, is a natural way in which the engineer, the scientist or even the economist proceeds.
The finite element method is a numerical method which can be used for the solution of complex engineering problems with an accuracy acceptable to engineers.
In 1957 this method was first developed, basically for the analysis of aircraft structures. Thereafter the usefulness of the method for various engineering problems was recognized. Over the years the finite element technique has been so well developed that today it is considered one of the best methods for solving a wide variety of practical problems efficiently.
One of the main reasons for the popularity of the method in different fields of engineering is that once a general computer program is written, it can be used for the solution of any problem simply by changing the input data.
In the FEM the actual problem is replaced by a simpler one, so we are able to find only an approximate solution rather than the exact solution. However, in many practical problems the existing mathematical tools cannot provide even an approximate solution, and in the absence of any other convenient method we therefore prefer the FEM.
The digital computer provided a rapid means of performing the many calculations involved in FEA. Along with the development of high-speed computers, the application of the FEM has progressed at a very impressive rate.
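
To make the idea of subdividing a system into simple elements concrete, here is a small Python/NumPy sketch of the classic two-node bar element: an axially loaded bar is split into linear elements, the element stiffness matrices k = (AE/L)[[1,-1],[-1,1]] are assembled, and the resulting linear system is solved for the nodal displacements. The bar dimensions and load are arbitrary illustration values (NumPy is required).

# 1-D finite element sketch: axial bar fixed at x = 0, point load P at the free end.
import numpy as np

E = 210e9          # Young's modulus, Pa (steel, assumed)
A = 1e-4           # cross-sectional area, m^2 (assumed)
L = 1.0            # bar length, m
P = 10e3           # end load, N
n_elem = 4         # number of two-node linear elements
n_node = n_elem + 1
le = L / n_elem    # element length

K = np.zeros((n_node, n_node))
ke = (A * E / le) * np.array([[1.0, -1.0],
                              [-1.0, 1.0]])   # element stiffness matrix

# Assemble the global stiffness matrix from identical element matrices
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += ke

F = np.zeros(n_node)
F[-1] = P                          # point load at the free end

# Apply the fixed support at node 0 by eliminating its row and column
u = np.zeros(n_node)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

print("nodal displacements (m):", u)
print("exact tip displacement PL/AE =", P * L / (A * E))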




GENERALLY USED COMMANDS IN ANSYS:

• AADD: Adds separate areas to create a single area.
• VADD: Adds separate volumes to create a single volume.
• ANTYPE: Specifies the analysis type and restart status.
• AOFFST: Generates an area, offset from a given area.
• APLOT: Displays the selected areas.
• VPLOT: Displays the selected volumes.
• KPLOT: Displays the selected key-points.
• AL: Generates an area bounded by previously defined lines.
• LSTR: Defines a straight line irrespective of the active co-ordinate system.
• LGEN: Generates additional lines from a pattern of lines.
• VGEN: Generates additional volumes from a pattern of volumes.
• KGEN: Generates additional key-points from a pattern of key-points.
• LSYMM: Generates lines from a line pattern by symmetry reflection.
• VSYMM: Generates volumes from a volume pattern by symmetry reflection.
• KDELE: Deletes unmeshed key-points.
• LDELE: Deletes unmeshed lines.
• ADELE: Deletes unmeshed areas.
• VDELE: Deletes unmeshed volumes.
• VDRAG: Generates volumes by dragging an area pattern along a path.
• VROTAT: Generates cylindrical volumes by rotating an area pattern about an axis.
• LSBL: Subtracts lines from lines.
• LDIV: Divides a single line into two or more lines.
• LFILLT: Generates a fillet line between two intersecting lines.
• LARC: Defines a circular arc.


BASIC ELEMENT SHAPE:
Mostly, the choice of the type of element is dictated by the geometry of the body and the number of independent spatial co-ordinates necessary to describe the system. The element may be one-, two- or three-dimensional.
When the geometry, material properties and other parameters (stress, displacement, pressure, temperature) can be described in terms of only one spatial co-ordinate, we can use one-dimensional elements. Although such an element has a cross-sectional area, it is generally shown schematically as a line segment. When the configuration and other details of the problem can be described in terms of two independent spatial co-ordinates, we can use two-dimensional elements. The basic element for two-dimensional analysis is the triangular element; rectangular, parallelogram-shaped and quadrilateral elements (combinations of two or four triangular elements) can also be used.
If the geometry, material properties and other parameters of the body can be described by three independent spatial co-ordinates, we can idealize the body using three-dimensional elements. The tetrahedron is the basic three-dimensional element; the hexahedron can also be used advantageously.
Problems that possess axial symmetry, such as pistons, storage tanks, valves and rocket nozzles, fall into a separate, axisymmetric category. For the discretization of problems involving curved geometries, finite elements with curved sides are useful; the ability to model curved boundaries has been made possible by the addition of mid-side nodes.
Finite elements with straight sides are known as linear elements, while those with curved sides are called higher-order elements.
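
The difference between linear and higher-order elements is easiest to see in their shape functions. The short Python sketch below evaluates the standard two-node linear and three-node quadratic shape functions of a 1-D element in the natural coordinate xi in [-1, 1]; it is only an illustration of the interpolation, not tied to any particular FEA package.

# Shape functions of a 1-D element in the natural coordinate xi in [-1, 1].
def linear_shape(xi):
    """Two-node (linear) element: straight-sided interpolation."""
    return [0.5 * (1 - xi), 0.5 * (1 + xi)]

def quadratic_shape(xi):
    """Three-node (quadratic) element with a mid-side node at xi = 0."""
    return [0.5 * xi * (xi - 1), (1 - xi) * (1 + xi), 0.5 * xi * (xi + 1)]

if __name__ == "__main__":
    for xi in (-1.0, 0.0, 0.5, 1.0):
        print(f"xi={xi:+.1f}  linear={linear_shape(xi)}  quadratic={quadratic_shape(xi)}")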


APPLICATIONS OF FEA IN VARIOUS FIELDS:

FEA is applicable to three major categories of boundary value problems:
1) Steady-state, equilibrium or time-independent problems;
2) Eigenvalue problems;
3) Propagation problems.

APPLICATIONS:
• Mechanical design
• Aircraft structures
• Heat conduction
• Hydraulics and water resources engineering
• Nuclear engineering
• Biomedical engineering
• Civil engineering structures

Design of crankshaft

DESIGN OF CRANKSHAFT:

The crankshaft is the most complicated and highly stressed engine part, subjected to cyclic loads due to gas pressure, inertial forces and their couples. The effect of these forces and moments causes considerable torsional, bending and tension-compression stresses in the crankshaft material. Apart from this, periodically varying moments cause torsional vibration of the shaft, with resulting additional torsional stresses.
Therefore, because of the most complicated and severe operating conditions of the crankshaft, high and diverse requirements are imposed on the materials used for its fabrication. The crankshaft material has to feature high strength and toughness, high resistance to wear and fatigue, resistance to impact loads, and hardness. Such properties are possessed by properly machined carbon and alloy steels and also by high-duty cast iron. Crankshafts of the Soviet-made automobile and tractor engines are made of steels 40, 45, 45T2, 50, of a special cast iron, and those for augmented engines of high-alloy steels, grades 18XHBA, 40XHMA and others.
The intricate shape of the crankshaft, the variety of forces and moments loading it (changes in which depend on the rigidity of the crankshaft and its bearings), and some other causes do not allow the crankshaft strength to be computed precisely. In view of this, various approximate methods are used which allow us to obtain conventional stresses and safety factors for individual elements of a crankshaft. The most popular design scheme of the crankshaft is that of a simply supported beam with one or two spans between the supports (Fig. 13.1).
When designing a crankshaft, we assume that:
a crank (or two cranks) is freely supported on its supports;
the supports and the force application points lie in the centre planes of the crankpins and journals;
the entire span (one or two) between the supports represents an ideally rigid beam.
The crankshaft is generally designed for nominal operation (n = nN), taking into account the action of the following forces and moments:

1) Kp,th = K + KR = K + KR.c.r + KR.c are the forces acting on the crankshaft throw by the crank, neglecting counterweights, where K = P cos(φ + β)/cos β is the total force directed along the crank radius; KR = -mR·R·ω² is the inertial force of the rotating masses; KR.c.r = -mc.r.c·R·ω² is the inertial force of the rotating mass of the connecting rod; KR.c = -mc·R·ω² is the inertial force of the rotating mass of the crank;

2) ZΣ = Kp,th + 2Pcw is the total force acting in the crank plane, where
Pcw = +mcw·r·ω² is the centrifugal inertial force of the counterweight located on the web extension;

3) T is the tangential force acting perpendicular to the crank plane;
4) Z'Σ = K'p,th + 2P'cw are the support reactions to the total forces acting in the crank plane,
where K'p,th = -0.5Kp,th and Pcw = -2P'cw;

5) T' = -0.5T are the support reactions in the tangential plane, perpendicular to the crank plane;

6) Mm.j.i is the accumulated (running on) torque transmitted to the design throw from the crankshaft nose;

7) Mt.c = TR is the torque produced by the tangential force;

8) Mm.j.(i+1) = Mm.j.i + Mt.c is the diminishing (running off) torque transmitted by the design throw to the next throw.

The basic relations between the crankshaft elements needed for the design check are given in Table 13.1.

Table 13.1 (diesel engines, in-line):
l/B = 1.25-1.30;  dc.p/B = 0.64-0.75;  lc.p/B* = 0.7-1.0;  dm.j/B = 0.70-0.90;  lm.j/B** = 0.45-0.60 and 0.75-0.85

[* B (D) is the engine cylinder bore diameter; lc.p is the full length of the crankpin including fillets.
** The data are for the intermediate and the outer (or centre) main journals, respectively.]

The dimensions of the crankpins and journals are chosen bearing in mind the required shaft strength and rigidity and the permissible values of unit area pressure exerted on the bearings. Reducing the length of the crankpins and journals and increasing their diameters add to the crankshaft rigidity and decrease the overall dimensions and weight of the engine. Crankpin-and-journal overlapping (dm.j + dc.p > 2R) also adds to the rigidity of the crankshaft and the strength of the webs.
In order to avoid heavy concentration of stresses, the crankshaft fillet radius should not be less than 2 to 3 mm. In practical design it is taken as 0.035 to 0.080 of the journal or crankpin diameter, respectively. Maximum stress concentration occurs when the fillets of the crankpin and the journal lie in one plane.

According to statistical data, the web width of the crankshaft in automobile and tractor engines varies within (1.0 to 1.25)B for carburettor engines and (1.05 to 1.30)B for diesel engines, while the web thickness varies within (0.20 to 0.22)B and (0.24 to 0.27)B, respectively.
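
As a quick illustration of how these statistical ratios are used at the preliminary design stage, the Python sketch below scales the main crankshaft dimensions from an assumed cylinder bore using mid-range values from Table 13.1 and the web proportions quoted above; the bore value is an assumption for illustration only.

# Preliminary crankshaft proportions for an in-line diesel from bore B (assumed).
B = 0.110   # cylinder bore, m (assumed example value)

ratios = {                                   # mid-range values of the ratios
    "crankpin diameter d_c.p":       0.70,   # of (0.64-0.75) B
    "crankpin length l_c.p":         0.85,   # of (0.7-1.0) B
    "main journal diameter d_m.j":   0.80,   # of (0.70-0.90) B
    "main journal length l_m.j":     0.52,   # of (0.45-0.60) B (intermediate)
    "web width b":                   1.17,   # of (1.05-1.30) B (diesel)
    "web thickness h":               0.25,   # of (0.24-0.27) B (diesel)
}

for name, k in ratios.items():
    print(f"{name:32s} = {k:4.2f} * B = {k * B * 1000:6.1f} mm")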

UNIT AREA PRESSURE ON CRANKPINS AND JOURNALS:
The value of the unit area pressure on the working surface of a crankpin or a main journal determines the conditions under which the bearing operates and, in the long run, its service life. To prevent the lubricating oil film from being squeezed out, damage to the whitemetal, and premature wear of the crankshaft journals and crankpins, the check is made on the basis of the average and maximum resultants of all the forces loading the crankpins and journals.
The maximum (Rm.j max and Rc.p max) and mean (Rm.j.m and Rc.p.m) values of the resulting forces are determined from the developed diagrams of the loads on the crankpins and journals.

The mean unit area pressure (in MPa) is:
On the crankpin
Kc.p.m = Rc.p.m / (dc.p l’c.p)

On the main journal
Km.j.m = Rm.j.m / (dm.j l’m.j) or
Km.j.m = Rcw m.j.m / (dm.j l’m.j)

Where Rc.p.m and Rm.j.m are the resultant forces acting on the crankpin and journal, respectively, MN; Rcw m.j.m is the resultant force acting on the main journal when the use is made of counterweights, MN; dc.p and dm.j are the diameters of the crankpin and main journal, respectively, m; l’c.p and l’m.j are the working width of the crankpin and the main journal shells, respectively, m.
The value of the mean unit area pressure attains the following values:

Diesel engines ……………….. 6-16 MPa

The maximum pressure on the crankpins and journals is determined by the similar formulae due to the action of the maximum resultant forces Rc.p max, Rm.j max or Rcwm.j max. The values of maximum unit area pressures on crankpins and journals Kmax (in MPa) vary within the following limits:

Diesel engine………………….20-42
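
A direct way to apply these formulae is shown in the Python sketch below: it computes the mean and maximum unit pressures on a crankpin from assumed resultant forces and dimensions and compares them with the limits quoted above. All numerical inputs are invented for illustration.

# Unit area pressure check on a crankpin (all input values assumed).
d_cp   = 0.070    # crankpin diameter d_c.p, m
l_cp_w = 0.030    # working width of the crankpin shell l'_c.p, m
R_mean = 0.018    # mean resultant force R_c.p.m, MN
R_max  = 0.055    # maximum resultant force R_c.p.max, MN

k_mean = R_mean / (d_cp * l_cp_w)     # K_c.p.m, MPa (MN/m^2)
k_max  = R_max  / (d_cp * l_cp_w)     # K_c.p.max, MPa

print(f"mean unit pressure    K_c.p.m   = {k_mean:5.1f} MPa  (diesel limit 6-16 MPa)")
print(f"maximum unit pressure K_c.p.max = {k_max:5.1f} MPa  (diesel limit 20-42 MPa)")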

DESIGN OF JOURNALS AND CRANKPINS:
DESIGN OF MAIN JOURNALS:
The main bearing journals are computed only for torsion. The maximum and minimum twisting moments are determined by plotting diagrams (Fig. 13.4) or by compiling tables (Table 13.2) of the accumulated moments reaching the individual journals in sequence; to compile such a table, use is made of the dynamic analysis data.



Table 13.2 lists, for crank angles φ = 0°, 10° (or 30°), and so on, the accumulated moments Mm.j2, Mm.j3, ..., Mm.j,i, Mm.j,(i+1) on the successive main journals.
The order of determining the accumulated (running-on) moments for in-line engines is shown in Fig. 13.2a.
The running-on moments and the torques of the individual cylinders are summed algebraically, following the engine firing order and starting with the first cylinder.
The maximum and minimum tangential stresses (in MPa) of the journal alternating cycle are:
τmax = Mm.j,i max / Wτ m.j        (13.3)
τmin = Mm.j,i min / Wτ m.j        (13.4)

where Wτ m.j = (π/16)·d³m.j·[1 - (δm.j/dm.j)⁴] is the journal moment of resistance to torsion, m³; dm.j and δm.j are the journal outer and inner diameters, respectively.
With max and min known, we determine the safety factor of the main bearing journal. An effective factor of stress concentration for the design is taken with allowance for an oil hole in the main journal. For rough computation we may assume K / (s ss ) = 2.5.
The safety factors of main bearing journals have the following values:
Unsupercharged diesel engine…………………...4-5
Supercharged diesel engine………………………2-4.
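
The arithmetic of equations (13.3)-(13.4) is straightforward, as the Python sketch below shows for an assumed journal geometry and assumed extreme accumulated moments; the stress amplitude and mean stress that feed the safety-factor calculation are also printed. All input numbers are invented for illustration.

# Main journal torsional stresses per equations (13.3)-(13.4); inputs assumed.
import math

d_mj     = 0.080     # journal outer diameter d_m.j, m
delta_mj = 0.030     # journal inner (bore) diameter delta_m.j, m
M_max    = 900.0     # maximum accumulated moment M_m.j,i max, N*m
M_min    = -350.0    # minimum accumulated moment M_m.j,i min, N*m

# Moment of resistance to torsion of the hollow journal, m^3
W_tau = (math.pi / 16.0) * d_mj**3 * (1.0 - (delta_mj / d_mj)**4)

tau_max = M_max / W_tau / 1e6         # MPa
tau_min = M_min / W_tau / 1e6         # MPa
tau_a   = 0.5 * (tau_max - tau_min)   # stress amplitude for the fatigue check
tau_m   = 0.5 * (tau_max + tau_min)   # mean stress

print(f"W_tau   = {W_tau*1e6:7.1f} cm^3")
print(f"tau_max = {tau_max:6.2f} MPa, tau_min = {tau_min:6.2f} MPa")
print(f"tau_a   = {tau_a:6.2f} MPa, tau_m   = {tau_m:6.2f} MPa")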

Design of crankpin:
Crankpins are computed to determine their bending and torsion stresses. Torsion of a crankpin occurs under the effect of the running-on moment Mc.p,i. Its bending is caused by bending moments acting in the crank plane, Mz, and in the perpendicular plane, MT. Since the maximum values of the twisting and bending moments do not coincide in time, the crankpin safety factors for twisting and bending are determined separately and then combined to give the total safety margin.
The twisting moment acting on the ith crankpin is:
for a one-span crankshaft (see Fig. 13.1 a and b)
Mc.p,i = Mm.j,i - T'iR

for a two-span crankshaft (see Fig. 13.1 c and d)
Mc.p,i = Mm.j,i - ΣT'iR

To determine the most loaded crankpin, a diagram is plotted (see fig. 13.5) or a table is compiled (Table 13.3) showing accumulated moments for each crankpin.
The associated values of Mm.j,i are transferred into Table 13.3 from Table 13.2 covering the accumulated moments, while the values of T'i or ΣT'i are determined from Table 9.6 or 9.15 of the dynamic analysis.
The values of the maximum Mc.p,i max and minimum Mc.p,i min twisting moments for the most loaded crankpin are determined from the data of Table 13.3. The extreme tangential stresses of the cycle (in MPa) are:

Table 13.3 lists, for crank angles φ = 0°, 30°, and so on: for the 1st crankpin, Mc.p1 = -T'1R; for the 2nd crankpin, Mm.j2, T'2R and Mc.p2 = Mm.j2 - T'2R; and for the ith crankpin, Mm.j,i, ΣT'iR and Mc.p,i = Mm.j,i - ΣT'iR.

τmax = Mc.p,i max / Wτ c.p        (13.5)
τmin = Mc.p,i min / Wτ c.p        (13.6)

where Wτ c.p = (π/16)·d³c.p·[1 - (δc.p/dc.p)⁴] is the moment of resistance to crankpin torsion, m³; dc.p and δc.p are the outer and inner diameters of the crankpin, respectively, m.
The safety factor  is determined in the same way as in the case of the main journal, bearing in the mind the presence of stress concentration due to an oil hole.
Crankpin bending moments are usually determined by a table method (Table 13.4).

Table 13.4 lists, for crank angles φ = 0°, 30°, and so on, the columns: T', MT, MT·sin φo, K'p,th, Z', Z'·l/2, Mz, Mz·cos φo and Mφo.

The bending moment (Nm) acting on the crankpin in a plane perpendicular to the crank plane
MT = T’ l/2 (13.7)

Where l=(lm.j +lc.p +2h) is the center to center distance of the main journals, m.
The bending moment (Nm) acting on the crankpin in the crank plane is

Mz = Z'·l/2 + Pcw·a        (13.8)

where a = 0.5(lc.p + h), m; Z' = K'p,th + P'cw, Pa.

The values of T’ and k’p.th are determined against Table 9.6 of the dynamic analysis and entered in Table 13.4.
The total bending moment is
Mb = √(MT² + Mz²)        (13.9)
Since the most severe stresses in a crankpin occur at the lip of the oil hole, the general practice is to determine the bending moment acting in the plane of the oil-hole axis:
Mφo = MT·sin φo - Mz·cos φo        (13.10)
where φo is the angle between the axes of the crank and of the oil hole, usually located in the centre of the least loaded surface of the crankpin. Angle φo is usually determined from wear diagrams.
A positive moment Mφo generally causes compression at the lip of the oil hole; tension is caused by a negative moment Mφo.
The maximum and minimum values of Mφo are determined from Table 13.4.
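
Equations (13.7)-(13.10) are easy to evaluate once the dynamic-analysis values are available. The Python sketch below does so for one crank angle with invented input values, purely to show how MT, Mz, Mb and the oil-hole moment Mφo are obtained.

# Crankpin bending moments per equations (13.7)-(13.10); all inputs assumed.
import math

l     = 0.120     # centre-to-centre distance of main journals, m
a     = 0.040     # 0.5*(l_c.p + h), m
T_r   = 4.0e3     # support reaction T' in the tangential plane, N
Z_r   = 6.5e3     # Z' = K'_p,th + P'_cw in the crank plane, N
P_cw  = 2.0e3     # counterweight centrifugal force, N
phi_o = math.radians(60.0)   # oil-hole angle relative to the crank axis

M_T = T_r * l / 2.0                      # (13.7) moment perpendicular to crank plane
M_z = Z_r * l / 2.0 + P_cw * a           # (13.8) moment in the crank plane
M_b = math.hypot(M_T, M_z)               # (13.9) total bending moment
M_o = M_T * math.sin(phi_o) - M_z * math.cos(phi_o)   # (13.10) at the oil-hole lip

print(f"M_T = {M_T:.1f} N*m, M_z = {M_z:.1f} N*m")
print(f"M_b = {M_b:.1f} N*m, M_phi_o = {M_o:.1f} N*m")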

DESIGN OF CRANKWEBS:

The crankshaft webs are loaded by complex alternating stresses:
tangential stresses due to torsion, and normal stresses due to bending and push-pull. Maximum stresses occur where the crankpin fillet joins the crankweb (section A-A, Fig. 13.1b).
Tangential torsion stresses are caused by the twisting moment

Mt.w = T'·0.5(lm.j + h)        (13.14)

The values of T'max and T'min are determined from Table 13.4. The maximum and minimum tangential stresses are determined by the formulae:

τmax = Mt.w max / Wτw
τmin = Mt.w min / Wτw        (13.15)

where Wτw = β·b·h² is the moment of resistance to twisting of the rectangular section of the web. The value of the factor β is chosen depending on the ratio of the width b of the web design section to its thickness h:

b/h:  1      1.5    1.75   2.0    2.5    3.0    4.0    5.0    10.0   ∞
β:    0.208  0.231  0.239  0.246  0.258  0.267  0.282  0.292  0.312  0.333

The torsion safety factor nτ of the web and the factors kτ, εs and εss are determined by the formulae given earlier.
Normal bending and push-pull stresses are caused by the bending moment Mb.w, Nm (neglecting the bending that causes minute stresses in a plane perpendicular to the crank plane), and by the push or pull force Pw, N:

Mb.w = 0.25(K + KR + 2Pcw)·lm.j        (13.16)
Pw = 0.5(K + KR)        (13.17)

Extreme values of K are determined from the dynamic analysis table (KR and Pcw are constant), and the maximum and minimum normal stresses are determined by the equations
σmax = Mb.w max / Wσw + Pw max / Fw        (13.18)

σmin = Mb.w min / Wσw + Pw min / Fw        (13.19)

where Wσw = b·h²/6 is the moment of resistance of the web to bending and Fw = b·h is the area of design section A-A of the web.

The web safety factor should be:

Automobile engines ………………………………... not less than 2.0-3.0
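
The web check of equations (13.14)-(13.19) can likewise be scripted; the Python sketch below runs through it for one assumed web geometry and assumed force extremes, interpolating the β coefficient from the table above. Every input number is invented for illustration.

# Crankweb stress check per equations (13.14)-(13.19); all inputs assumed.
import bisect

# Torsion coefficient beta as a function of b/h (table above)
BH   = [1.0, 1.5, 1.75, 2.0, 2.5, 3.0, 4.0, 5.0, 10.0]
BETA = [0.208, 0.231, 0.239, 0.246, 0.258, 0.267, 0.282, 0.292, 0.312]

def beta(b_over_h):
    """Linear interpolation in the table; the b/h -> infinity value beyond it."""
    if b_over_h <= BH[0]:
        return BETA[0]
    if b_over_h > BH[-1]:
        return 0.333
    i = bisect.bisect_left(BH, b_over_h)
    if BH[i] == b_over_h:
        return BETA[i]
    x0, x1, y0, y1 = BH[i - 1], BH[i], BETA[i - 1], BETA[i]
    return y0 + (y1 - y0) * (b_over_h - x0) / (x1 - x0)

b, h  = 0.120, 0.026           # web width and thickness, m
l_mj  = 0.050                  # main journal length, m
T_max, T_min = 5.0e3, -2.0e3   # extreme values of T', N
K_max, K_min = 6.0e3, -14.0e3  # extreme values of K from the dynamic analysis, N
K_R   = -3.0e3                 # rotating-mass inertial force (constant), N
P_cw  = 2.5e3                  # counterweight force (constant), N

# Torsion, equations (13.14)-(13.15)
W_tau_w  = beta(b / h) * b * h**2
M_tw_max = T_max * 0.5 * (l_mj + h)
M_tw_min = T_min * 0.5 * (l_mj + h)
tau_max, tau_min = M_tw_max / W_tau_w / 1e6, M_tw_min / W_tau_w / 1e6   # MPa

# Bending and push-pull, equations (13.16)-(13.19)
W_sigma_w, F_w = b * h**2 / 6.0, b * h

def sigma(K):
    M_bw = 0.25 * (K + K_R + 2 * P_cw) * l_mj
    P_w  = 0.5 * (K + K_R)
    return (M_bw / W_sigma_w + P_w / F_w) / 1e6   # MPa

print(f"tau_max = {tau_max:.1f} MPa, tau_min = {tau_min:.1f} MPa")
print(f"sigma_max = {sigma(K_max):.1f} MPa, sigma_min = {sigma(K_min):.1f} MPa")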

Saturday, September 18, 2010

Effective employee recognition

Tips for Effective Employee Recognition
Prioritize employee recognition and the organization can ensure a positive, productive, innovative organizational climate. Provide employee recognition to say “thank you” and to encourage more of the actions and thinking that will make the organization successful. People who feel appreciated are more positive about themselves and their ability to contribute. People with positive self-esteem are potentially your best employees. These beliefs about employee recognition are common among employers, even if they are not commonly acted on. Why, then, is employee recognition so closely guarded in many organizations?

Time is an often-stated reason and admittedly, employee recognition does take time. Employers also start out with all of the best intentions when they seek to recognize employee performance. They often find their efforts turn into an opportunity for employee complaining, jealousy and dissatisfaction. With these experiences, many employers are hesitant to provide employee recognition.

Many experiences show that employee recognition is scarce because of a combination of factors. Organizations don’t know how to provide it effectively, so they have bad experiences when they do. They assume “one size fits all” when they provide employee recognition. Finally, employers think too narrowly about what people will find rewarding and recognizing. The guidelines and ideas that follow will help you walk the slippery path of employee recognition effectively and avoid potential problems while recognizing people in the workplace.

Guidelines for Effective Employee Recognition
Decide what the organization wants to achieve through its employee recognition efforts. Many organizations use a scatter approach to employee recognition: they put a lot of employee recognition out there and hope that some efforts will stick and create the results they want. Or they recognize so infrequently that employee recognition becomes a downer for the many when the infrequent few are recognized.

Instead, create goals and action plans that recognize the actions, behaviors, approaches, and accomplishments the organization wants to foster and reward. Establish employee recognition opportunities that emphasize and reinforce these sought-after qualities and behaviors. For example, if you need to increase attendance, hand out a three-part form during the Monday morning staff meeting. The written note thanks employees who had perfect attendance that week. The employee keeps one part; the second is saved in the personnel file; the third is placed in a monthly drawing for gift certificates.

Fairness, clarity, and consistency are important
People need to see that each person who makes the same or a similar contribution has an equal likelihood of receiving recognition for their efforts. It is recommended that for providing employee recognition, organizations establish criteria for what makes a person eligible for the employee recognition. Anyone who meets the criteria is then recognized.

As an example, if people are recognized for exceeding a production or sales expectation, anyone who goes over the goal gets the glory. Recognizing only the highest performer will defeat or dissatisfy all other contributors, especially if the criteria are unclear or based on opinion.

For day-to-day employee recognition, you need to set guidelines so that leaders acknowledge equivalent and similar contributions. For example, each employee who stays after work to contribute ideas in a departmental improvement brainstorming session gets to have lunch with the department head.

This guideline is why an “employee of the month”-type program is most often unsuccessful. The criteria for results, and the fairness of those criteria, are not clear to people. So people complain about “brown-nosing points” and the boss’s “pet.” These programs cause discontent and dissension even when the organization’s intentions were positive.

As an additional example, it is important to recognize all the people who contributed to a success equally. A CEO may, for instance, announce employee recognition for major projects at the company holiday celebration, yet without fail miss the names of several people who contributed to the project’s success. With the opportunity for public recognition past, people invariably feel slighted by post-banquet thanks.

More Ways to Provide Effective Employee Recognition
Employee recognition approaches and content must also be inconsistent
Contradictory? No, not really. Organization wants to offer employee recognition that is consistently fair, but it also wants to make sure that employee recognition efforts do not become expectations or entitlements. As expectations, employee recognition efforts become entitlements. Bad news.

As an example, a company owner provided lunch for all staff every Friday to encourage team building and positive work relationships. All interested employees voluntarily attended the lunches. He was shocked when a group of employees asked him for reimbursement to cover the cost of the lunch on days they did not attend. The lunches had become an expected portion of their compensation and benefits package. Sincere recognition had turned into entitlement.

Inconsistency is encouraged in the type of employee recognition offered also. If employees are invited to lunch with the boss every time they work over-time, the lunch is an expectation. It is no longer a reward. Additionally, if a person does not receive the expected reward, it becomes a dissatisfier and negatively impacts the person’s attitude about work.

Be specific in telling the individual exactly why he is receiving the recognition
The purpose of feedback is to reinforce what organization would like to see the employee do more of; the purpose of employee recognition is the same. In fact, employee recognition is one of the most powerful forms of feedback that can be provided. While “you did a nice job today” is a positive comment, it lacks the power of, “the report had a significant impact on the committee’s decision. You did an excellent job of highlighting the key points and information we needed to weigh before deciding. Because of your work, we’ll be able to cut six percent of the budget with no layoffs.”

Offer employee recognition as close to the event as possible
When a person performs positively, provide recognition immediately. Likely the employee is already feeling good about her performance, and timely recognition will enhance the positive feelings. This, in turn, positively affects the employee’s confidence in her ability to do well in the organization.

Specific Ideas for Employee Recognition
Remember that employee recognition is situational
Each individual has a preference for what he finds rewarding and how that recognition is most effective for him. One person may enjoy public recognition at a staff meeting; another prefers a private note in their personnel file. The best way to determine what an employee finds rewarding is to ask.

Use the myriad opportunities for employee recognition that are available
In organizations, people place too much emphasis on money as the only form of employee recognition. While salary, bonuses and benefits are critical within the employee recognition and reward system - after all, most of us do work for money - think more broadly about opportunities to provide employee recognition. There are several categories of employee recognition that can be used to thank employees for their contribution.

Examples of items which can be used for Employee Recognition
Employee recognition is best approached creatively. While money is an important form of employee recognition, ideas for employee recognition are limited only by imagination. Use the following ideas as a starting point for providing employee recognition.

Money
• Base salary
• Bonuses
• Gift certificates
• Cash awards

Written Words
• Handwritten ‘Thank you’ notes
• A letter of appreciation in the employee file
• Handwritten cards to mark celebratory occasions
• Recognition posted on the employee bulletin board
• Contribution noted in the company newsletter

Positive Attention From Supervisory Staff
• Stop by an individual’s workstation or office to talk informally
• Provide frequent positive performance feedback – at least weekly
• Provide public praise at a staff meeting
• Take the employee out to lunch.

Encourage Employee Development
• Send people to conferences and seminars
• Ask people to present a summary of what they learned at a conference or seminar at a department meeting
• Work out a written employee development plan
• Make career development commitments and a schedule

The Work Itself
• Provide cross training opportunities
• Provide more of the kinds of work the employee likes and less of the work they do not like
• Provide opportunities for empowerment and self-management
• Ask the employee to represent the department at an important, external meeting
• Have the employee represent the department on an inter-departmental committee
• Provide opportunities for the employee to determine their own goals and direction
• Participation in idea-generation and decision making

Gifts
• Company logo merchandise such as shirts, hats, mugs, and jackets
• Gift certificates to local stores
• The opportunity to select items from a catalog
• The ability to exchange "positive points" for merchandise or entry into a drawing for merchandise

Symbols and Honors
• Framed or unframed certificates to hang on the wall or file
• Engraved plaques
• Larger work area or office
• More and better equipment
• Provide status symbols, whatever they are in your organization

Benefits
Make employee recognition a common practice, not a scarce occurrence, in the organization. With these ideas, you will have many more that will help you develop a work environment that fosters employee recognition and hence employee success.

Motivated employees do a better job of serving customers well. Happy customers buy more products and are committed to use services. More customers buying more products and services increase profitability and success. It's an endless circle. Hop on the employee recognition bandwagon to keep the circle spinning.