Thursday, 20 October 2011

Bond and Energy


Chemical bond

From Wikipedia, the free encyclopedia
A chemical bond is an attraction between atoms that allows the formation of chemical substances containing two or more atoms. The bond is caused by the electromagnetic force of attraction between opposite charges, either between electrons and nuclei or as the result of a dipole attraction. The strength of chemical bonds varies considerably; there are "strong bonds" such as covalent or ionic bonds and "weak bonds" such as dipole-dipole interactions, the London dispersion force and hydrogen bonding.
Since opposite charges attract via a simple electromagnetic force, the negatively charged electrons orbiting the nucleus and the positively charged protons in the nucleus attract each other. Also, an electron positioned between two nuclei will be attracted to both of them. Thus, the most stable configuration of nuclei and electrons is one in which the electrons spend more time between nuclei, than anywhere else in space. These electrons cause the nuclei to be attracted to each other, and this attraction results in the bond. However, this assembly cannot collapse to a size dictated by the volumes of these individual particles. Due to the matter wave nature of electrons and their smaller mass, they occupy a much larger amount of volume compared with the nuclei, and this volume occupied by the electrons keeps the atomic nuclei relatively far apart, as compared with the size of the nuclei themselves.
In general, strong chemical bonding is associated with the sharing or transfer of electrons between the participating atoms. The atoms in molecules, crystals, metals and diatomic gases— indeed most of the physical environment around us— are held together by chemical bonds, which dictate the structure of matter.
Examples of Lewis dot-style chemical bonds between carbon (C), hydrogen (H), and oxygen (O). Lewis dot depictions represent an early attempt to describe chemical bonding and are still widely used today.

Overview of main types of chemical bonds

In the simplest view of a so-called 'covalent' bond, one or more electrons (often a pair of electrons) are drawn into the space between the two atomic nuclei. Here the negatively charged electrons are attracted to the positive charges of both nuclei, instead of just their own. This overcomes the repulsion between the two positively charged nuclei, and this attraction holds the two nuclei in a fixed equilibrium configuration, even though they will still vibrate about the equilibrium position. In summary, covalent bonding involves a sharing of electrons in which the positively charged nuclei of two or more atoms simultaneously attract the negatively charged electrons that are being shared. In a polar covalent bond, one or more electrons are unequally shared between two nuclei.
In a simplified view of an ionic bond, the bonding electron is not shared at all, but transferred. In this type of bond, the outer atomic orbital of one atom has a vacancy which allows the addition of one or more electrons. These newly added electrons potentially occupy a lower energy state (effectively closer to more nuclear charge) than they experience in a different atom. Thus, one nucleus offers a more tightly bound position to an electron than does another nucleus, with the result that one atom may transfer an electron to the other. This transfer causes one atom to assume a net positive charge, and the other to assume a net negative charge. The bond then results from electrostatic attraction between the atoms, and the atoms become positively or negatively charged ions.
The third, less frequently discussed, type of strong bonding is the metallic bond.
All bonds can be explained by quantum theory, but, in practice, simplification rules allow chemists to predict the strength, directionality, and polarity of bonds. The octet rule and VSEPR theory are two examples. More sophisticated theories are valence bond theory, which includes orbital hybridization and resonance, and the linear combination of atomic orbitals molecular orbital method, which includes ligand field theory. Electrostatics is used to describe bond polarities and the effects they have on chemical substances.

History

Early speculations into the nature of the chemical bond, from as early as the 12th century, supposed that certain types of chemical species were joined by a type of chemical affinity. In 1704, Isaac Newton famously outlined his atomic bonding theory, in "Query 31" of his Opticks, whereby atoms attach to each other by some "force". Specifically, after acknowledging the various popular theories in vogue at the time, of how atoms were reasoned to attach to each other, i.e. "hooked atoms", "glued together by rest", or "stuck together by conspiring motions", Newton states that he would rather infer from their cohesion, that "particles attract one another by some force, which in immediate contact is exceedingly strong, at small distances performs the chemical operations, and reaches not far from the particles with any sensible effect."
In 1819, on the heels of the invention of the voltaic pile, Jöns Jakob Berzelius developed a theory of chemical combination stressing the electronegative and electropositive character of the combining atoms. By the mid 19th century, Edward Frankland, F.A. Kekulé, A.S. Couper, A.M. Butlerov, and Hermann Kolbe, building on the theory of radicals, developed the theory of valency, originally called "combining power", in which compounds were joined owing to an attraction of positive and negative poles. In 1916, chemist Gilbert N. Lewis developed the concept of the electron-pair bond, in which two atoms may share one to six electrons, thus forming the single electron bond, a single bond, a double bond, or a triple bond; in Lewis's own words, "An electron may form a part of the shell of two different atoms and cannot be said to belong to either one exclusively."[1]
That same year, Walther Kossel put forward a theory similar to Lewis's, only his model assumed complete transfers of electrons between atoms, and was thus a model of ionic bonding. Both Lewis and Kossel structured their bonding models on Abegg's rule (1904).
In 1927, the first mathematically complete quantum description of a simple chemical bond, i.e. that produced by one electron in the hydrogen molecular ion, H2+, was derived by the Danish physicist Øyvind Burrau.[2] This work showed that the quantum approach to chemical bonds could be fundamentally and quantitatively correct, but the mathematical methods used could not be extended to molecules containing more than one electron. A more practical, albeit less quantitative, approach was put forward in the same year by Walter Heitler and Fritz London. The Heitler-London method forms the basis of what is now called valence bond theory. In 1929, the linear combination of atomic orbitals molecular orbital method (LCAO) approximation was introduced by Sir John Lennard-Jones, who also suggested methods to derive the electronic structures of the F2 (fluorine) and O2 (oxygen) molecules from basic quantum principles. This molecular orbital theory represented a covalent bond as an orbital formed by combining the quantum mechanical Schrödinger atomic orbitals which had been hypothesized for electrons in single atoms. The equations for bonding electrons in multi-electron atoms could not be solved to mathematical perfection (i.e., analytically), but approximations for them still gave many good qualitative predictions and results. Most quantitative calculations in modern quantum chemistry use either valence bond or molecular orbital theory as a starting point, although a third approach, density functional theory, has become increasingly popular in recent years.
In 1935, H. H. James and A. S. Coolidge carried out a calculation on the dihydrogen molecule that, unlike all previous calculations, which used functions only of the distance of each electron from the atomic nucleus, used functions which also explicitly depended on the distance between the two electrons.[3] With up to 13 adjustable parameters they obtained a result very close to the experimental result for the dissociation energy. Later extensions have used up to 54 parameters and give excellent agreement with experiment. This calculation convinced the scientific community that quantum theory could give agreement with experiment. However, this approach has none of the physical pictures of the valence bond and molecular orbital theories and is difficult to extend to larger molecules.

Valence bond theory

Main article: Valence bond theory
In 1927, valence bond theory was formulated, arguing that a chemical bond forms when two valence electrons, in their respective atomic orbitals, act to hold two nuclei together because this arrangement lowers the energy of the system. Building on this theory, the chemist Linus Pauling published in 1931 what some consider one of the most important papers in the history of chemistry: "On the Nature of the Chemical Bond". In this paper, elaborating on the work of Lewis, the valence bond theory (VB) of Heitler and London, and his own earlier work, Pauling presented six rules for the shared electron bond, the first three of which were already generally known:
1. The electron-pair bond forms through the interaction of an unpaired electron on each of two atoms.
2. The spins of the electrons have to be opposed.
3. Once paired, the two electrons cannot take part in additional bonds.
His last three rules were new:
4. The electron-exchange terms for the bond involve only one wave function from each atom.
5. The available electrons in the lowest energy level form the strongest bonds.
6. Of two orbitals in an atom, the one that can overlap the most with an orbital from another atom will form the strongest bond, and this bond will tend to lie in the direction of the concentrated orbital.
Building on this article, Pauling's 1939 textbook The Nature of the Chemical Bond would become what some have called the "Bible" of modern chemistry. This book helped experimental chemists to understand the impact of quantum theory on chemistry. However, the later edition in 1959 failed to adequately address the problems that appeared to be better understood by molecular orbital theory. The impact of valence bond theory declined during the 1960s and 1970s as molecular orbital theory grew in usefulness as it was implemented in large digital computer programs. Since the 1980s, the more difficult problems of implementing valence bond theory in computer programs have largely been solved, and valence bond theory has seen a resurgence.

Comparison of valence bond and molecular orbital theory

In some respects valence bond theory is superior to molecular orbital theory. When applied to the simplest two-electron molecule, H2, valence bond theory, even at the level of the simplest Heitler-London approach, gives a much closer approximation to the bond energy, and it provides a much more accurate representation of the behavior of the electrons as chemical bonds are formed and broken. In contrast, simple molecular orbital theory predicts that the hydrogen molecule dissociates into a linear superposition of hydrogen atoms and positive and negative hydrogen ions, a completely unphysical result. This explains in part why the curve of total energy against interatomic distance for the valence bond method lies below the curve for the molecular orbital method at all distances, and most particularly so at large distances. This situation arises for all homonuclear diatomic molecules and is particularly a problem for F2, where the minimum energy of the curve with molecular orbital theory is still higher than the energy of two separate F atoms.
The concepts of hybridization are so versatile, and the variability in bonding in most organic compounds is so modest, that valence bond theory remains an integral part of the vocabulary of organic chemistry. However, the work of Friedrich Hund, Robert Mulliken, and Gerhard Herzberg showed that molecular orbital theory provided a more appropriate description of the spectroscopic, ionization and magnetic properties of molecules. The deficiencies of valence bond theory became apparent when hypervalent molecules (e.g. PF5) were explained without the use of d orbitals that were crucial to the bonding hybridisation scheme proposed for such molecules by Pauling. Metal complexes and electron deficient compounds (e.g. diborane) also appeared to be well described by molecular orbital theory, although valence bond descriptions have been made.
In the 1930s the two methods strongly competed until it was realised that they are both approximations to a better theory. If we take the simple valence bond structure and mix in all possible covalent and ionic structures arising from a particular set of atomic orbitals, we reach what is called the full configuration interaction wave function. If we take the simple molecular orbital description of the ground state and combine that function with the functions describing all possible excited states using unoccupied orbitals arising from the same set of atomic orbitals, we also reach the full configuration interaction wavefunction. It can be then seen that the simple molecular orbital approach gives too much weight to the ionic structures, while the simple valence bond approach gives too little. This can also be described as saying that the molecular orbital approach is too delocalised, while the valence bond approach is too localised.
The two approaches are now regarded as complementary, each providing its own insights into the problem of chemical bonding. Modern calculations in quantum chemistry usually start from (but ultimately go far beyond) a molecular orbital rather than a valence bond approach, not because of any intrinsic superiority in the former but rather because the MO approach is more readily adapted to numerical computations. However better valence bond programs are now available.

Bonds in chemical formulas

The fact that atoms and molecules are three-dimensional makes it difficult to use a single technique for indicating orbitals and bonds. In molecular formulas the chemical bonds (binding orbitals) between atoms are indicated in various ways according to the type of discussion. Sometimes they are neglected entirely. For example, in organic chemistry one is sometimes concerned only with the functional groups of the molecule. Thus, the molecular formula of ethanol (a compound in alcoholic beverages) may be written in conformational form, three-dimensional form, full two-dimensional form (indicating every bond with no three-dimensional directions), compressed two-dimensional form (CH3–CH2–OH), by separating the functional group from the rest of the molecule (C2H5OH), or by its atomic constituents (C2H6O), according to what is being discussed. Sometimes even the non-bonding valence shell electrons are marked (with their approximate two-dimensional directions), as in the Lewis symbol for elemental carbon, ·C·. Some chemists may also mark the respective orbitals, e.g. the hypothetical ethene−4 anion (\/C=C/\ −4), indicating the possibility of bond formation.

Strong chemical bonds

Typical bond lengths in pm and bond energies in kJ/mol.
Bond lengths can be converted to Å by division by 100 (1 Å = 100 pm).
Data taken from [1].

Bond     Length (pm)   Energy (kJ/mol)
H — Hydrogen
H–H      74            436
H–O      96            366
H–F      92            568
H–Cl     127           432
C — Carbon
C–H      109           413
C–C      154           348
C=C      134           614
C≡C      120           839
C–N      147           308
C–O      143           360
C–F      134           488
C–Cl     177           330
N — Nitrogen
N–H      101           391
N–N      145           170
N≡N      110           945
O — Oxygen
O–O      148           145
O=O      121           498
F, Cl, Br, I — Halogens
F–F      142           158
Cl–Cl    199           243
Br–H     141           366
Br–Br    228           193
I–H      161           298
I–I      267           151
Strong chemical bonds are the intramolecular forces which hold atoms together in molecules. A strong chemical bond is formed from the transfer or sharing of electrons between atomic centers and relies on the electrostatic attraction between the protons in nuclei and the electrons in the orbitals. These bonds typically involve integer numbers of transferred or shared electrons (a bond order of one represents one transferred electron or two shared electrons), but some systems have intermediate bond orders. An example is the organic molecule benzene, where the bond order is 1.5 for each carbon atom, meaning that it has 1.5 bonds (shares three electrons) with each one of its two neighbors.
The types of strong bond differ due to the difference in electronegativity of the constituent elements. A large difference in electronegativity leads to more polar (ionic) character in the bond.
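As an illustration of how the tabulated average bond energies are used, the enthalpy of a gas-phase reaction can be estimated as the energy required to break the reactant bonds minus the energy released in forming the product bonds. A minimal sketch in Python, using only values from the table above (the function and dictionary names are illustrative, not from any particular library):

```python
# Average bond energies (kJ/mol) and bond lengths (pm) from the table above.
BOND_ENERGY = {"H-H": 436, "F-F": 158, "H-F": 568}
BOND_LENGTH_PM = {"H-H": 74, "F-F": 142, "H-F": 92}

def reaction_enthalpy(bonds_broken, bonds_formed):
    """Estimate reaction enthalpy (kJ/mol): energy put in to break bonds
    minus energy released when new bonds form."""
    return (sum(BOND_ENERGY[b] for b in bonds_broken)
            - sum(BOND_ENERGY[b] for b in bonds_formed))

def pm_to_angstrom(length_pm):
    """Convert a bond length from picometres to angstroms (1 A = 100 pm)."""
    return length_pm / 100.0

# H2 + F2 -> 2 HF: break one H-H and one F-F bond, form two H-F bonds.
dH = reaction_enthalpy(["H-H", "F-F"], ["H-F", "H-F"])
# (436 + 158) - 2 * 568 = -542 kJ/mol
```

The negative sign indicates an exothermic reaction: forming the two very strong H–F bonds releases more energy than is consumed breaking the H–H and F–F bonds.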

Covalent bond

Main article: Covalent bond
Covalent bonding is a common type of bonding, in which the electronegativity difference between the bonded atoms is small or nonexistent. Bonds within most organic compounds are described as covalent. See sigma bonds and pi bonds for LCAO-description of such bonding.
A polar covalent bond is a covalent bond with a significant ionic character. This means that the electrons are closer to one of the atoms than the other, creating an imbalance of charge. Such bonds occur between two atoms with moderately different electronegativities and give rise to dipole-dipole interactions. The electronegativity difference between the bonded atoms ranges from 0.3 to 1.7.
A coordinate covalent bond is one where both bonding electrons are supplied by one of the atoms involved in the bond. These bonds give rise to Lewis acids and bases. Such bonding occurs in molecules such as the ammonium ion (NH4+) and is shown by an arrow pointing toward the Lewis acid. A non-polar covalent bond, by contrast, is one in which the electrons are shared roughly equally between the atoms; the electronegativity difference in these bonds ranges from 0 to 0.3.
Molecules which are formed primarily from non-polar covalent bonds are often immiscible in water or other polar solvents, but much more soluble in non-polar solvents such as hexane.

Ionic bond

Main article: Ionic bond
Ionic bonding is a type of electrostatic interaction between atoms which have a large electronegativity difference. There is no precise value that distinguishes ionic from covalent bonding, but a difference of electronegativity of over 1.7 is likely to be ionic, and a difference of less than 1.7 is likely to be covalent.[4] Ionic bonding leads to separate positive and negative ions. Ionic charges are commonly between −3e and +3e. Ionic bonding commonly occurs in metal salts such as sodium chloride (table salt). A typical feature of ionic bonds is that the species form into ionic crystals, in which no ion is specifically paired with any single other ion in a specific directional bond. Rather, each species of ion is surrounded by ions of the opposite charge, and the spacing between it and each of the oppositely charged ions near it is the same for all surrounding atoms of the same type. It is thus no longer possible to associate an ion with any specific other single ionized atom near it. This is a situation unlike that in covalent crystals, where covalent bonds between specific atoms are still discernible from the shorter distances between them, as measured with such techniques as X-ray diffraction.
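The electronegativity-difference thresholds quoted in this article (0.3 for the nonpolar/polar boundary and 1.7 for the covalent/ionic boundary) lend themselves to a simple classifier. A rough sketch in Python; the Pauling electronegativity values in the comments are approximate, and real bonds form a continuum rather than falling into sharp classes:

```python
def bond_type(delta_en):
    """Classify a bond by the electronegativity difference of its atoms,
    using the rough thresholds quoted in the text (0.3 and 1.7)."""
    if delta_en > 1.7:
        return "ionic"
    if delta_en > 0.3:
        return "polar covalent"
    return "nonpolar covalent"

# Approximate Pauling electronegativities: Na 0.93, Cl 3.16, O 3.44, H 2.20.
bond_type(3.16 - 0.93)  # Na-Cl: "ionic"
bond_type(3.44 - 2.20)  # O-H:   "polar covalent"
bond_type(2.20 - 2.20)  # H-H:   "nonpolar covalent"
```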
Ionic crystals may contain a mixture of covalent and ionic species, as for example salts of complex acids, such as sodium cyanide, NaCN. Many minerals are of this type. X-ray diffraction shows that in NaCN, for example, the bonds between the sodium cations (Na+) and the cyanide anions (CN−) are ionic, with no sodium ion associated with any particular cyanide. However, the bonds between the C and N atoms in cyanide are of the covalent type, so that each carbon atom is associated with just one nitrogen atom, to which it is physically much closer than it is to other carbons or nitrogens in a sodium cyanide crystal.
When such salts dissolve into water, the ionic bonds are typically broken by the interaction with water, but the covalent bonds continue to hold. In solution, the cyanide ions, still bound together as single CN− ions, move independently through the solution, as do the sodium ions, as Na+. These charged ions move apart because each of them is more strongly attracted to a number of water molecules than to the other ion. The attraction between ions and water molecules in such solutions is due to a weak dipole-dipole type of chemical bond.

One- and three-electron bonds

Bonds with one or three electrons can be found in radical species, which have an odd number of electrons. The simplest example of a 1-electron bond is found in the hydrogen molecular cation, H2+. One-electron bonds often have about half the bond energy of a 2-electron bond, and are therefore called "half bonds". However, there are exceptions: in the case of dilithium, the bond is actually stronger for the 1-electron Li2+ than for the 2-electron Li2. This exception can be explained in terms of hybridization and inner-shell effects.[5]
The simplest example of three-electron bonding can be found in the helium dimer cation, He2+, and can also be considered a "half bond" because, in molecular orbital terms, the third electron is in an anti-bonding orbital which cancels out half of the bond formed by the other two electrons. Another example of a molecule containing a 3-electron bond, in addition to two 2-electron bonds, is nitric oxide, NO. The oxygen molecule, O2 can also be regarded as having two 3-electron bonds and one 2-electron bond, which accounts for its paramagnetism and its formal bond order of 2.[6]
Molecules with odd-electron bonds are usually highly reactive. These types of bond are only stable between atoms with similar electronegativities.[6]

Bent bonds

Main article: Bent bond
Bent bonds, also known as banana bonds, are bonds in strained or otherwise sterically hindered molecules whose binding orbitals are forced into a banana-like form. Bent bonds are often more susceptible to reactions than ordinary bonds.

3c-2e and 3c-4e bonds

In three-center two-electron bonds ("3c–2e") three atoms share two electrons in bonding. This type of bonding occurs in electron deficient compounds like diborane. Each such bond (2 per molecule in diborane) contains a pair of electrons which connect the boron atoms to each other in a banana shape, with a proton (nucleus of a hydrogen atom) in the middle of the bond, sharing electrons with both boron atoms.
Three-center four-electron bonds ("3c–4e") also exist which explain the bonding in hypervalent molecules. In certain cluster compounds, so-called four-center two-electron bonds also have been postulated.
In certain conjugated π (pi) systems, such as benzene and other aromatic compounds (see below), and in conjugated network solids such as graphite, the electrons in the conjugated system of π-bonds are spread over as many nuclear centers as exist in the molecule, or the network.

 Aromatic bond

Main article: Aromaticity
In organic chemistry, certain configurations of electrons and orbitals confer extra stability to a molecule. This occurs when π orbitals overlap and combine with others on different atomic centres, forming a long-range bond. For a molecule to be aromatic, it must obey Hückel's rule: the number of π electrons must fit the formula 4n + 2, where n is an integer. The bonds involved in the aromatic system all lie in one plane.
In benzene, the prototypical aromatic compound, 18 bonding electrons (12 σ and 6 π; the six π electrons satisfy Hückel's rule with n = 1) bind 6 carbon atoms together to form a planar ring structure. The bond "order" (average number of bonds) between neighboring carbon atoms may be said to be (18/6)/2 = 1.5, and in this case the bonds are all identical from the chemical point of view. They may sometimes be written as single bonds alternating with double bonds, but the view of all ring bonds as being equivalently about 1.5 bonds in strength is much closer to the truth.
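Hückel's rule and the bond-order arithmetic for benzene are easy to check mechanically. A small sketch in Python (the function names are illustrative):

```python
def is_huckel_aromatic(pi_electrons):
    """Hueckel's rule: a planar, fully conjugated ring system is aromatic
    when its pi-electron count equals 4n + 2 for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

def average_bond_order(bonding_electrons, ring_bonds):
    """Average bond order: total bonding electrons spread over the ring,
    divided by the number of ring bonds, at two electrons per full bond."""
    return (bonding_electrons / ring_bonds) / 2

is_huckel_aromatic(6)      # benzene's 6 pi electrons (n = 1): True
is_huckel_aromatic(4)      # cyclobutadiene's 4 pi electrons: False
average_bond_order(18, 6)  # benzene's 18 bonding electrons over 6 bonds: 1.5
```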
In the case of heterocyclic aromatics and substituted benzenes, the electronegativity differences between different parts of the ring may dominate the chemical behaviour of aromatic ring bonds, which otherwise are equivalent.

Metallic bond

Main article: Metallic bond
In a metallic bond, bonding electrons are delocalized over a lattice of atoms. By contrast, in ionic compounds, the locations of the binding electrons and their charges are static. The free movement, or delocalization, of bonding electrons leads to classical metallic properties such as shininess (surface light reflectivity), electrical and thermal conductivity, ductility, and high tensile strength.

Intermolecular bonding

Main article: Intermolecular Force
There are several basic types of bonds that can be formed between two or more (otherwise non-associated) molecules, ions or atoms. Intermolecular forces cause molecules to be attracted or repelled by each other. Often, these define some of the physical characteristics (such as the melting point) of a substance.
  • A large difference in electronegativity between two bonded atoms will cause a permanent charge separation, or dipole, in a molecule or ion. Two or more molecules or ions with permanent dipoles can interact in dipole-dipole interactions. The bonding electrons in a molecule or ion will, on average, be closer to the more electronegative atom than to the less electronegative one, giving rise to partial charges on each atom and causing electrostatic forces between molecules or ions.
  • A hydrogen bond is effectively a strong example of a permanent dipole. The large difference in electronegativity between hydrogen and any of fluorine, nitrogen and oxygen, coupled with their lone pairs of electrons, causes strong electrostatic forces between molecules. Hydrogen bonds are responsible for the high boiling points of water and ammonia with respect to their heavier analogues.
  • The London dispersion force arises due to instantaneous dipoles in neighbouring atoms. As the negative charge of the electron cloud is not uniform around the atom at every instant, there is always a transient charge imbalance. This small charge will induce a corresponding dipole in a nearby molecule, causing an attraction between the two. The electrons then move to another part of the electron cloud and the attraction is broken.

Summary: electrons in chemical bonds

In the (unrealistic) limit of "pure" ionic bonding, electrons are perfectly localized on one of the two atoms in the bond. Such bonds can be understood by classical physics. The forces between the atoms are characterized by isotropic continuum electrostatic potentials. Their magnitude is in simple proportion to the product of the ionic charges.
Covalent bonds are better understood by valence bond theory or molecular orbital theory. The properties of the atoms involved can be understood using concepts such as oxidation number. The electron density within a bond is not assigned to individual atoms, but is instead delocalized between atoms. In valence bond theory, the two electrons on the two atoms are coupled together, with the bond strength depending on the overlap between their orbitals. In molecular orbital theory, the linear combination of atomic orbitals (LCAO) helps describe the delocalized molecular orbital structures and energies based on the atomic orbitals of the atoms they came from. Unlike pure ionic bonds, covalent bonds may have directed anisotropic properties. These may have their own names, such as sigma bond and pi bond.
In the general case, atoms form bonds that are intermediates between ionic and covalent, depending on the relative electronegativity of the atoms involved. This type of bond is sometimes called polar covalent.





Energy

From Wikipedia, the free encyclopedia
Lightning is the electric breakdown of air by strong electric fields, which produce a force on charges. When these charges move through a distance, a flow of energy occurs. The electric potential energy in the atmosphere then is transformed into thermal energy, light, and sound, which are other forms of energy.
In physics, energy (Ancient Greek: ἐνέργεια energeia "activity, operation"[1]) is an indirectly observed quantity. It is often understood as the ability a physical system has to do work on other physical systems.[2][3] Since work is defined as a force acting through a distance (a length of space), energy is always equivalent to the ability to exert pulls or pushes against the basic forces of nature, along a path of a certain length.
The total energy contained in an object is identified with its mass, and energy, like mass, cannot be created or destroyed. When matter (ordinary material particles) is changed into energy (such as energy of motion, or into radiation), the mass of the system does not change through the transformation process. However, there may be mechanistic limits as to how much of the matter in an object may be changed into other types of energy and thus into work on other systems. Energy, like mass, is a scalar physical quantity. In the International System of Units (SI), energy is measured in joules, but in many fields other units, such as kilowatt-hours and kilocalories, are customary. All of these units translate to units of work, which is always defined in terms of forces and the distances that the forces act through.
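The unit relationships mentioned above are fixed conversion factors. A short sketch in Python (the dictionary name is illustrative; the kilocalorie here is the thermochemical calorie of 4.184 J, one of several slightly different definitions):

```python
# Joules per unit, for a few customary energy units.
JOULES_PER = {
    "J": 1.0,
    "kWh": 3.6e6,    # 1 kilowatt-hour = 1000 W * 3600 s
    "kcal": 4184.0,  # thermochemical kilocalorie
}

def to_joules(value, unit):
    """Convert an energy value in the given unit to joules."""
    return value * JOULES_PER[unit]

to_joules(1, "kWh")      # 3.6 million joules
to_joules(2000, "kcal")  # roughly a day's dietary energy, 8.368e6 J
```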
A system can transfer energy to another system by simply transferring matter to it (since matter is equivalent to energy, in accordance with its mass). However, when energy is transferred by means other than matter-transfer, the transfer produces changes in the second system, as a result of work done on it. This work manifests itself as the effect of force(s) applied through distances within the target system. For example, a system can emit energy to another by transferring (radiating) electromagnetic energy, but this creates forces upon the particles that absorb the radiation. Similarly, a system may transfer energy to another by physically impacting it, but in that case the energy of motion in an object, called kinetic energy, causes forces acting over distances (new energy) to appear in the object that is struck. Transfer of thermal energy by heat occurs by both of these mechanisms: heat can be transferred by electromagnetic radiation, or by physical contact in which direct particle-particle impacts transfer kinetic energy.
Energy may be stored in systems without being present as matter, or as kinetic or electromagnetic energy. Stored energy is created whenever a particle has been moved through a field it interacts with (requiring a force to do so), but the energy to accomplish this is stored as a new position of the particles in the field, a configuration that must be "held" or fixed by a different type of force (otherwise, the new configuration would resolve itself by the field pushing or pulling the particle back toward its previous position). This type of energy, "stored" by force-fields and by particles that have been forced into a new physical configuration through work done on them by another system, is referred to as potential energy. A simple example of potential energy is the work needed to lift an object in a gravity field up to a support. Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
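The lifting example above, and the claim that stored energy appears as extra mass, can both be made concrete with two textbook formulas: E = mgh for the work of lifting, and Δm = E/c² for the associated mass. A sketch in Python with illustrative numbers:

```python
g = 9.81      # standard gravitational acceleration, m/s^2
c = 2.998e8   # speed of light, m/s

def lift_energy(mass_kg, height_m):
    """Potential energy stored by lifting a mass through a height
    in a uniform gravity field: E = m * g * h (joules)."""
    return mass_kg * g * height_m

E = lift_energy(10.0, 2.0)   # lifting 10 kg by 2 m stores about 196 J
delta_m = E / c**2           # the system's mass grows by ~2.2e-15 kg,
                             # far too small to weigh, but nonzero
```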
Any form of energy may be transformed into another form. For example, all types of potential energy are converted into kinetic energy when the objects are given freedom to move to a different position (as, for example, when an object falls off a support). When energy is in a form other than thermal energy, it may be transformed with good or even perfect efficiency to any other type of energy, including electricity or production of new particles of matter. With thermal energy, however, there are often limits to the efficiency of the conversion to other forms of energy, as described by the second law of thermodynamics.
In all such energy transformation processes, the total energy remains the same, and a transfer of energy from one system to another, results in a loss to compensate for any gain. This principle, the conservation of energy, was first postulated in the early 19th century, and applies to any isolated system. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.[4]
Although the total energy of a system does not change with time, its value may depend on the frame of reference. For example, a seated passenger in a moving airplane has zero kinetic energy relative to the airplane, but non-zero kinetic energy (and higher total energy) relative to the Earth.

History

The word energy derives from the Greek ἐνέργεια energeia, which possibly appears for the first time in the work of Aristotle in the 4th century BCE.
The concept of energy emerged out of the idea of vis viva (living force), which Gottfried Leibniz defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, a view shared by Isaac Newton, although it would be more than a century until this was generally accepted. In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.[5] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". It was argued for some years whether energy was a substance (the caloric) or merely a physical quantity, such as momentum.
William Thomson (Lord Kelvin) amalgamated all of these laws into the laws of thermodynamics, which aided in the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan.
During a 1961 lecture[6] for undergraduate students at the California Institute of Technology, Richard Feynman, a celebrated physics teacher and Nobel Laureate, said this about the concept of energy:
There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law—it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.
Since 1918 it has been known that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. That is, energy is conserved because the laws of physics do not distinguish between different instants of time (see Noether's theorem).

Energy in various contexts

The concept of energy and its transformations is useful in explaining and predicting most natural phenomena. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often described by entropy (equal energy spread among all available degrees of freedom) considerations, as in practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
The concept of energy is widespread in all sciences.
  • In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/kT), that is, the probability of a molecule having energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
  • In biology, energy is an attribute of all biological systems from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or an organelle of a biological organism. Energy is thus often said to be stored by cells in the structures of molecules of substances such as carbohydrates (including sugars), lipids, and proteins, which release energy when reacted with oxygen in respiration. In human terms, the human equivalent (H-e) (human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80), i.e., 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300 watts; for an activity kept up all day, 150 watts is about the maximum.[7] The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.[8]
  • In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[9] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth.
  • In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).
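The Boltzmann population factor mentioned in the chemistry item above can be sketched numerically. The following Python snippet is only an illustration; the activation energy used is a hypothetical value, not one drawn from this article.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_factor(activation_energy_j, temperature_k):
    """Probability factor e^(-E/kT): the fraction of molecules with
    energy >= E at absolute temperature T."""
    return math.exp(-activation_energy_j / (K_B * temperature_k))

# Purely illustrative activation energy of ~0.5 eV per molecule
e_act = 0.5 * 1.602176634e-19  # joules

# The factor grows steeply with temperature (Arrhenius behavior)
assert 0 < boltzmann_factor(e_act, 300.0) < boltzmann_factor(e_act, 400.0) < 1
```

The steep exponential growth of this factor with temperature is exactly what the Arrhenius equation captures: modest heating can raise reaction rates by orders of magnitude.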
Energy transformations in the universe over time are characterized by various kinds of potential energy that have been available since the Big Bang, later being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available.
Familiar examples of such processes include nuclear decay, in which energy is released that was originally "stored" in heavy isotopes (such as uranium and thorium), by nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae, to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs. In a slower process, radioactive decay of these atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may be later released to active kinetic energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars created these atoms.
In another similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight. Such sunlight from our Sun may again be stored as gravitational potential energy after it strikes the Earth, as (for example) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives many weather phenomena, save those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into the high-energy compounds carbohydrates, lipids, and proteins. Plants also release oxygen during photosynthesis, which is utilized by living organisms as an electron acceptor, to release the energy of carbohydrates, lipids, and proteins. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark, in a forest fire, or it may be made available more slowly for animal or human metabolism, when these molecules are ingested, and catabolism is triggered by enzyme action.
Through all of these transformation chains, potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in a number of ways over time between releases, as more active energy. In all these events, one kind of energy is converted to other types of energy, including heat.

Distinction between energy and power

Although in everyday usage the terms energy and power are essentially synonyms, scientists and engineers distinguish between them. In its technical sense, power is not at all the same as energy, but is the rate at which energy is converted (or, equivalently, at which work is performed). Thus a hydroelectric plant, by allowing the water above the dam to pass through turbines, converts the water's potential energy into kinetic energy and ultimately into electric energy, whereas the amount of electric energy that is generated per unit of time is the electric power generated. Converting the same amount of energy in a shorter period of time corresponds to a greater power over that shorter time.
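The energy/power distinction amounts to a simple division, which can be sketched as follows (a minimal illustration, not part of the original article):

```python
def power_watts(energy_joules, seconds):
    """Average power is energy converted per unit time: P = E / t."""
    return energy_joules / seconds

# The same 3.6 MJ (one kilowatt-hour) delivered over different intervals:
assert power_watts(3.6e6, 3600) == 1000.0  # over one hour -> 1 kW
assert power_watts(3.6e6, 1800) == 2000.0  # half the time -> twice the power
```

The second assertion shows the point made above: identical energy, shorter interval, higher power.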

Conservation of energy

Energy is subject to the law of conservation of energy. According to this law, energy can neither be created (produced) nor destroyed by itself. It can only be transformed.
Most kinds of energy (with gravitational energy being a notable exception)[10] are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[6][11] Conservation of energy is the mathematical consequence of translational symmetry of time (that is, the indistinguishability of time intervals taken at different times)[12]; see Noether's theorem.
According to conservation of energy, the total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system.
This law is a fundamental principle of physics. It follows from the translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable.
This is because energy is the quantity which is canonically conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle: it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation; rather, it provides mathematical limits to which energy can in principle be defined and measured.
In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by
\Delta E \Delta t \ge \frac{\hbar}{2}
which is similar in form to the Heisenberg uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum and whose exchange with and between real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (which are simply the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in the Coulomb law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals forces and some other observable phenomena.
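The time-energy inequality above gives a lower bound ΔE ≥ ℏ/(2Δt). A small numerical sketch (illustrative only; the lifetimes are arbitrary example values):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_energy_uncertainty(delta_t_seconds):
    """Lower bound on the energy uncertainty for a time scale delta_t:
    Delta E >= hbar / (2 * Delta t)."""
    return HBAR / (2.0 * delta_t_seconds)

# A shorter time scale implies a larger minimum energy spread
assert min_energy_uncertainty(1e-15) > min_energy_uncertainty(1e-9)
```

This is why short-lived states (such as broad particle resonances) have poorly defined energies, while long-lived states have sharply defined ones.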

Applications of the concept of energy

Energy is subject to a strict global conservation law; that is, whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.[13]
  • The total energy of a system can be subdivided and classified in various ways. For example, it is sometimes convenient to distinguish potential energy (which is a function of coordinates only) from kinetic energy (which is a function of coordinate time derivatives only). It may also be convenient to distinguish gravitational energy, electric energy, thermal energy, and other forms. These classifications overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy.
  • The transfer of energy can take various forms; familiar examples include work, heat flow, and advection, as discussed below.
  • The word "energy" is also used outside of physics in many ways, which can lead to ambiguity and inconsistency. The vernacular terminology is not consistent with technical terminology. For example, while energy is always conserved (in the sense that the total energy does not change despite energy transformations), energy can be converted into a form, e.g., thermal energy, that cannot be utilized to perform work. When one talks about "conserving energy by driving less," one talks about conserving fossil fuels and preventing useful energy from being lost as heat. This usage of "conserve" differs from that of the law of conservation of energy.[11]
In classical physics energy is considered a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy-momentum 4-vector).[14] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).

Energy transfer

Because energy is strictly conserved and is also locally conserved (wherever it can be defined), it is important to remember that, by the definition of energy, the transfer of energy between the "system" and adjacent regions is work. A familiar example is mechanical work. In simple cases this is written as the following equation:
ΔE = W        (1)
if there are no other energy-transfer processes involved. Here E is the amount of energy transferred, and W  represents the work done on the system.
More generally, the energy transfer can be split into two categories:
ΔE = W + Q        (2)
where Q represents the heat flow into the system.
There are other ways in which an open system can gain or lose energy. In chemical systems, energy can be added to a system by introducing substances with different chemical potentials, which are then extracted (both of these processes are illustrated by fueling a car, a system that gains energy thereby without addition of either work or heat). Winding a clock would be adding energy to a mechanical system. These terms may be added to the above equation, or they can generally be subsumed into a quantity called the "energy addition term E", which refers to any type of energy carried over the surface of a control volume or system volume. Examples may be seen above, and many others can be imagined (for example, the kinetic energy of a stream of particles entering a system, or energy from a laser beam, which add to system energy without being either work done or heat added in the classic senses).
ΔE = W + Q + E        (3)
where E in this general equation represents other additional advected energy terms not covered by work done on a system or heat added to it.
Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
Epi + Eki = EpF + EkF        (4)
The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = (1/2)mv² (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal.
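Equation (4) can be checked numerically for an object in free fall. The masses and heights below are arbitrary illustrative values:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def total_energy(mass, height, speed):
    """E_total = E_p + E_k = m*g*h + (1/2)*m*v^2."""
    return mass * G * height + 0.5 * mass * speed ** 2

# A 2 kg object dropped from rest at 10 m: after falling to 5 m its speed
# satisfies v^2 = 2*g*(10 - 5), so the total energy is unchanged.
v = math.sqrt(2 * G * 5.0)
e_start = total_energy(2.0, 10.0, 0.0)  # all potential
e_mid = total_energy(2.0, 5.0, v)       # half potential, half kinetic
assert abs(e_start - e_mid) < 1e-9
```

As the object falls, Ep decreases and Ek increases by exactly the same amount, which is the content of equation (4).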

Energy and the laws of motion

In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.

The Hamiltonian

The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.[15]

The Lagrangian

Another energy-related concept is called the Lagrangian, after Joseph Louis Lagrange. This is even more fundamental than the Hamiltonian, and can be used to derive the equations of motion. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy.
Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).

Noether's Theorem

Noether's (first) theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law.
Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalization of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.

Energy and thermodynamics

Internal energy

Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.[16]

The laws of thermodynamics

According to the second law of thermodynamics, work can be totally converted into heat, but not vice versa. This is a mathematical consequence of statistical mechanics. The first law of thermodynamics simply asserts that energy is conserved,[17] and that heat is included as a form of energy transfer. A commonly used corollary of the first law is that for a "system" subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas), the differential change in energy of the system (with a gain in energy signified by a positive quantity) is given as the following equation:
\mathrm{d}E = T\mathrm{d}S - P\mathrm{d}V\,,
where the first term on the right is the heat transfer into the system, defined in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated), and the last term on the right hand side is identified as "work" done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it, and so the volume change, dV, is negative when work is done on the system). Although this equation is the standard textbook example of energy conservation in classical thermodynamics, it is highly specific: it ignores all chemical, electric, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat, and it contains a term that depends on temperature. The most general statement of the first law (i.e., conservation of energy) is valid even in situations in which temperature is undefinable.
Energy is sometimes expressed as the following equation:
\mathrm{d}E=\delta Q+\delta W\,,
which is unsatisfactory[11] because there cannot exist any thermodynamic state functions W or Q that are meaningful on the right hand side of this equation, except perhaps in trivial cases.
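Regardless of how it is written, the first law is a bookkeeping statement. A minimal sketch of that bookkeeping, assuming the sign convention that heat added to the system and work done on the system both count as positive:

```python
def internal_energy_change(heat_in_j, work_on_system_j):
    """First-law bookkeeping for a closed system:
    delta U = Q + W, with Q the heat added to the system and
    W the work done on it (sign convention assumed here)."""
    return heat_in_j + work_on_system_j

# Add 500 J of heat while the gas does 200 J of work on its surroundings,
# i.e., the work done ON the system is -200 J
assert internal_energy_change(500.0, -200.0) == 300.0
```

Textbooks differ on whether W denotes work done on or by the system; only the sign convention changes, never the conservation statement itself.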

Equipartition of energy

The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential. At two points in the oscillation cycle it is entirely kinetic, and at two other points it is entirely potential. Over the whole cycle, or over many cycles, the net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom.
This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics.
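The equal kinetic/potential split over a cycle can be verified numerically. The sketch below uses a unit-mass, unit-frequency oscillator, chosen purely for illustration:

```python
import math

# Harmonic oscillator x(t) = cos(t) with unit mass and unit angular
# frequency, sampled uniformly over one full cycle.
N = 10000
ke = pe = 0.0
for i in range(N):
    t = 2 * math.pi * i / N
    ke += 0.5 * math.sin(t) ** 2  # kinetic: (1/2) v^2, with v = -sin(t)
    pe += 0.5 * math.cos(t) ** 2  # potential: (1/2) x^2
ke /= N
pe /= N

# Each averages to one quarter of the peak displacement energy scale,
# i.e., kinetic and potential shares are equal over the cycle.
assert abs(ke - 0.25) < 1e-3 and abs(pe - 0.25) < 1e-3
```

The two running averages converge to the same value, which is the one-oscillator statement of equipartition.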

Oscillators, phonons, and photons

In an ensemble (connected collection) of unsynchronized oscillators, the average energy is spread equally between kinetic and potential types.
In a solid, thermal energy (often referred to loosely as heat content) can be accurately described by an ensemble of thermal phonons that act as mechanical oscillators. In this model, thermal energy is equally kinetic and potential.
In an ideal gas, the interaction potential between particles is essentially the delta function which stores no energy: thus, all of the thermal energy is kinetic.
Because an electric oscillator (LC circuit) is analogous to a mechanical oscillator, its energy must be, on average, equally kinetic and potential. It is entirely arbitrary whether the magnetic energy is considered kinetic and whether the electric energy is considered potential, or vice versa. That is, either the inductor is analogous to the mass while the capacitor is analogous to the spring, or vice versa.
1. By extension of the previous line of thought, in free space the electromagnetic field can be considered an ensemble of oscillators, meaning that radiation energy can be considered equally potential and kinetic. This model is useful, for example, when the electromagnetic Lagrangian is of primary interest and is interpreted in terms of potential and kinetic energy.
2. On the other hand, in the key equation m²c⁴ = E² − p²c², the contribution mc² is called the rest energy, and all other contributions to the energy are called kinetic energy. For a particle that has mass, this implies that the kinetic energy is p²/(2m) at speeds much smaller than c, as can be proved by writing E = mc² √(1 + p²m⁻²c⁻²) and expanding the square root to lowest order. By this line of reasoning, the energy of a photon is entirely kinetic, because the photon is massless and has no rest energy. This expression is useful, for example, when the energy-versus-momentum relationship is of primary interest.
The two analyses are entirely consistent. The electric and magnetic degrees of freedom in item 1 are transverse to the direction of motion, while the speed in item 2 is along the direction of motion. For non-relativistic particles these two notions of potential versus kinetic energy are numerically equal, so the ambiguity is harmless, but not so for relativistic particles.
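The low-speed expansion in item 2 can be checked numerically. The mass and momentum below are arbitrary values chosen to keep pc far below mc²:

```python
import math

C = 299792458.0  # speed of light, m/s

def kinetic_energy_exact(m, p):
    """E_k = sqrt(m^2 c^4 + p^2 c^2) - m c^2 (relativistic)."""
    return math.sqrt((m * C ** 2) ** 2 + (p * C) ** 2) - m * C ** 2

def kinetic_energy_newtonian(m, p):
    """Leading term of the square-root expansion: p^2 / (2m)."""
    return p * p / (2.0 * m)

m = 1.0
p = 1e-3 * m * C  # p*c is 0.1% of m*c^2: deep non-relativistic regime
exact = kinetic_energy_exact(m, p)
approx = kinetic_energy_newtonian(m, p)
# The two agree to better than one part in a million at this momentum
assert abs(exact - approx) / approx < 1e-6
```

At momenta approaching mc the agreement breaks down, which is the sense in which the kinetic/potential ambiguity is "harmless" only for non-relativistic particles.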

Work and virtual work

Work, a form of energy, is force times distance.
W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{s}
This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article.
Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
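The line integral above can be approximated by summing F · Δs over short straight segments. A minimal sketch with an illustrative constant force and path:

```python
def work_along_path(force, path_points):
    """Approximate W = integral of F . ds by summing F . delta_s over
    straight segments. force: (fx, fy); path_points: list of (x, y)."""
    w = 0.0
    for (x0, y0), (x1, y1) in zip(path_points, path_points[1:]):
        w += force[0] * (x1 - x0) + force[1] * (y1 - y0)
    return w

# A constant 10 N force along +x over a path with net 3 m displacement
# in x: only the x-displacement contributes, so W = 30 J
path = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0), (3.0, 0.0)]
assert abs(work_along_path((10.0, 0.0), path) - 30.0) < 1e-12
```

For a constant force the sum depends only on the endpoints; for a varying force, finer segments would be needed for the sum to approximate the integral.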

Quantum mechanics

Main article: Energy operator
In quantum mechanics energy is defined in terms of the energy operator, as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of the measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of the slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by the Planck equation E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
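The Planck relation E = hν is a one-line computation. As a numerical illustration (the frequency chosen is roughly that of green light):

```python
H = 6.62607015e-34  # Planck's constant, J*s

def photon_energy(frequency_hz):
    """Planck relation: E = h * nu."""
    return H * frequency_hz

# Visible light at ~5.5e14 Hz carries a few times 1e-19 J per photon
e = photon_energy(5.5e14)
assert 1e-19 < e < 1e-18
```

The smallness of h is why the quantization of light energy is invisible at everyday scales: a single photon carries an extremely small amount of energy.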

Relativity

When calculating kinetic energy (the work required to accelerate a mass from zero speed to some finite speed) relativistically, using Lorentz transformations instead of Newtonian mechanics, Einstein discovered an unexpected by-product of these calculations: an energy term which does not vanish at zero speed. He called it rest mass energy, an energy which every mass must possess even when at rest. The amount of energy is directly proportional to the mass of the body:
E = mc²,
where
m is the mass,
c is the speed of light in vacuum,
E is the rest mass energy.
For example, consider electron-positron annihilation, in which the rest mass of the individual particles is destroyed, but the inertia equivalent of the system of the two particles (its invariant mass) remains (since all energy is associated with mass), and this inertia and invariant mass is carried off by photons, which are individually massless but as a system retain their mass. This is a reversible process: the inverse process, called pair creation, creates the rest mass of particles from the energy of two (or more) annihilating photons. In this system the matter (electrons and positrons) is destroyed and changed to non-matter energy (the photons). However, the total system mass and energy do not change during this interaction.
In general relativity, the stress-energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.[14]
It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every energy has an inertia and gravity equivalent, and because mass is a form of energy, then mass too has inertia and gravity associated with it.
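The scale of rest energy is easy to compute directly from E = mc². A brief numerical sketch:

```python
C = 299792458.0  # speed of light, m/s

def rest_energy(mass_kg):
    """Rest mass energy: E = m * c^2."""
    return mass_kg * C ** 2

# One gram of matter corresponds to roughly 9e13 J
e = rest_energy(1e-3)
assert 8.9e13 < e < 9.0e13
```

That is on the order of the energy released by a large nuclear weapon, which conveys why only a tiny fraction of rest mass needs to be converted in nuclear reactions to release enormous energy.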

Energy and life

Main article: Bioenergetics
Basic overview of energy and human life.
Any living organism relies on an external source of energy—radiation from the Sun in the case of green plants; chemical energy in some form in the case of animals—to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria
C6H12O6 + 6O2 → 6CO2 + 6H2O
C57H110O6 + 81.5O2 → 57CO2 + 55H2O
and some of the energy is used to convert ADP into ATP
ADP + HPO42− → ATP + H2O
The rest of the chemical energy in the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains when split and reacted with water, is used for other metabolism (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[18]
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
Daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical energy or radiation), and it is true that most real machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings").[19] Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,[20] i.e. reconverted into carbon dioxide and heat.
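The weight-lifting figure quoted above follows directly from Ep = mgh, and comparing it with daily food intake makes the inefficiency point concrete. A short sketch (the 7,000 kJ figure is simply the midpoint of the 6–8 MJ range given above):

```python
G = 9.81  # gravitational acceleration, m/s^2

def lift_energy_kj(mass_kg, height_m):
    """Gravitational potential energy gained by a lift: m*g*h, in kJ."""
    return mass_kg * G * height_m / 1000.0

e_lift = lift_energy_kj(150, 2)  # the 150 kg / 2 m lift from the text
daily_intake_kj = 7000           # mid-range of the 6-8 MJ daily intake
assert 2.9 < e_lift < 3.0        # about 3 kJ, matching the figure above
# The lift is a tiny fraction (well under 0.1%) of daily food energy
assert e_lift / daily_intake_kj < 0.001
```

The rest of the intake, as the text explains, ends up as heat at each stage of metabolism.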

Measurement

A schematic diagram of a calorimeter, an instrument used by physicists to measure energy
Because energy is defined as the ability to do work on objects, there is no absolute measure of energy. Only the transition of a system from one state into another can be defined and thus energy is measured in relative terms. The choice of a baseline or zero point is often arbitrary and can be made in whatever way is most convenient for a problem.
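The arbitrariness of the zero point can be illustrated with gravitational potential energy: shifting the baseline changes every individual value but leaves energy differences, the only physically meaningful quantities, unchanged. A minimal sketch:

```python
# Only energy *differences* are physical: shifting the arbitrary zero point
# of gravitational potential energy, U = m*g*(h - h0), changes each value
# but not the difference between two states.
g = 9.81   # gravitational acceleration, m/s^2
m = 2.0    # mass, kg

def potential_energy(height_m, zero_point_m=0.0):
    """Potential energy relative to an arbitrary reference height."""
    return m * g * (height_m - zero_point_m)

# The same two states, measured against two different baselines:
dU_floor = potential_energy(5.0) - potential_energy(1.0)            # zero at the floor
dU_shelf = potential_energy(5.0, 3.0) - potential_energy(1.0, 3.0)  # zero at a shelf
print(dU_floor, dU_shelf)  # identical: 78.48 J either way
```

Whatever baseline is chosen, the energy released in moving between the two states is the same, which is why the choice can be made purely for convenience.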

Methods

Methods for measuring energy often rely in turn on methods for measuring still more fundamental quantities of science, namely mass, distance, radiation, temperature, time, electric charge and electric current.
Conventionally the technique most often employed is calorimetry, a thermodynamic technique that relies on the measurement of temperature using a thermometer or of intensity of radiation using a bolometer.

Units

Main article: Units of energy
Throughout the history of science, energy has been expressed in several different units such as ergs and calories. At present, the accepted unit of measurement for energy is the SI unit of energy, the joule. In addition to the joule, other units of energy include the kilowatt hour (kWh) and the British thermal unit (Btu). These are both larger units of energy. One kWh is equivalent to exactly 3.6 million joules, and one Btu is equivalent to about 1055 joules.[21]

Energy density

Main article: Energy density
Energy density is a term used for the amount of useful energy stored in a given system or region of space per unit volume.
For fuels, the energy per unit volume is sometimes a useful parameter. Comparing, for example, the effectiveness of hydrogen fuel with gasoline: hydrogen has a higher specific energy (energy per unit mass) than gasoline, but, even in liquid form, a much lower energy density (energy per unit volume).
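A small illustrative comparison (the numbers below are approximate round textbook figures, not taken from this article):

```python
# Approximate specific energy (MJ/kg) and energy density (MJ/L)
# for two fuels; values are rough published figures for illustration.
fuels = {
    #                  MJ/kg   MJ/L
    "gasoline":        (46.0, 34.0),
    "liquid hydrogen": (142.0, 10.0),
}
for name, (per_kg, per_litre) in fuels.items():
    print(f"{name:>16}: {per_kg:6.1f} MJ/kg, {per_litre:5.1f} MJ/L")
# Hydrogen wins per kilogram but loses per litre, as the text describes.
```

This is why hydrogen is attractive where mass matters (rockets) but awkward where tank volume matters (cars).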

Forms of energy

Main article: Forms of energy
Heat, a form of energy, is partly potential energy and partly kinetic energy.
In the context of physical sciences, several forms of energy have been defined. These include kinetic, potential, mechanical, thermal, chemical, electric, magnetic, radiant, nuclear and elastic energy.

These forms of energy may be divided into two main groups: kinetic energy and potential energy. Other familiar types of energy are a varying mix of both potential and kinetic energy.
Energy may be transformed between these forms, some with 100% conversion efficiency and others with less. Devices that transform between these forms are called transducers.
The above list of the known possible forms of energy is not necessarily complete. Whenever physical scientists discover that a certain phenomenon appears to violate the law of energy conservation, new forms may be added, as is the case with dark energy, a hypothetical form of energy that permeates all of space and tends to increase the rate of expansion of the universe.
Classical mechanics distinguishes between potential energy, which is a function of the position of an object, and kinetic energy, which is a function of its movement. Both position and movement are relative to a frame of reference, which must be specified: this is often (and originally) an arbitrary fixed point on the surface of the Earth, the terrestrial frame of reference. Attempts have been made to categorize all forms of energy as either kinetic or potential: this is not incorrect, but neither is it clear that it is a real simplification, a point made by Feynman.
These notions of potential and kinetic energy depend on a notion of length scale. For example, one can speak of macroscopic potential and kinetic energy, which do not include thermal potential and kinetic energy. Also what is called chemical potential energy (below) is a macroscopic notion, and closer examination shows that it is really the sum of the potential and kinetic energy on the atomic and subatomic scale. Similar remarks apply to nuclear "potential" energy and most other forms of energy. This dependence on length scale is non-problematic if the various length scales are decoupled, as is often the case ... but confusion can arise when different length scales are coupled, for instance when friction converts macroscopic work into microscopic thermal energy.

Transformations of energy

Main article: Energy conversion
One form of energy can often be readily transformed into another with the help of a device: a battery converts chemical energy to electric energy; a dam converts gravitational potential energy to kinetic energy of moving water (and of the blades of a turbine) and ultimately, through an electric generator, to electric energy. Similarly, in the case of a chemical explosion, chemical potential energy is transformed into kinetic and thermal energy in a very short time. Yet another example is a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction, the conversion of energy between these forms is perfect, and the pendulum will continue swinging forever.
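The frictionless pendulum can be checked numerically: the sum of kinetic and potential energy stays constant at every angle of the swing. A minimal sketch (rod length, mass and release angle are arbitrary illustrative choices):

```python
import math

# Frictionless pendulum: total mechanical energy is conserved.
# Height above the lowest point at angle theta: h = L*(1 - cos(theta));
# speed from energy conservation: v = sqrt(2*g*(h0 - h)).
g, L, m = 9.81, 1.0, 0.5            # gravity (m/s^2), rod length (m), bob mass (kg)
theta0 = math.radians(30)           # release angle
h0 = L * (1 - math.cos(theta0))     # initial height, so total energy = m*g*h0

for deg in (30, 20, 10, 0):
    theta = math.radians(deg)
    h = L * (1 - math.cos(theta))
    v = math.sqrt(2 * g * (h0 - h))
    E = m * g * h + 0.5 * m * v**2  # potential + kinetic
    print(f"theta={deg:2d} deg  v={v:.3f} m/s  E={E:.4f} J")
# E is identical at every angle; v is zero at 30 deg and maximal at 0 deg.
```

The constant total at every printed angle is exactly the statement that the loss in potential energy equals the gain in kinetic energy.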
Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass-energy equivalence. The formula E = mc², derived by Albert Einstein (1905) quantifies the relationship between rest-mass and rest-energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J. J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass-energy equivalence#History for further information).
Matter may be destroyed and converted to energy (and vice versa), but mass cannot ever be destroyed; rather, mass remains constant for both the matter and the energy during any process in which they are converted into each other. However, since c² is extremely large relative to ordinary human scales, the conversion of an ordinary amount of matter (for example, 1 kg) to other forms of energy (such as heat, light, and other radiation) can liberate tremendous amounts of energy (~9×10¹⁶ joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (i.e., kinetic energy into particles with rest mass) are found in high-energy nuclear physics.
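The 1 kg figure follows directly from E = mc², using the standard convention of 4.184×10¹⁵ J per megaton of TNT:

```python
# Mass-energy equivalence for the 1 kg example in the text: E = m*c^2.
c = 2.998e8                  # speed of light, m/s
m = 1.0                      # mass, kg
E = m * c**2                 # ~9e16 J
MEGATON_TNT = 4.184e15       # joules per megaton of TNT (standard convention)
print(f"E = {E:.3e} J, about {E / MEGATON_TNT:.1f} megatons of TNT")
```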
Transformation of energy into useful work is a core topic of thermodynamics. In nature, transformations of energy can be fundamentally classed into two kinds: those that are thermodynamically reversible, and those that are thermodynamically irreversible. A reversible process is one in which no energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states) without degrading even more energy. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered and converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like disorder in quantum states elsewhere in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy available to produce work through a heat engine, or to be transformed into other usable forms of energy (through the use of generators attached to heat engines), grows less and less.
Source: Wikipedia