Chemical Equilibrium—Part 2: Gibbs Energy - Biology

In a previous section, we began a description of chemical equilibrium in the context of forward and reverse rates. We presented three key ideas:

  1. At equilibrium, the concentrations of reactants and products in a reversible reaction are not changing in time.
  2. A reversible reaction at equilibrium is not static—reactants and products continue to interconvert at equilibrium, but the rates of the forward and reverse reactions are the same.
  3. We were NOT going to fall into a common student trap of assuming that chemical equilibrium means that the concentrations of reactants and products are equal at equilibrium.

Here we extend our discussion and put the concept of equilibrium into the context of Gibbs energy, also reinforcing the Energy Story exercise of considering the "Before/Start" and "After/End" states of a reaction (including the inherent passage of time).

Figure 1. Reaction coordinate diagram for a generic exergonic reversible reaction. Equations relating Gibbs energy and the equilibrium constant: R = 8.314 J mol⁻¹ K⁻¹ or 0.008314 kJ mol⁻¹ K⁻¹; T is temperature in kelvin. Attribution: Marc T. Facciotti (original work)

The figure above shows a commonly cited relationship between ∆G° and Keq:

[ ∆G^o = -RT\ln K_{eq} ]

Here, ∆G° indicates the change in Gibbs energy under standard conditions (e.g., 1 atmosphere of pressure, 298 K). This equation describes the change in Gibbs energy for reactants converting to products in a reaction that is at equilibrium. The value of ∆G° can therefore be thought of as being intrinsic to the reactants and products themselves. ∆G° is like a potential energy difference between reactants and products. With this concept as a basis, one can also consider a reaction where the "starting" state is somewhere out of equilibrium. In this case, there may be an additional “potential” associated with the out-of-equilibrium starting state. This “added” component contributes to the ∆G of a reaction and can be effectively added to the expression for Gibbs energy as follows:

[ ∆G = ∆G^o + RT\ln Q, ]

where (Q) is called the reaction quotient. From the standpoint of General Biology, we will use a simple (a bit incomplete but functional) definition for

[ Q = \dfrac{[Products]_{st}}{[Reactants]_{st}} ]

at a defined non-equilibrium condition, st. One can extend this idea and calculate the Gibbs energy difference between two non-equilibrium states, provided both states are properly defined. This last point is often relevant to reactions found in biological systems, as these reactions are often found in multi-step pathways that effectively keep individual reactions in an out-of-equilibrium state.

This takes us to a point of confusion for some. In many biology books, the discussion of equilibrium includes not only the discussion of forward and reverse reaction rates, but also a statement that ∆G = 0 at equilibrium. This can be confusing because these very discussions often follow discussions of non-zero ∆G° values in the context of equilibrium (∆G° = −RT ln K_{eq}). The nuance to point out is that ∆G° is referring to the Gibbs energy potential inherent in the chemical transformation between reactants and products alone. This is different from considering the progress of the reaction from an out-of-equilibrium state that is described by

[∆G = ∆G^o + RT ln Q.]

This expression can be expanded as follows:

[ ∆G = -RT\ln K_{eq} + RT\ln Q ]

to bring the nuance into clearer focus. In this case, note that as Q approaches K_{eq}, the reaction ∆G gets closer to zero, ultimately reaching zero when Q = K_{eq}. This means that the Gibbs energy of the reaction (∆G) reaches zero at equilibrium, not that the potential difference between substrates and products (∆G°) reaches zero.
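As a quick numeric illustration of this relationship, the following sketch evaluates ∆G = −RT ln K_eq + RT ln Q for several values of Q; the K_eq value is hypothetical, chosen only to show ∆G approaching zero as Q approaches K_eq.

```python
import math

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1 (value from the figure caption)
T = 298.0     # temperature, K

def delta_g(K_eq, Q):
    """Gibbs energy change (kJ/mol) at reaction quotient Q:
    dG = -RT ln(K_eq) + RT ln(Q)."""
    return -R * T * math.log(K_eq) + R * T * math.log(Q)

K_eq = 100.0  # hypothetical equilibrium constant
for Q in (0.01, 1.0, 10.0, 100.0):
    print(f"Q = {Q:>6}: dG = {delta_g(K_eq, Q):+.2f} kJ/mol")
# As Q approaches K_eq, dG approaches zero; at Q = K_eq, dG = 0 exactly.
```

Note that for Q < K_eq the result is negative (the forward reaction is favored), and for Q > K_eq it would be positive.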

German-British medical doctor and biochemist Hans Krebs' 1957 book Energy Transformations in Living Matter (written with Hans Kornberg) [1] was the first major publication on the thermodynamics of biochemical reactions. In addition, the appendix contained the first published thermodynamic tables, written by Kenneth Burton, to list equilibrium constants and Gibbs free energies of formation for chemical species, making it possible to calculate the energetics of biochemical reactions that had not yet been carried out.

Non-equilibrium thermodynamics has been applied to explain how biological organisms can develop from disorder. Ilya Prigogine developed methods for the thermodynamic treatment of such systems. He called these systems dissipative systems, because they are formed and maintained by the dissipative processes that exchange energy between the system and its environment, and because they disappear if that exchange ceases. It may be said that they live in symbiosis with their environment. Energy transformations in biology are dependent primarily on photosynthesis. The total energy captured by photosynthesis in green plants from solar radiation is about 2 × 10^23 joules of energy per year. [2] Annual energy captured by photosynthesis in green plants is about 4% of the total sunlight energy that reaches Earth. The energy transformations in biological communities surrounding hydrothermal vents are an exception; they oxidize sulfur, obtaining their energy via chemosynthesis rather than photosynthesis.

The field of biological thermodynamics is focused on principles of chemical thermodynamics in biology and biochemistry. Principles covered include the first law of thermodynamics, the second law of thermodynamics, Gibbs free energy, statistical thermodynamics, reaction kinetics, and hypotheses on the origin of life. Presently, biological thermodynamics concerns itself with the study of internal biochemical dynamics such as ATP hydrolysis, protein stability, DNA binding, membrane diffusion, enzyme kinetics, [3] and other essential energy-controlled pathways. In terms of thermodynamics, the amount of energy capable of doing work during a chemical reaction is measured quantitatively by the change in the Gibbs free energy. The physical biologist Alfred Lotka attempted to unify the change in the Gibbs free energy with evolutionary theory.

Energy transformation in biological systems

The sun is the primary source of energy for living organisms. Some living organisms, like plants, need sunlight directly, while other organisms, like humans, can acquire energy from the sun indirectly. [4] There is, however, evidence that some bacteria can thrive in harsh environments like Antarctica, as evidenced by the blue-green algae beneath thick layers of ice in its lakes. No matter the type of living species, all living organisms must capture, transduce, store, and use energy to live.

The relationship between the energy E of the incoming sunlight and its wavelength λ or frequency ν is given by

[ E = hν = \dfrac{hc}{λ}, ]

where h is the Planck constant (6.63 × 10^−34 J s) and c is the speed of light (2.998 × 10^8 m/s). Plants trap this energy from the sunlight and undergo photosynthesis, effectively converting solar energy into chemical energy. To transfer the energy once again, animals will feed on plants and use the energy of digested plant materials to create biological macromolecules.
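To make the relationship concrete, here is a short sketch that evaluates E = hc/λ using the constants quoted above; the 680 nm example wavelength (red light, absorbed by photosystem II) is chosen purely for illustration.

```python
# Photon energy E = h*c/lambda; h and c values as quoted in the text.
h = 6.63e-34   # Planck constant, J s
c = 2.998e8    # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy (J) of a single photon of the given wavelength (m)."""
    return h * c / wavelength_m

# Example: red light at 680 nm.
E = photon_energy(680e-9)
print(f"{E:.3e} J per photon")
```

Multiplying by the Avogadro constant would give the energy per mole of photons, which is the more common unit in photochemistry.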

Thermodynamic Theory of Evolution

Biological evolution may be explained through a thermodynamic theory. The four laws of thermodynamics are used to frame the biological theory behind evolution. The first law of thermodynamics states that energy cannot be created or destroyed: no life can create energy, but must obtain it from its environment. The second law of thermodynamics states that energy can be transformed, and this occurs every day in lifeforms. As organisms take energy from their environment, they can transform it into useful energy. This is the foundation of trophic dynamics.

The general example is that an open system can be defined as any ecosystem that moves toward maximizing the dispersal of energy. All things strive towards maximum entropy production, which, in terms of evolution, occurs as changes in DNA that increase biodiversity. Thus, diversity can be linked to the second law of thermodynamics. Diversity can also be argued to be a diffusion process that diffuses toward a dynamic equilibrium to maximize entropy. Therefore, thermodynamics can explain the direction and rate of evolution along with the direction and rate of succession. [5]

First Law of Thermodynamics

The First Law of Thermodynamics is a statement of the conservation of energy: though it can be changed from one form to another, energy can be neither created nor destroyed. [6] From the first law, a principle called Hess's Law arises. Hess’s Law states that the heat absorbed or evolved in a given reaction must always be constant and independent of the manner in which the reaction takes place. Although some intermediate reactions may be endothermic and others may be exothermic, the total heat exchange is equal to the heat exchange had the process occurred directly. This principle is the basis for the calorimeter, a device used to determine the amount of heat in a chemical reaction. Since all incoming energy enters the body as food and is ultimately oxidized, the total heat production may be estimated by measuring the heat produced by the oxidation of food in a calorimeter. This heat is expressed in kilocalories, which are the common unit of food energy found on nutrition labels. [7]

Second Law of Thermodynamics

The Second Law of Thermodynamics is concerned primarily with whether or not a given process is possible. The Second Law states that no natural process can occur unless it is accompanied by an increase in the entropy of the universe. [8] Stated differently, an isolated system will always tend to disorder. Living organisms are often mistakenly believed to defy the Second Law because they are able to increase their level of organization. To correct this misinterpretation, one must refer simply to the definition of systems and boundaries. A living organism is an open system, able to exchange both matter and energy with its environment. For example, a human being takes in food, breaks it down into its components, and then uses those to build up cells, tissues, ligaments, etc. This process increases order in the body, and thus decreases entropy. However, humans also 1) conduct heat to clothing and other objects they are in contact with, 2) generate convection due to differences in body temperature and the environment, 3) radiate heat into space, 4) consume energy-containing substances (i.e., food), and 5) eliminate waste (e.g., carbon dioxide, water, and other components of breath, urine, feces, sweat, etc.). When taking all these processes into account, the total entropy of the greater system (i.e., the human and her/his environment) increases. When the human ceases to live, none of these processes (1-5) take place, and any interruption in the processes (esp. 4 or 5) will quickly lead to morbidity and/or mortality.

Gibbs Free Energy

In biological systems, energy and entropy generally change together. Therefore, it is necessary to define a state function that accounts for these changes simultaneously. This state function is the Gibbs Free Energy, G:

[ G = H - TS ]

where

  • H is the enthalpy (SI unit: joule)
  • T is the temperature (SI unit: kelvin)
  • S is the entropy (SI unit: joule per kelvin)

The change in Gibbs Free Energy can be used to determine whether a given chemical reaction can occur spontaneously. If ∆G is negative, the reaction can occur spontaneously. Conversely, if ∆G is positive, the reaction is nonspontaneous. [9] Chemical reactions can be “coupled” together if they share intermediates. In this case, the overall Gibbs Free Energy change is simply the sum of the ∆G values for each reaction. Therefore, an unfavorable reaction (positive ∆G1) can be driven by a second, highly favorable reaction (negative ∆G2, where the magnitude of ∆G2 > magnitude of ∆G1). For example, the reaction of glucose with fructose to form sucrose has a ∆G value of +5.5 kcal/mole. Therefore, this reaction will not occur spontaneously. The breakdown of ATP to form ADP and inorganic phosphate has a ∆G value of −7.3 kcal/mole. These two reactions can be coupled together, so that glucose binds with ATP to form glucose-1-phosphate and ADP. The glucose-1-phosphate is then able to bond with fructose, yielding sucrose and inorganic phosphate. The ∆G value of the coupled reaction is −1.8 kcal/mole, indicating that the reaction will occur spontaneously. This principle of coupling reactions to alter the change in Gibbs Free Energy is the basic principle behind all enzymatic action in biological organisms. [10]
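The coupling arithmetic above can be sketched in a few lines; the ∆G values are the ones quoted in the text.

```python
# Coupling reactions: the overall dG is the sum of the individual dG values.
# Values (kcal/mol) are those quoted in the text above.
dG_sucrose_synthesis = +5.5   # glucose + fructose -> sucrose (unfavorable)
dG_atp_hydrolysis    = -7.3   # ATP -> ADP + Pi (favorable)

dG_coupled = dG_sucrose_synthesis + dG_atp_hydrolysis
print(f"coupled dG = {dG_coupled:+.1f} kcal/mol")
print("spontaneous" if dG_coupled < 0 else "nonspontaneous")
# coupled dG = -1.8 kcal/mol -> spontaneous
```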

A quantitative relationship between cell potential and concentration of the ions

Standard thermodynamics says that the actual Gibbs free energy ΔG is related to the free energy change under standard state, ΔG°, by the relationship:

[ ∆G = ∆G^o + RT\ln Q_r, ]

where Qr is the reaction quotient. The cell potential E associated with the electrochemical reaction is defined as the decrease in Gibbs free energy per coulomb of charge transferred, which leads to the relationship ΔG = −zFE. The constant F (the Faraday constant) is a unit conversion factor F = N_A q, where N_A is the Avogadro constant and q is the fundamental electron charge. This immediately leads to the Nernst equation, which for an electrochemical half-cell is

[ E_{red} = E^o_{red} - \dfrac{RT}{zF}\ln Q_r = E^o_{red} - \dfrac{RT}{zF}\ln \dfrac{a_{Red}}{a_{Ox}}. ]

For a complete electrochemical reaction (full cell), the equation can be written as

[ E_{cell} = E^o_{cell} - \dfrac{RT}{zF}\ln Q_r, ]

where:

  • Ered is the half-cell reduction potential at the temperature of interest
  • E°red is the standard half-cell reduction potential
  • Ecell is the cell potential (electromotive force) at the temperature of interest
  • E°cell is the standard cell potential
  • R is the universal gas constant: R = 8.314462618 J K⁻¹ mol⁻¹
  • T is the temperature in kelvins
  • z is the number of electrons transferred in the cell reaction or half-reaction
  • F is the Faraday constant, the number of coulombs per mole of electrons: F = 96485.33212 C mol⁻¹
  • Qr is the reaction quotient of the cell reaction
  • a is the chemical activity for the relevant species, where aRed is the activity of the reduced form and aOx is the activity of the oxidized form

Similarly to equilibrium constants, activities are always measured with respect to the standard state (1 mol/L for solutes, 1 atm for gases). The activity of species X, aX, can be related to the physical concentration cX via aX = γXcX, where γX is the activity coefficient of species X. Because activity coefficients tend to unity at low concentrations, activities in the Nernst equation are frequently replaced by simple concentrations. Alternatively, defining the formal potential as:

[ E^{o'}_{red} = E^o_{red} - \dfrac{RT}{zF}\ln \dfrac{γ_{Red}}{γ_{Ox}}, ]

the half-cell Nernst equation may be written in terms of concentrations as:

[ E_{red} = E^{o'}_{red} - \dfrac{RT}{zF}\ln \dfrac{c_{Red}}{c_{Ox}}, ]

and likewise for the full cell expression. At room temperature the prefactor is often written using λV_T, where λ = ln(10) and V_T = RT/F is the thermal voltage, so that λV_T ≈ 0.05916 V. The Nernst equation is used in physiology for finding the electric potential of a cell membrane with respect to one type of ion. It can be linked to the acid dissociation constant.

Nernst potential

The Nernst equation has a physiological application when used to calculate the potential of an ion of charge z across a membrane. This potential is determined using the concentration of the ion both inside and outside the cell:

[ E = \dfrac{RT}{zF}\ln \dfrac{[ion]_{out}}{[ion]_{in}} ]

When the membrane is in thermodynamic equilibrium (i.e., no net flux of ions), and if the cell is permeable to only one ion, then the membrane potential must be equal to the Nernst potential for that ion.
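A minimal sketch of the Nernst potential calculation; the K⁺ concentrations below are typical illustrative mammalian values (not taken from the text), and body temperature is assumed.

```python
import math

R = 8.314      # gas constant, J K^-1 mol^-1
F = 96485.332  # Faraday constant, C mol^-1

def nernst_potential(z, c_out, c_in, T=310.0):
    """Nernst equilibrium potential (volts) for an ion of charge z:
    E = (RT/zF) ln([ion]_out / [ion]_in). T defaults to body temperature."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

# Illustrative mammalian K+ concentrations (mM); only their ratio matters.
E_K = nernst_potential(z=1, c_out=5.0, c_in=140.0)
print(f"E_K = {E_K * 1000:.1f} mV")  # about -89 mV
```

Because the outside concentration is lower than the inside one, the logarithm is negative and the potassium equilibrium potential comes out strongly negative, as expected for K⁺.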

Goldman equation

When the membrane is permeable to more than one ion, as is inevitably the case, the resting potential can be determined from the Goldman equation, which is a solution of the G-H-K influx equation under the constraint that the total current density driven by electrochemical force is zero. For the common case of Na⁺, K⁺, and Cl⁻:

[ E_m = \dfrac{RT}{F}\ln \dfrac{P_K[K^+]_{out} + P_{Na}[Na^+]_{out} + P_{Cl}[Cl^-]_{in}}{P_K[K^+]_{in} + P_{Na}[Na^+]_{in} + P_{Cl}[Cl^-]_{out}} ]

Em is the membrane potential (in volts, equivalent to joules per coulomb), Pion is the permeability for that ion (in meters per second), [ion]out is the extracellular concentration of that ion (in moles per cubic meter, to match the other SI units, though the units strictly don't matter, as the ion concentration terms become a dimensionless ratio), [ion]in is the intracellular concentration of that ion (in moles per cubic meter), R is the ideal gas constant (joules per kelvin per mole), T is the temperature in kelvins, F is Faraday's constant (coulombs per mole).

The potential across the cell membrane that exactly opposes net diffusion of a particular ion through the membrane is called the Nernst potential for that ion. As seen above, the magnitude of the Nernst potential is determined by the ratio of the concentrations of that specific ion on the two sides of the membrane. The greater this ratio, the greater the tendency for the ion to diffuse in one direction, and therefore the greater the Nernst potential required to prevent the diffusion. A similar expression exists that includes r (the absolute value of the transport ratio). This takes transporters with unequal exchanges into account. See: sodium-potassium pump, where the transport ratio would be 2/3, so r equals 1.5 in the formula below. The reason we insert a factor r = 1.5 here is that the current density driven by electrochemical force J_{e.c.}(Na⁺) + J_{e.c.}(K⁺) is no longer zero, but rather J_{e.c.}(Na⁺) + 1.5 J_{e.c.}(K⁺) = 0 (for both ions, the flux driven by electrochemical force is compensated by that of the pump, i.e., J_{e.c.} = −J_{pump}), altering the constraints for applying the GHK equation. The other variables are the same as above. The following example includes two ions: potassium (K⁺) and sodium (Na⁺). Chloride is assumed to be in equilibrium:

[ E_m = \dfrac{RT}{F}\ln \dfrac{r P_K[K^+]_{out} + P_{Na}[Na^+]_{out}}{r P_K[K^+]_{in} + P_{Na}[Na^+]_{in}} ]

When chloride (Cl⁻) is taken into account,

[ E_m = \dfrac{RT}{F}\ln \dfrac{r P_K[K^+]_{out} + P_{Na}[Na^+]_{out} + P_{Cl}[Cl^-]_{in}}{r P_K[K^+]_{in} + P_{Na}[Na^+]_{in} + P_{Cl}[Cl^-]_{out}} ]
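The Goldman equation (without the transport-ratio correction) can be sketched as follows; the relative permeabilities and concentrations are illustrative squid-axon-like values, not taken from the text.

```python
import math

R, F = 8.314, 96485.332  # J K^-1 mol^-1 and C mol^-1

def goldman(P_K, P_Na, P_Cl, K_out, K_in, Na_out, Na_in, Cl_out, Cl_in, T=310.0):
    """Goldman-Hodgkin-Katz voltage equation (volts). Note that the Cl-
    terms are swapped (in on top, out on bottom) because z(Cl-) = -1."""
    num = P_K * K_out + P_Na * Na_out + P_Cl * Cl_in
    den = P_K * K_in + P_Na * Na_in + P_Cl * Cl_out
    return (R * T) / F * math.log(num / den)

# Illustrative values: relative permeabilities and concentrations in mM.
Em = goldman(P_K=1.0, P_Na=0.04, P_Cl=0.45,
             K_out=20.0, K_in=400.0, Na_out=440.0, Na_in=50.0,
             Cl_out=560.0, Cl_in=52.0)
print(f"Em = {Em * 1000:.1f} mV")
```

With these numbers the result lands near a typical resting potential of several tens of millivolts negative, dominated by the high K⁺ permeability.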

Using Boltzmann factor

For simplicity, we will consider a solution of redox-active molecules that undergo a one-electron reversible reaction

[ Ox + e^- \rightleftharpoons Red ]

and that have a standard potential of zero, and in which the activities are well represented by the concentrations (i.e. unit activity coefficient). The chemical potential μc of this solution is the difference between the energy barriers for taking electrons from and for giving electrons to the working electrode that is setting the solution's electrochemical potential. The ratio of oxidized to reduced molecules, [Ox] / [Red], is equivalent to the probability of being oxidized (giving electrons) over the probability of being reduced (taking electrons), which we can write in terms of the Boltzmann factor for these processes:

[ \dfrac{[Ox]}{[Red]} = \dfrac{\exp(-[\text{barrier for losing an electron}]/kT)}{\exp(-[\text{barrier for gaining an electron}]/kT)} = \exp(μ_c/kT). ]

Taking the natural logarithm of both sides gives

[ μ_c = kT\ln \dfrac{[Ox]}{[Red]}. ]

Using thermodynamics (chemical potential)

Quantities here are given per molecule, not per mole, and so the Boltzmann constant k and the electron charge e are used instead of the gas constant R and Faraday's constant F. To convert to the molar quantities given in most chemistry textbooks, it is simply necessary to multiply by the Avogadro constant: R = kN_A and F = eN_A. The entropy of a molecule is defined as

[ S = k\ln Ω, ]

where Ω is the number of states available to the molecule. Because Ω varies linearly with the volume of the system, which is inversely proportional to the concentration c, the entropy can also be written as S = −k ln c + constant.

The change in entropy from some state 1 to another state 2 is therefore

[ ∆S = S_2 - S_1 = -k\ln \dfrac{c_2}{c_1}, ]

so that the entropy of state 2 is

[ S_2 = S_1 - k\ln \dfrac{c_2}{c_1}. ]

If state 1 is at standard conditions, in which c1 is unity (e.g., 1 atm or 1 M), it will merely cancel the units of c2. We can, therefore, write the entropy of an arbitrary molecule A as

[ S(A) = S^0(A) - k\ln [A], ]

where S^0 is the entropy at standard conditions and [A] denotes the concentration of A. The change in entropy for a reaction

[ aA + bB \rightarrow yY + zZ ]

is then given by

[ ∆S_{rxn} = ∆S^0_{rxn} - k\ln \dfrac{[Y]^y[Z]^z}{[A]^a[B]^b}. ]

We define the ratio in the last term as the reaction quotient:

[ Q_r = \dfrac{[Y]^y[Z]^z}{[A]^a[B]^b}. ]

Since ∆G = ∆H − T∆S, the entropy expression above gives ∆G = ∆G^0 + kT ln Q_r. In an electrochemical cell, the cell potential E is the chemical potential available from redox reactions (E = μ_c/e), and it is related to the Gibbs energy change by ∆G = −neE, where n is the number of electrons transferred. Combining these gives

[ E = E^0 - \dfrac{kT}{ne}\ln Q_r. ]

This is the more general form of the Nernst equation. For the redox reaction Ox + n e^- \rightarrow Red, Q_r = [Red]/[Ox], and

[ E = E^0 - \dfrac{kT}{ne}\ln \dfrac{[Red]}{[Ox]}. ]

The cell potential at standard conditions E^0 is often replaced by the formal potential E^{0'}, which includes some small corrections to the logarithm and is the potential that is actually measured in an electrochemical cell.

At equilibrium, the electrochemical potential E = 0, and therefore the reaction quotient attains the special value known as the equilibrium constant: Q = K_{eq}. Therefore,

[ 0 = E^0 - \dfrac{RT}{zF}\ln K_{eq}, \qquad \ln K_{eq} = \dfrac{zFE^0}{RT}. ]

We have thus related the standard electrode potential and the equilibrium constant of a redox reaction.
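This relationship can be used numerically; the sketch below computes K_eq from a standard cell potential, using the Daniell (Zn/Cu) cell as an illustrative example (E⁰ ≈ +1.10 V, z = 2; values assumed for illustration).

```python
import math

R, F, T = 8.314, 96485.332, 298.15  # J K^-1 mol^-1, C mol^-1, K

def equilibrium_constant(E0_cell, z):
    """K_eq from the standard cell potential via ln(K_eq) = z*F*E0/(R*T)."""
    return math.exp(z * F * E0_cell / (R * T))

# Illustrative example: Daniell cell, E0 ~ +1.10 V, z = 2.
K = equilibrium_constant(1.10, 2)
print(f"K_eq = {K:.2e}")  # an enormous K: the reaction goes to completion
```

A positive E⁰ of only about a volt corresponds to an equilibrium constant of tens of orders of magnitude, which is why such cell reactions are effectively irreversible.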

In dilute solutions, the Nernst equation can be expressed directly in terms of concentrations (since activity coefficients are close to unity). But at higher concentrations, the true activities of the ions must be used. This complicates the use of the Nernst equation, since estimation of non-ideal activities of ions generally requires experimental measurements. The Nernst equation also only applies when there is no net current flow through the electrode. The activity of ions at the electrode surface changes when there is current flow, and there are additional overpotential and resistive loss terms which contribute to the measured potential.

At very low concentrations of the potential-determining ions, the potential predicted by the Nernst equation approaches ±∞. This is physically meaningless because, under such conditions, the exchange current density becomes very low, and there may be no thermodynamic equilibrium necessary for the Nernst equation to hold. The electrode is called unpoised in such a case. Other effects tend to take control of the electrochemical behavior of the system, like the involvement of the solvated electron in electricity transfer and electrode equilibria, as analyzed by Alexander Frumkin and B. Damaskin, [4] Sergio Trasatti, etc.

Time dependence of the potential

The expression of time dependence has been established by Karaoglanoff. [5] [6] [7] [8]

The equation has been involved in the scientific controversy over cold fusion. The discoverers of cold fusion, Fleischmann and Pons, calculated that a palladium cathode immersed in a heavy water electrolysis cell could achieve up to 10^27 atmospheres of pressure on the surface of the cathode, enough pressure to cause spontaneous nuclear fusion. In reality, only 10,000–20,000 atmospheres were achieved. John R. Huizenga claimed their original calculation was affected by a misinterpretation of the Nernst equation. [9] He cited a paper about Pd–Zr alloys. [10] The equation permits the extent of reaction between two redox systems to be calculated and can be used, for example, to decide whether a particular reaction will go to completion or not. At equilibrium, the emfs of the two half cells are equal. This enables Kc to be calculated, and hence the extent of the reaction.

6.2 Potential, Kinetic, Free, and Activation Energy

By the end of this section, you will be able to do the following:

  • Define “energy”
  • Explain the difference between kinetic and potential energy
  • Discuss the concepts of free energy and activation energy
  • Describe endergonic and exergonic reactions

We define energy as the ability to do work. As you’ve learned, energy exists in different forms. For example, electrical energy, light energy, and heat energy are all different energy types. While these are all familiar energy types that one can see or feel, there is another energy type that is much less tangible. Scientists associate this energy with something as simple as an object above the ground. In order to appreciate the way energy flows into and out of biological systems, it is important to understand more about the different energy types that exist in the physical world.

Energy Types

When an object is in motion, there is energy associated with that object. For example, an airplane in flight carries considerable energy. This is because moving objects are capable of enacting a change, or doing work. Think of a wrecking ball. Even a slow-moving wrecking ball can do considerable damage to other objects. However, a wrecking ball that is not in motion is incapable of performing work. The energy associated with objects in motion is kinetic energy. A speeding bullet, a walking person, rapid molecule movement in the air (which produces heat), and electromagnetic radiation like light all have kinetic energy.

What if we lift that same motionless wrecking ball two stories above a car with a crane? If the suspended wrecking ball is unmoving, can we associate energy with it? The answer is yes. The suspended wrecking ball has associated energy that is fundamentally different from the kinetic energy of objects in motion. This energy form results from the potential for the wrecking ball to do work. If we release the ball it would do work. Because this energy type refers to the potential to do work, we call it potential energy . Objects transfer their energy between kinetic and potential in the following way: As the wrecking ball hangs motionless, it has 0 kinetic and 100 percent potential energy. Once it releases, its kinetic energy begins to increase because it builds speed due to gravity. Simultaneously, as it nears the ground, it loses potential energy. Somewhere mid-fall it has 50 percent kinetic and 50 percent potential energy. Just before it hits the ground, the ball has nearly lost its potential energy and has near-maximal kinetic energy. Other examples of potential energy include water's energy held behind a dam (Figure 6.6), or a person about to skydive from an airplane.
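The kinetic/potential trade-off described above can be sketched with the standard formulas PE = mgh and KE = (energy lost from PE); the mass and drop height below are made-up illustrative values, and air resistance is ignored.

```python
# Energy exchange during free fall: potential energy m*g*h converts to
# kinetic energy while total mechanical energy stays constant.
g = 9.8  # gravitational acceleration, m/s^2

def energies(mass, height0, height):
    """Potential and kinetic energy (J) of an object dropped from height0,
    evaluated at the given height (no air resistance assumed)."""
    pe = mass * g * height
    ke = mass * g * (height0 - height)  # PE lost so far has become KE
    return pe, ke

m, h0 = 1000.0, 10.0  # hypothetical 1000 kg wrecking ball dropped from 10 m
for h in (10.0, 5.0, 0.0):
    pe, ke = energies(m, h0, h)
    print(f"h = {h:4.1f} m: PE = {pe:8.0f} J, KE = {ke:8.0f} J")
# At the midpoint of the fall the energy is 50% potential and 50% kinetic.
```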

We associate potential energy not only with the matter's location (such as a child sitting on a tree branch), but also with the matter's structure. A spring on the ground has potential energy if it is compressed; so does a tautly pulled rubber band. The very existence of living cells relies heavily on structural potential energy. On a chemical level, the bonds that hold the molecules' atoms together have potential energy. Remember that anabolic cellular pathways require energy to synthesize complex molecules from simpler ones, and catabolic pathways release energy when complex molecules break down. That certain chemical bonds' breakdown can release energy implies that those bonds have potential energy. In fact, there is potential energy stored within the bonds of all the food molecules we eat, which we eventually harness for use. This is because these bonds can release energy when broken. Scientists call the type of potential energy that exists within chemical bonds and is released when those bonds break chemical energy (Figure 6.7). Chemical energy is responsible for providing living cells with energy from food. Breaking the molecular bonds within fuel molecules brings about the energy's release.

Link to Learning

Visit this site and select “A simple pendulum” on the menu (under “Harmonic Motion”) to see the shifting kinetic (K) and potential energy (U) of a pendulum in motion.

Free Energy

After learning that chemical reactions release energy when energy-storing bonds break, an important next question is: how do we quantify and express the energy associated with these chemical reactions? How can we compare the energy that releases from one reaction to that of another reaction? We use a measurement of free energy to quantitate these energy transfers. Scientists call this free energy Gibbs free energy (abbreviated with the letter G) after Josiah Willard Gibbs, the scientist who developed the measurement. Recall that according to the second law of thermodynamics, all energy transfers involve losing some energy in an unusable form such as heat, resulting in entropy. Gibbs free energy specifically refers to the energy associated with a chemical reaction that is available after we account for entropy. In other words, Gibbs free energy is usable energy, or energy that is available to do work.

Every chemical reaction involves a change in free energy, called delta G (∆G). We can calculate the change in free energy for any system that undergoes such a change, such as a chemical reaction. To calculate ∆G, subtract the amount of energy lost to entropy (the entropy change ∆S multiplied by the absolute temperature) from the system's total energy change, the enthalpy change ∆H. The formula for calculating ∆G is as follows, where the symbol T refers to absolute temperature in Kelvin (degrees Celsius + 273):

[ ∆G = ∆H - T∆S ]
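A minimal sketch of the ∆G = ∆H − T∆S calculation, using hypothetical ∆H and ∆S values chosen only for illustration.

```python
def gibbs_free_energy_change(dH, T, dS):
    """dG = dH - T*dS. dH in kJ/mol, T in kelvin, dS in kJ/(mol K)."""
    return dH - T * dS

# Hypothetical reaction: dH = -10 kJ/mol, dS = +0.02 kJ/(mol K), at 298 K.
dG = gibbs_free_energy_change(-10.0, 298.0, 0.02)
print(f"dG = {dG:.2f} kJ/mol ->", "exergonic" if dG < 0 else "endergonic")
# dG = -15.96 kJ/mol -> exergonic
```

Note that a reaction with positive ∆H can still have negative ∆G if the T∆S term is large enough, which is why temperature can change a reaction's spontaneity.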

We express a chemical reaction's standard free energy change as an amount of energy per mole of the reaction product (either in kilojoules or kilocalories, kJ/mol or kcal/mol; 1 kJ = 0.239 kcal) under standard pH, temperature, and pressure conditions. For biological systems, we generally take standard conditions to be pH 7.0, 25 degrees Celsius, and 100 kilopascals (1 atm pressure). Note that cellular conditions vary considerably from these standard conditions, and so standard calculated ∆G values for biological reactions will be different inside the cell.

Endergonic Reactions and Exergonic Reactions

If energy releases during a chemical reaction, then the resulting value from the above equation will be a negative number. In other words, reactions that release energy have a ∆G < 0. A negative ∆G also means that the reaction's products have less free energy than the reactants, because they gave off some free energy during the reaction. Scientists call reactions that have a negative ∆G and consequently release free energy exergonic reactions . Think: exergonic means energy is exiting the system. We also refer to these reactions as spontaneous reactions, because they can occur without adding energy into the system. Understanding which chemical reactions are spontaneous and release free energy is extremely useful for biologists, because these reactions can be harnessed to perform work inside the cell. We must draw an important distinction between the term spontaneous and the idea of a chemical reaction that occurs immediately. Contrary to the everyday use of the term, a spontaneous reaction is not one that suddenly or quickly occurs. Rusting iron is an example of a spontaneous reaction that occurs slowly, little by little, over time.

If a chemical reaction requires an energy input rather than releasing energy, then the ∆G for that reaction will be a positive value. In this case, the products have more free energy than the reactants. Thus, we can think of the reactions' products as energy-storing molecules. We call these chemical reactions endergonic reactions , and they are non-spontaneous. An endergonic reaction will not take place on its own without adding free energy.

Let’s revisit the example of the synthesis and breakdown of the food molecule, glucose. Remember that building complex molecules, such as sugars, from simpler ones is an anabolic process and requires energy. Therefore, the chemical reactions involved in anabolic processes are endergonic reactions. Alternatively, the catabolic process of breaking sugar down into simpler molecules releases energy in a series of exergonic reactions. Like the rust example above, the sugar breakdown involves spontaneous reactions, but these reactions do not occur instantaneously. Figure 6.8 shows some other examples of endergonic and exergonic reactions. Later sections will provide more information about what else is required to make even spontaneous reactions happen more efficiently.

Visual Connection

Look at each of the processes, and decide if it is endergonic or exergonic. In each case, does enthalpy increase or decrease, and does entropy increase or decrease?

An important concept in studying metabolism and energy is that of chemical equilibrium. Most chemical reactions are reversible. They can proceed in both directions, releasing energy into their environment in one direction, and absorbing it from the environment in the other direction (Figure 6.9). The same is true for the chemical reactions involved in cell metabolism, such as the breaking down and building up of proteins into and from individual amino acids, respectively. Reactants within a closed system will undergo chemical reactions in both directions until they reach a state of equilibrium, which is a state of the lowest possible free energy and of maximal entropy. To push the reactants and products away from a state of equilibrium requires energy. Either reactants or products must be added, removed, or changed. If a cell were a closed system, its chemical reactions would reach equilibrium, and it would die because there would be insufficient free energy left to perform the work needed to maintain life. In a living cell, chemical reactions are constantly moving towards equilibrium, but never reach it. This is because a living cell is an open system. Materials pass in and out, the cell recycles the products of certain chemical reactions into other reactions, and there is never chemical equilibrium. In this way, living organisms are in a constant energy-requiring, uphill battle against equilibrium and entropy. This constant energy supply ultimately comes from sunlight, whose energy is used to produce nutrients in the process of photosynthesis.

Activation Energy

There is another important concept that we must consider regarding endergonic and exergonic reactions. Even exergonic reactions require a small amount of energy input before they can proceed with their energy-releasing steps. These reactions have a net release of energy, but still require some initial energy. Scientists call this small amount of energy input, necessary for all chemical reactions to occur, the activation energy (or free energy of activation), abbreviated EA (Figure 6.10).

Why would an energy-releasing, negative ∆G reaction actually require some energy to proceed? The reason lies in the steps that take place during a chemical reaction. During chemical reactions, certain chemical bonds break and new ones form. For example, when a glucose molecule breaks down, bonds between the molecule's carbon atoms break. Since these are energy-storing bonds, they release energy when broken. However, to get them into a state that allows the bonds to break, the molecule must be somewhat contorted. A small energy input is required to achieve this contorted state. This contorted state is the transition state, and it is a high-energy, unstable state. For this reason, reactant molecules do not last long in their transition state, but very quickly proceed to the chemical reaction's next steps. Free energy diagrams illustrate the energy profiles for a given reaction. Whether the reaction is exergonic or endergonic determines whether the products in the diagram will exist at a lower or higher energy state than the reactants. However, regardless of this measure, the transition state of the reaction exists at a higher energy state than the reactants, and thus, EA is always positive.

Link to Learning

Watch an animation of the move from free energy to transition state at this site.

From where does the activation energy that chemical reactants require come? The source of the activation energy needed to push reactions forward is typically heat energy from the surroundings. Heat energy speeds up the motion of the molecules, increasing the frequency and force with which they collide. It also moves atoms and bonds within the molecule slightly, helping them reach their transition state. For this reason, heating a system will cause chemical reactants within that system to react more frequently. Increasing the pressure on a system has the same effect. Once reactants have absorbed enough heat energy from their surroundings to reach the transition state, the reaction will proceed.

The activation energy of a particular reaction determines the rate at which it will proceed. The higher the activation energy, the slower the chemical reaction. The example of iron rusting illustrates an inherently slow reaction. This reaction occurs slowly over time because of its high EA. Additionally, the burning of many fuels, which is strongly exergonic, will take place at a negligible rate unless sufficient heat from a spark overcomes their activation energy. However, once they begin to burn, the chemical reactions release enough heat to continue the burning process, supplying the activation energy for surrounding fuel molecules. Like these reactions outside of cells, the activation energy for most cellular reactions is too high for heat energy to overcome at efficient rates. In other words, in order for important cellular reactions to occur at appreciable rates (number of reactions per unit time), their activation energies must be lowered (Figure 6.10). Scientists refer to this as catalysis. This is a very good thing as far as living cells are concerned. Important macromolecules, such as proteins, DNA, and RNA, store considerable energy, and their breakdown is exergonic. If cellular temperatures alone provided enough heat energy for these exergonic reactions to overcome their activation barriers, the cell's essential components would disintegrate.
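The link between activation energy and reaction rate can be illustrated with the Arrhenius equation, k = A·exp(−EA/RT). The sketch below is illustrative only: the barrier heights and the pre-exponential factor A are invented, not measured values for any real reaction:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def rate_constant(ea_kj, temp_k, a=1.0e13):
    """Arrhenius rate constant for a barrier ea_kj (kJ/mol) at temp_k (K).
    The prefactor a is an assumed, order-of-magnitude value."""
    return a * math.exp(-ea_kj * 1000 / (R * temp_k))

# At body temperature (310 K), raising the barrier by 50 kJ/mol
# slows the reaction by roughly nine orders of magnitude.
print(rate_constant(100, 310) / rate_constant(50, 310))  # ~ 4e-9

# Heating the system speeds the same reaction up.
print(rate_constant(100, 330) / rate_constant(100, 310))  # > 1
```

This is why strongly exergonic reactions, such as the burning of fuel, can still sit unreacted until a spark supplies enough heat to carry molecules over the barrier.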

Visual Connection

If no activation energy were required to break down sucrose (table sugar), would you be able to store it in a sugar bowl?


Gibbs energy profiles have great utility as teaching and learning tools because they present students with a visual representation of the energy changes that occur during enzyme catalysis. Unfortunately, most textbooks divorce discussions of traditional kinetic topics, such as enzyme inhibition, from discussions of these same topics in terms of Gibbs energy profiles. Examination of the changes in the values of the apparent kinetic parameters KSapp, kcatapp, and (kcat/KM)app in response to various modes of inhibition may be informative to students when presented in combination with Gibbs energy profiles. Herein, the symbolism of standard Gibbs energy profiles is utilized to derive expressions for the changes in Gibbs energy associated with the apparent kinetic parameters and to describe their behavior in the presence of either a competitive, uncompetitive, noncompetitive, or linear mixed-type inhibitor under rapid equilibrium conditions. The approach is intuitive and complementary to the traditional derivations of enzyme kinetic equations.

Computation of complex and constrained equilibria by minimization of the Gibbs free energy

Calculation of chemical equilibria is a good way to determine the composition of a reacting system. This study presents a method for calculating complex equilibria with multiple reactions and phases and adapts it to externally constrained equilibria by introducing energy and kinetic constraints.

The proposed method is based on the minimization of the Gibbs free energy, taking into account mass (mole and atom) and charge balances. Examples using different thermodynamic models are presented as well as problems with energy and kinetic constraints. The results are in good agreement with the literature. The calculation method can be applied to a wide variety of fields. Possible extension of the work to electrochemical systems is also addressed.


  • Minimization of the Gibbs free energy is applied to constrained equilibria.
  • Maximizing entropy is replaced by minimizing Gibbs energy coupled with an energy balance.
  • Adiabatic and non-adiabatic problems can be treated by Gibbs minimization.
  • Kinetic constraints allow the evolution of the quasi-equilibrium state with time.
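The idea of finding equilibrium by minimizing the Gibbs energy can be sketched for the simplest possible case: an ideal A ⇌ B isomerization. The toy example below uses invented standard chemical potentials and a brute-force scan over the extent of reaction rather than a full constrained optimizer, but the principle is the same as in the paper's method:

```python
import math

R = 8.314e-3  # kJ mol^-1 K^-1
T = 298.0

# Assumed standard chemical potentials (kJ/mol) for A <=> B
mu_a, mu_b = 0.0, -5.0

def gibbs(x):
    """Total Gibbs energy per mole of mixture when a fraction x of A
    has converted to B, assuming an ideal mixture."""
    return ((1 - x) * (mu_a + R * T * math.log(1 - x))
            + x * (mu_b + R * T * math.log(x)))

# Brute-force minimization over the extent of reaction (0 < x < 1)
xs = [i / 10000 for i in range(1, 10000)]
x_eq = min(xs, key=gibbs)

# Analytic check: at the minimum, x/(1-x) = Keq = exp(-dG°/RT)
keq = math.exp(-(mu_b - mu_a) / (R * T))
print(x_eq, keq / (1 + keq))  # the two values agree closely
```

The minimum of the Gibbs energy curve coincides with the composition predicted by the equilibrium constant, which is exactly why Gibbs minimization and equilibrium-constant calculations give the same answer for unconstrained systems.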

You can't measure the energy of ATP itself, but you can measure the amount of energy that is released every time a phosphate group is removed from the molecule: about 30 kJ/mol.

Here are some more details on how the ΔG is experimentally determined.

Hydrolysis of the terminal phosphoanhydride bond is a highly exergonic process, releasing 30.5 kJ mol−1 of energy.

The actual value of ΔG for ATP hydrolysis varies, primarily depending on Mg2+ concentration, and under normal physiologic conditions is actually closer to −50 kJ mol−1.

The standard amount of energy released from hydrolysis of ATP can be calculated from the changes in energy under non-natural (standard) conditions, then correcting to biological concentrations. The net change in heat energy (enthalpy) at standard temperature and pressure of the decomposition of ATP into hydrated ADP and hydrated inorganic phosphate is −30.5 kJ/mol, with a change in free energy of 3.4 kJ/mol.[17] The energies released by cleaving either a phosphate (Pi) or a pyrophosphate (PPi) unit from ATP at a standard state of 1 M are:[18]

ATP + H2O → ADP + Pi ΔG° = −30.5 kJ/mol (−7.3 kcal/mol)

ATP + H2O → AMP + PPi ΔG° = −45.6 kJ/mol (−10.9 kcal/mol)

The values given for the Gibbs free energy for this reaction are dependent on a number of factors, including overall ionic strength and the presence of alkaline earth metal ions such as Mg2+ and Ca2+. Under typical cellular conditions, ΔG is approximately −57 kJ/mol (−14 kcal/mol).
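The gap between the standard value (about −30.5 kJ/mol) and the much larger cellular values comes from concentration: ΔG = ΔG°′ + RT ln Q, where Q is the reaction quotient. The sketch below uses plausible but assumed cytosolic concentrations, so the result is only a rough illustration of why the cellular ΔG is far more negative than the standard value:

```python
import math

R = 8.314e-3       # kJ mol^-1 K^-1
T = 310.0          # roughly body temperature, K
dg_standard = -30.5  # kJ/mol for ATP + H2O -> ADP + Pi at standard state

def dg_actual(atp, adp, pi):
    """Actual Gibbs energy change for ATP hydrolysis at the given molar
    concentrations (the activity of water is taken as 1)."""
    q = (adp * pi) / atp  # reaction quotient [ADP][Pi]/[ATP]
    return dg_standard + R * T * math.log(q)

# Assumed cytosolic concentrations: ATP kept high, ADP and Pi low,
# so Q << 1 and the logarithmic term is strongly negative.
print(dg_actual(atp=5e-3, adp=0.5e-3, pi=5e-3))  # roughly -50 kJ/mol
```

Because the cell holds the ATP/ADP ratio far from equilibrium, every hydrolysis event releases substantially more free energy than the standard-state figure suggests.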

Standard Gibbs energy of metabolic reactions: II. Glucose-6-phosphatase reaction and ATP hydrolysis

Hydrolysis of ATP (adenosine triphosphate) is a key reaction in metabolism. Tools from systems biology require standard reaction data in order to predict metabolic pathways accurately. However, literature values for the standard Gibbs energy of ATP hydrolysis are highly uncertain and differ strongly from each other. Further, such data usually neglect the activity coefficients of the reacting agents, so published data of this kind are apparent (condition-dependent) data rather than activity-based standard data. In this work a consistent value for the standard Gibbs energy of ATP hydrolysis was determined. The activity coefficients of the reacting agents were modeled with electrolyte Perturbed-Chain Statistical Associating Fluid Theory (ePC-SAFT). The Gibbs energy of ATP hydrolysis was calculated by combining the standard Gibbs energies of the hexokinase reaction and of glucose-6-phosphate hydrolysis. While the standard Gibbs energy of the hexokinase reaction was taken from previous work, the standard Gibbs energy of the glucose-6-phosphate hydrolysis reaction was determined in this work. For this purpose, reaction equilibrium molalities of the reacting agents were measured at pH 7 and pH 8 at 298.15 K at varying initial reacting-agent molalities. The corresponding activity coefficients at the experimental equilibrium molalities were predicted with ePC-SAFT, yielding a Gibbs energy of glucose-6-phosphate hydrolysis of −13.72 ± 0.75 kJ·mol−1. Combined with the value for the hexokinase reaction, the standard Gibbs energy of ATP hydrolysis was finally found to be −31.55 ± 1.27 kJ·mol−1. For both ATP hydrolysis and glucose-6-phosphate hydrolysis, good agreement with our own and literature values was obtained when the influences of pH, temperature, and activity coefficients were explicitly taken into account in order to calculate the standard Gibbs energy at pH 7, 298.15 K, and standard state.

Energy and enzymes

One way we can see the Second Law at work is in our daily diet. We eat food each day, without gaining that same amount of body weight! The food we eat is largely expended as carbon dioxide and heat energy, plus some work done in repairing and rebuilding bodily cells and tissues, physical movement, and neuronal activity.

Although living organisms appear to reduce entropy by assembling small molecules into polymers and higher-order structures, this work releases waste heat that increases the entropy of the environment.

Gibbs Free Energy

Gibbs free energy is a measure of the amount of work that is potentially obtainable. Instead of absolute quantities, what is usually measured is the change in free energy:

ΔG = ΔH − TΔS

where H = enthalpy (the heat energy content), T = absolute temperature (Kelvin), and S = entropy (sometimes called disorder, but a complicated and subtle concept that has more to do with degrees of freedom: 6 molecules of CO2 have greater entropy than a molecule of glucose, where the carbon atoms are linked together by covalent bonds).
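The relation ΔG = ΔH − TΔS shows how a large enough entropy gain can make a reaction spontaneous even when it absorbs heat. A minimal numerical check in Python, with illustrative (invented) values for ΔH and ΔS:

```python
def delta_g(dh, temp_k, ds):
    """Gibbs energy change: dh in kJ/mol, temp_k in K, ds in kJ/(mol K)."""
    return dh - temp_k * ds

# An endothermic reaction (dH > 0) can still be spontaneous if the
# entropy gain is large enough at the given temperature.
print(delta_g(dh=10.0, temp_k=298, ds=0.05))  # about -4.9 kJ/mol: exergonic
print(delta_g(dh=10.0, temp_k=100, ds=0.05))  # +5.0 kJ/mol: endergonic at low T
```

The same reaction can therefore flip from endergonic to exergonic as temperature rises, because the TΔS term grows with T.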

If ΔG < 0, a chemical reaction is exergonic, releases free energy, and will progress spontaneously, with no input of additional energy (though this does not mean that the reaction will occur quickly – see the discussion about reaction rates below).

If ΔG > 0, a chemical reaction is endergonic, requires or absorbs an input of free energy, and will progress only if free energy is put into the system; otherwise the reaction will go backwards.

Free energy and chemical equilibrium

If ΔG = 0, a chemical reaction is in equilibrium, meaning that the rates of forward and reverse reactions are equal, so there is no net change, or no potential for doing work.

This figure from Wikipedia illustrates that reactions will proceed spontaneously towards equilibrium, in either direction, and that the equilibrium point is the minimum free energy state of the reaction mixture. The x-axis (ξ) is the extent of reaction, running from pure reactants to pure products.

Cells couple exergonic reactions to endergonic reactions so that the net free energy change is negative

ATP is the primary energy currency of the cell. Cells accomplish endergonic reactions such as active transport, cell movement, or protein synthesis by tapping the energy of ATP hydrolysis:

ATP + H2O → ADP + Pi (Pi = PO4, inorganic phosphate) ΔG = −7.3 kcal/mol

Cycle of ATP hydrolysis to ADP and phosphorylation of ADP to ATP. The majority of endergonic reactions in cells are coupled to the exergonic hydrolysis of ATP to ADP. Image by Muessig retrieved from Wikimedia Commons, licensed CC-BY-SA 3.0

Over the next pages we’ll be looking at the cellular metabolic pathways that phosphorylate ADP to make ATP.

Reaction rates
Although the sign of ΔG (negative or positive) determines the direction that the reaction will go spontaneously, the magnitude of ΔG does not predict how fast the reaction will go.
The rate of the reaction is determined by the activation energy (the energy required to attain the transition state) barrier Ea:

Energy diagram of enzyme-catalyzed and uncatalyzed reactions, from Wikipedia

The peak of this energy diagram represents the transition state: an intermediate stage in the reaction from which the reaction can go in either direction. Reactions with a high activation energy will proceed very slowly, because only a few molecules will obtain enough energy to reach the transition state – even if they are highly exergonic. In the figure above, the reaction from X->Y has a much greater activation energy than the reverse reaction Y->X. Starting with equal amounts of X and Y, the reaction will go in reverse.

The addition of a catalyst (definition: an agent that speeds up the rate of a reaction, but is not consumed or altered by the reaction) provides an alternative transition state with lower activation energy. What this means is that the catalyst physically ‘holds’ the substrates in a conformation that makes the reaction more likely to proceed. As a result, in the presence of the catalyst, a much higher percentage of molecules (X or Y) can acquire enough energy to attain the transition state, so the reaction can go faster, in either direction. Note that the catalyst does not affect the overall free energy change of the reaction. Starting with equal amounts of X and Y, the reaction diagrammed above will still go in reverse, only faster, in the presence of the catalyst.
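That a catalyst speeds both directions equally, leaving the equilibrium ratio untouched, can be checked numerically with Arrhenius-style relative rate constants. All barrier heights below are invented for illustration; lowering the transition state by some amount reduces the forward and reverse barriers by that same amount:

```python
import math

R = 8.314e-3  # kJ mol^-1 K^-1
T = 298.0

def k(ea):
    """Relative Arrhenius rate constant for a barrier ea (kJ/mol)."""
    return math.exp(-ea / (R * T))

# Uncatalyzed: X -> Y barrier 80 kJ/mol, Y -> X barrier 70 kJ/mol
# (so Y sits 10 kJ/mol above X, and the reaction favors X, as in the figure).
# A catalyst lowering the transition state by 30 kJ/mol cuts BOTH barriers by 30.
keq_uncat = k(80) / k(70)
keq_cat = k(50) / k(40)
print(keq_uncat, keq_cat)  # identical: the catalyst does not shift equilibrium
print(k(50) / k(80))       # but the forward rate is ~ 1.8e5 times faster
```

The equilibrium constant depends only on the energy difference between X and Y, which the catalyst leaves unchanged; only the time taken to reach equilibrium shrinks.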

Enzymes speed up reactions by lowering the activation energy barrier

Enzymes are biological catalysts, and therefore not consumed or altered by the reactions they catalyze. They repeatedly bind substrate, convert, and release product, for as long as substrate molecules are available and thermodynamic conditions are favorable (ΔG is negative; the product/substrate ratio is lower than the equilibrium ratio). Most enzymes are proteins, but several key enzymes are RNA molecules (ribozymes). Enzymes are highly specific for their substrates. Only molecules with a particular shape and chemical groups in the right positions can interact with amino acid side chains at the active site (the substrate-binding site) of the enzyme.

Enzyme-catalyzed reactions have saturation kinetics

The velocity of enzyme-catalyzed reactions increases with the concentration of substrate. However, at high substrate concentrations, the quantity of enzyme molecules becomes limiting as every enzyme molecule is working as fast as it can. At saturation, further increases in substrate concentration have no effect; the only way to increase reaction rates is to increase the amount of enzyme.

Enzyme kinetics plot by Thomas Shafee, CC-BY-SA 4.0 from Wikimedia Commons

The kinetic properties of enzymes are defined by their Vmax and Km

  • The Vmax is the maximum rate at which enzymes can work, at saturating concentrations of substrate.
  • The Km (Michaelis constant) is defined as the substrate concentration that produces 1/2 Vmax, and is an inverse measure of the affinity of the enzyme for its substrate (a lower Km means a higher affinity).
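The saturation behavior and the meaning of Km both follow directly from the Michaelis–Menten equation, v = Vmax·[S]/(Km + [S]). A small sketch in Python, with arbitrary parameter values:

```python
def velocity(s, vmax=100.0, km=2.0):
    """Michaelis-Menten reaction rate at substrate concentration s
    (s and km in the same concentration units; vmax sets the rate units)."""
    return vmax * s / (km + s)

# At [S] = Km the rate is exactly half of Vmax
print(velocity(2.0))     # 50.0
# At very high [S] the enzyme saturates and the rate approaches Vmax
print(velocity(2000.0))  # ~ 99.9
```

Plotting velocity against s reproduces the hyperbolic saturation curve shown in the enzyme kinetics figure: steep at low substrate, flat near Vmax.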

Enzyme inhibitors
Enzymes are subject to regulation, and are the targets of many pharmaceutical drugs, such as non-steroidal pain relievers. Many enzymes are regulated by allosteric regulators which bind at a site distinct from the active site.

Noncompetitive inhibitors act allosterically (bind at a site different from the active site). When the noncompetitive inhibitor binds allosterically, it often changes the overall shape of the enzyme, including the active site, so that substrates can no longer bind to the active site.

Competitive inhibitors compete with the substrate for binding to the active site; the enzyme cannot carry out its normal reaction with the inhibitor bound, because the inhibitor physically blocks the substrate from binding the active site.
My lecture videos on thermodynamics and enzymes (have audio lag, plan to re-do in shorter segments)

Chapter 5 - Gibbs free energy – applications

The Gibbs free energy is important in biology research because it enables one to predict the direction of spontaneous change for a system under the constraints of constant temperature and pressure. These constraints generally apply to all living organisms. In the previous chapter we discussed basic properties of the Gibbs free energy, showed how its changes underlie a number of aspects of physical biochemistry, and touched on what the biological scientist might do with such knowledge. Here, we build on the introductory material and explore how it can be applied to a wide variety of topics of interest to the biological scientist. A range of examples illustrate when, where, why, and how the Gibbs free energy is such a useful concept. We shall discuss the energetics of different types of biological structure, including small organic molecules, membranes, nucleic acids, and proteins. This will help to give a deeper sense of the relatedness of some seemingly very different topics one encounters in biological science.

Photosynthesis, glycolysis, and the citric acid cycle

This section presents a low-resolution view of the energetics of photosynthesis, glycolysis, and the citric acid cycle. There can be no doubt that the details we omit are important: entire books have been written on each subject! But our aim here is to consider biological energy in a global, qualitative way. We want to try to see “the big picture.”

Watch the video: BIOL 220 chemical equilibria and Gibbs free energy part 2 of 3 (December 2022).