A. Why Chemical Logic? - Biology

What is the rationale for yet another biochemistry book? What do I mean by chemical logic?

Many who have taught chemistry (general, organic, biochemistry) from a traditional book invariably believe that the book would be better if it had a different organization or different conceptual framework. Biochemistry Online: An Approach Based on Chemical Logic was written, in part, to deal with issues of topic order and conceptual framework. New topics can then be introduced in a fashion which the students perceive not as random but as a logical extension based on a developing understanding.

Jakubowski, H. and Owen, W. The Teaching of Biochemistry: An Innovative Course Sequence Based on the Logic of Chemistry. J. Chemical Education, 75, 735 (1998).

A PowerPoint Slide presentation and a summary of the paper follow.

PowerPoint Slide Show: Biochemistry Based on Chemical Logic

Summary: The chosen topic order should create a coherent and sequential understanding of biochemistry, not a fragmented one without logical connections among topics. Textbook authors offer assistance in addressing these concerns in two ways: they implicitly suggest an order of presentation by how chapters are arranged, and they offer philosophical interpretations to describe biochemistry. Scrutiny of the philosophy statements ("Chemistry is the logic of biological phenomena"; "…common molecular patterns and principles underlie the diverse expression of life"; and "…molecular logic of life") and the chapter organization of textbooks reveals commonalities among textbooks.

This consensus, however, does not lead to a linkage between philosophy and content. The present organization of texts is not derived from the central dogma of biology, since in most books protein structure precedes significant discussions of nucleic acid structure/function. Rather, it seems to reflect evolving tradition based on historical trends in biochemistry research, as evidenced by the chapter organization of major biochemistry texts, starting from the 1935 edition of Harrow's Textbook of Biochemistry (4,5). Early texts commenced with discussions of carbohydrate chemistry, followed by lipids and then proteins. Texts from the late sixties onward invariably led with protein chemistry and deferred carbohydrate and lipid chemistry until much later (5-7).

Although modern authors speak of a "chemical logic", it is not evident in textbook organization. Biochemistry Online was written to present biochemistry in the framework of a higher order organizing principle, based in chemical logic and understanding, from which topics and order of presentation derive. New topics can then be introduced in a fashion which the students perceive not as random but as a logical extension based on a developing understanding.

Chemical Logic

Throughout the course, three major recurring chemical principles become evident: structure determines function/activity; binding reactions initiate all biological events; and chemical principles, such as dynamic equilibria (mass action), and reaction kinetics and mechanisms, derived from the study of small molecules, can be applied to the behavior of macromolecules. The order of the topics is based on evolving chemical logic.

Topic 1 - Lipid Structure: The first topic is lipid and lipid aggregate structure/function, instead of amino acids and proteins, as is typically presented. Prior to taking a biochemistry course, students have had little significant exposure to the chemical properties of macromolecules, so beginning with the study of small molecules makes sense. Since most lipids are amphiphiles, their structural diversity can be simplified by considering them as simple structures with spatially distinct polar and nonpolar ends. Single-chain and double-chain amphiphiles aggregate in a thermodynamically spontaneous manner to form micelle and bilayer structures, respectively, with the nonpolar parts sequestered from water and associated with themselves, and the polar parts solvent-accessible. This simple model introduces students to the notion that structure mediates properties, to the important concept of intermolecular forces, and to the thermodynamics of the hydrophobic effect, all critical elements ultimately required to understand the much more complicated topic of protein folding and stability. The concepts of mass conservation, dynamic equilibria and kinetics, and chemical potential are used to understand how aggregation at equilibrium depends on amphiphile concentration. Lipids serve as useful models to introduce stereochemistry and prochirality as well. Likewise, it is easier to understand how torsion angle changes in the aliphatic side chains of a phospholipid molecule alter acyl chain packing than it is to understand the complexities of a Ramachandran plot. From a chemical perspective, it is more logical to introduce the spontaneous self-assembly of small amphiphilic molecules into large multi-molecular aggregates than to start with the physicochemical properties of twenty different amino acids, which vary in size and hydrophobicity, and proceed to the complexities of intramolecular protein folding reactions.
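
A minimal sketch of the mass-action treatment alluded to above (the two-state monomer/aggregate simplification and the symbols are ours, not the course's): treating micelle formation as an equilibrium between free amphiphile monomers A and aggregates containing n monomers gives

\[
n\,A \;\rightleftharpoons\; A_n, \qquad K = \frac{[A_n]}{[A]^n}, \qquad C_{\mathrm{total}} = [A] + n\,[A_n].
\]

Because n is large for micelles (often on the order of 50-100 monomers), the aggregate term n K [A]^n is negligible at low amphiphile concentration and then rises extremely steeply; above a threshold, nearly all added amphiphile partitions into aggregates while the free-monomer concentration plateaus, which is critical micelle concentration behavior.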

Topic 2 - Protein Structure: The understandings derived from the study of lipids can then be applied to the more complex subject of intramolecular protein folding reactions and protein stability. A more expanded and modern view of the hydrophobic effect and the associated heat capacity change is presented, along with the denaturing effect of chain conformational entropy. The roles of the hydrophobic effect and H-bonds in protein stability are extrapolated from the behavior of benzene in water, from thermodynamic cycles involving the transfer of N-methylacetamide from water to a nonpolar solvent, and from mutational studies. Dynamic and linked equilibria considerations, along with reaction kinetics, are used to describe the varying effects of denaturing (urea, guanidinium chloride) and stabilizing (ammonium sulfate, glycerol) solutes on protein stability, as well as the competing processes of protein folding and aggregation in vitro and in vivo.
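
The "expanded and modern view" of the hydrophobic effect and its heat capacity signature is often summarized with the standard two-state stability curve; the notation below is generic and not taken from the course materials. With a temperature-independent unfolding heat capacity change ΔCp and a melting temperature Tm at which ΔG vanishes,

\[
\Delta G_{\mathrm{unf}}(T) \;=\; \Delta H_m\!\left(1-\frac{T}{T_m}\right) \;+\; \Delta C_p\!\left[(T-T_m) - T\,\ln\frac{T}{T_m}\right].
\]

Because ΔCp for unfolding is large and positive (a signature of exposing buried nonpolar surface to water), ΔG(T) is a curved function with a maximum, which is why proteins can denature on both heating and cooling.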

Topic 3 - Nucleic Acid and Carbohydrate Structure: The same principles which determine protein structure/function can be applied to the study of the structure and stability of nucleic acids, complex carbohydrates, and glycoproteins.

Topic 4 - Binding: Function now necessarily follows. Since all biological events are initiated by binding, a purely physical process, the logic of chemistry suggests it should be studied next. Indeed, in most textbooks, introductory chapters on protein function focus on the binding of dioxygen, a simple ligand, to myoglobin and hemoglobin. Macromolecule-drug interactions, as well as cell-cell adhesion can be discussed as additional relevant examples. The control of gene expression, a topic of preeminent importance to modern biologists, can be discussed from the logic of chemistry as an essential outcome of the binding of transcription factors and appropriate enzymes to each other and DNA in the active transcription complex. It is particularly important to stress how equilibrium and mass conservation principles, along with reaction kinetics, effectively determine the concentration-dependent behavior of all molecules, including the processes of binding and spontaneous structure formation.
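
For concreteness, the equilibrium and mass-conservation relations implied here can be written out in their standard textbook form (a sketch, not material quoted from the course): for a single-site macromolecule M binding a ligand L,

\[
M + L \;\rightleftharpoons\; ML, \qquad K_d = \frac{[M]\,[L]}{[ML]}, \qquad
Y \;=\; \frac{[ML]}{[M] + [ML]} \;=\; \frac{[L]}{K_d + [L]},
\]

a hyperbolic (myoglobin-like) saturation curve; cooperative binders such as hemoglobin are commonly described with the empirical Hill form Y = [L]^n / (K^n + [L]^n) with n > 1, which gives the familiar sigmoidal curve.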

Topic 5 - Binding and Transport: Binding is an antecedent to the expression of biological activity. The simplest expression of activity involving a purely physical, noncovalent process is the binding and transport of solute molecules across a biological membrane. Mathematical analysis of the flux of solute across a membrane catalyzed by a transport protein involves the same assumptions (rapid equilibrium/steady-state binding) and leads to the same equations (hyperbolic dependence of flux on outer solute concentration, effect of competitive inhibitors) as when Michaelis-Menten enzyme kinetics mechanisms are modeled.
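
That parallel can be made explicit. Under a rapid-equilibrium or steady-state treatment of carrier-mediated transport (generic symbols, our sketch), the flux J of solute S across the membrane is

\[
J \;=\; \frac{J_{\max}\,[S]_{\mathrm{out}}}{K_S + [S]_{\mathrm{out}}},
\qquad\text{and with a competitive inhibitor I,}\qquad
J \;=\; \frac{J_{\max}\,[S]_{\mathrm{out}}}{K_S\left(1 + \dfrac{[I]}{K_I}\right) + [S]_{\mathrm{out}}},
\]

i.e., a hyperbolic dependence of flux on outer solute concentration in which the inhibitor raises the apparent K_S without changing J_max, exactly mirroring v = V_max[S]/(K_M + [S]) from Michaelis-Menten kinetics.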

Topic 6 - Binding and Kinetics: The study of enzyme kinetics follows as a logical extension of the expression of molecular function involving the addition of a more complex step, namely a chemical transformation. Through the study of enzyme kinetics, students learn how to obtain a low resolution understanding of the structure/activity of enzymes and of their chemical mechanisms.

Topic 7 - Binding and Chemical Transformations: Next, the detailed mechanisms of specific enzymes whose structures are known are discussed. Preceding this, the basis for catalysis by small molecules is discussed. Following the chemical logic that the properties of macromolecules can be inferred from small molecules, students learn that, with respect to catalysis, enzymes are "not different, just better" than small-molecule catalysts, as previously described by Jeremy Knowles (1).

Topic 8 - Energy and Signal Transduction: The final sequence involves specific examples of how enzymes can transduce both energy and information signals into usable outputs. Energy transduction, involving the conversion of light, electrochemical gradients, or chemical energy into phosphoanhydride bonds, is discussed. Special attention is paid to biological oxidation reactions. Several questions are introduced to provoke discussion and challenge students' knowledge of oxidation reactions. Students propose reasons to explain the fact that oxidation reactions of organic molecules using dioxygen are thermodynamically but not kinetically favored, as well as to explain the need for different types of biological oxidizing agents for energy transduction. Signal transduction at the cell membrane serves as an excellent capstone area of study since it incorporates ideas from each sequence.

References

1. Knowles, J. Nature 350, 121-124 (1991).


Biological engineering

Biological engineering or bioengineering is the application of principles of biology and the tools of engineering to create usable, tangible, economically-viable products. [1] Biological engineering employs knowledge and expertise from a number of pure and applied sciences, [2] such as mass and heat transfer, kinetics, biocatalysts, biomechanics, bioinformatics, separation and purification processes, bioreactor design, surface science, fluid mechanics, thermodynamics, and polymer science. It is used in the design of medical devices, diagnostic equipment, biocompatible materials, renewable energy, ecological engineering, agricultural engineering, process engineering and catalysis, and other areas that improve the living standards of societies.

Examples of bioengineering research include bacteria engineered to produce chemicals, new medical imaging technology, portable and rapid disease diagnostic devices, prosthetics, biopharmaceuticals, and tissue-engineered organs. [3] [4] Bioengineering overlaps substantially with biotechnology and the biomedical sciences in a way analogous to how various other forms of engineering and technology relate to various other sciences (such as aerospace engineering and other space technology to kinetics and astrophysics).

In general, biological engineers attempt to either mimic biological systems to create products, or to modify and control biological systems. Working with doctors, clinicians, and researchers, bioengineers use traditional engineering principles and techniques to address biological processes, including ways to replace, augment, sustain, or predict chemical and mechanical processes. [5] [6]


Chemistry Explained

  • Cooking: Chemistry explains how food changes as you cook it, how it rots, how to preserve food, how your body uses the food you eat, and how ingredients interact to make food.
  • Cleaning: Part of the importance of chemistry is that it explains how cleaning works. You use chemistry to help decide which cleaner is best for dishes, laundry, yourself, and your home. You use chemistry when you use bleaches and disinfectants, even ordinary soap and water. How do they work? That's chemistry.
  • Medicine: You need to understand basic chemistry so you can understand how vitamins, supplements, and drugs can help or harm you. Part of the importance of chemistry lies in developing and testing new medical treatments and medicines.
  • Environmental Issues: Chemistry is at the heart of environmental issues. What makes one chemical a nutrient and another chemical a pollutant? How can you clean up the environment? What processes can produce the things you need without harming the environment?

We humans are all chemists. We use chemicals every day and perform chemical reactions without thinking much about them. Chemistry is important because everything you do is chemistry! Even your body is made of chemicals. Chemical reactions occur when you breathe, eat, or just sit there reading. All matter is made of chemicals, so the importance of chemistry is that it's the study of everything.


Deductive Reasoning

Deductive reasoning has you starting with information or an idea that is called a premise. Eventually you come up with conclusions that are based on your original premise. Sherlock Holmes, that detective guy from the books, uses deductive reasoning to solve mysteries. Think of it this way:
(1) If this happens.
(2) and this happens.
(3) then you can come to this conclusion. If the premises are true, then your conclusion should also be true.


Cell circuits remember their history: Engineers design new synthetic biology circuits that combine memory and logic

MIT engineers have created genetic circuits in bacterial cells that not only perform logic functions, but also remember the results, which are encoded in the cell's DNA and passed on for dozens of generations.

The circuits, described in the Feb. 10 online edition of Nature Biotechnology, could be used as long-term environmental sensors, efficient controls for biomanufacturing, or to program stem cells to differentiate into other cell types.

"Almost all of the previous work in synthetic biology that we're aware of has either focused on logic components or on memory modules that just encode memory. We think complex computation will involve combining both logic and memory, and that's why we built this particular framework to do so," says Timothy Lu, an MIT assistant professor of electrical engineering and computer science and biological engineering and senior author of the Nature Biotechnology paper.

Lead author of the paper is MIT postdoc Piro Siuti. Undergraduate John Yazbek is also an author.

Synthetic biologists use interchangeable genetic parts to design circuits that perform a specific function, such as detecting a chemical in the environment. In that type of circuit, the target chemical would generate a specific response, such as production of green fluorescent protein (GFP).

Circuits can also be designed for any type of Boolean logic function, such as AND gates and OR gates. Using those kinds of gates, circuits can detect multiple inputs. In most of the previously engineered cellular logic circuits, the end product is generated only as long as the original stimuli are present: Once they disappear, the circuit shuts off until another stimulus comes along.

Lu and his colleagues set out to design a circuit that would be irreversibly altered by the original stimulus, creating a permanent memory of the event. To do this, they drew on memory circuits that Lu and colleagues designed in 2009. Those circuits depend on enzymes known as recombinases, which can cut out stretches of DNA, flip them, or insert them. Sequential activation of those enzymes allows the circuits to count events happening inside a cell.

Lu designed the new circuits so that the memory function is built into the logic gate itself. With a typical cellular AND gate, the two necessary inputs activate proteins that together turn on expression of an output gene. However, in the new circuits, the inputs stably alter regions of DNA that control GFP production. These regions, known as promoters, recruit the cellular proteins responsible for transcribing the GFP gene into messenger RNA, which then directs protein assembly.

For example, in one circuit described in the paper, two DNA sequences called terminators are interposed between the promoter and the output gene (GFP, in this case). Each of these terminators inhibits the transcription of the output gene and can be flipped by a different recombinase enzyme, making the terminator inactive.

Each of the circuit's two inputs turns on production of one of the recombinase enzymes needed to flip a terminator. In the absence of either input, GFP production is blocked. If both are present, both terminators are flipped, resulting in their inactivation and subsequent production of GFP.

Once the DNA terminator sequences are flipped, they can't return to their original state—the memory of the logic gate activation is permanently stored in the DNA sequence. The sequence also gets passed on for at least 90 generations. Scientists wanting to read the cell's history can either measure its GFP output, which will stay on continuously, or if the cell has died, they can retrieve the memory by sequencing its DNA.
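
The gate logic just described is easy to caricature in a few lines of code. The sketch below is purely illustrative; the class, names, and two-terminator configuration are our simplification of the description above, not the authors' software.

```python
class RecombinaseAndGate:
    """Toy model of a two-input AND gate with DNA-encoded memory.

    Each input drives a recombinase that irreversibly flips one
    terminator; the output gene (GFP) is transcribed only after both
    terminators have been flipped. Because flipping is one-way, the
    state persists after the inputs disappear.
    """

    def __init__(self):
        # False = terminator intact (blocks transcription of GFP)
        self.flipped = {"terminator_1": False, "terminator_2": False}

    def expose(self, input_a=False, input_b=False):
        """Apply (or withhold) the two chemical inputs."""
        if input_a:
            self.flipped["terminator_1"] = True   # irreversible flip
        if input_b:
            self.flipped["terminator_2"] = True   # irreversible flip

    def gfp_on(self):
        """GFP is produced only once both terminators are inactivated."""
        return all(self.flipped.values())


cell = RecombinaseAndGate()
cell.expose(input_a=True)        # one input alone: gate stays off
print(cell.gfp_on())             # False
cell.expose(input_b=True)        # second input arrives later
print(cell.gfp_on())             # True
cell.expose()                    # both inputs removed
print(cell.gfp_on())             # True -- the event is remembered
```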

Using this design strategy, the researchers can create all two-input logic gates and implement sequential logic systems. "It's really easy to swap things in and out," says Lu, who is also a member of MIT's Synthetic Biology Center. "If you start off with a standard parts library, you can use a one-step reaction to assemble any kind of function that you want."

Such circuits could also be used to create a type of circuit known as a digital-to-analog converter. This kind of circuit takes digital inputs—for example, the presence or absence of single chemicals—and converts them to an analog output, which can be a range of values, such as continuous levels of gene expression.

For example, if the cell has two circuits, each of which expresses GFP at different levels when they are activated by their specific input, those inputs can produce four different analog output levels. Moreover, by measuring how much GFP is produced, the researchers can figure out which of the inputs were present.
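
A toy numerical version of that digital-to-analog idea (the per-circuit expression levels below are invented for illustration; the article does not specify them):

```python
# Hypothetical GFP contributions of the two circuits (arbitrary units).
GFP_LEVELS = {"circuit_A": 1, "circuit_B": 2}

def analog_output(input_a: bool, input_b: bool) -> int:
    """Total GFP level produced by a given combination of digital inputs."""
    total = 0
    if input_a:
        total += GFP_LEVELS["circuit_A"]
    if input_b:
        total += GFP_LEVELS["circuit_B"]
    return total

def decode(total_gfp: int) -> tuple:
    """Infer which inputs were present from the measured GFP level."""
    return (total_gfp % 2 == 1, total_gfp >= 2)

for a in (False, True):
    for b in (False, True):
        level = analog_output(a, b)
        print(f"inputs=({a}, {b}) -> GFP level {level}, decoded {decode(level)}")
# Two digital inputs yield four distinguishable analog levels: 0, 1, 2, 3.
```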

That type of circuit could offer better control over the production of cells that generate biofuels, drugs or other useful compounds. Instead of creating circuits that are always on, or using promoters that need continuous inputs to control their output levels, scientists could transiently program the circuit to produce at a certain level. The cells and their progeny would always remember that level, without needing any more information.

Used as environmental sensors, such circuits could also provide very precise long-term memory. "You could have different digital signals you wanted to sense, and just have one analog output that summarizes everything that was happening inside," Lu says.

This platform could also allow scientists to more accurately control the fate of stem cells as they develop into other cell types. Lu is now working on engineering cells to follow sequential development steps, depending on what kinds of inputs they receive from the environment.

Michael Jewett, an assistant professor of chemical and biological engineering at Northwestern University, says the new design represents a "huge advancement in DNA-encoded memory storage."

"I anticipate that the innovations reported here will help to inspire larger synthetic biology efforts that push the limits of engineered biological systems," says Jewett, who was not involved in the research.


Op-Ed: Why a Nepali doctor is treating the biology — and the sociology — behind mental illness

The Nepali doctor Rishav Koirala is, by his own admission, an unusual Nepali. He’s a fan of Jim Morrison and the Doors, loves European philosophy and practices psychiatry in a country where medical schools offer little or no mental health training. What makes him especially unusual is that as the world embraces the idea that mental illnesses should be seen as brain disorders, Koirala is pushing back.

Mental illnesses are the leading cause of disability in the world. But in Nepal, mental illnesses are considered so shameful that few people get help. After the 2015 earthquake, as doctors from other countries came to diagnose and treat survivors with post-traumatic stress disorder, few Nepalis wanted the diagnosis. Local counselors believed that people with PTSD — which is translated into Nepalese as the stigmatized phrase “mental shock” — had brain diseases or bad karma and were predisposed to commit murder or die by suicide.

The unwillingness to accept diagnosis or seek care might seem odd to some readers. But in any given year, close to 60% of people with any mental illness in the U.S. receive no mental health treatment or counseling.

Most scientists argue that stigma is the biggest barrier to mental health care in the U.S. and the world, and that stigma can be reduced if people understand that mental illnesses are neurological diseases, a proposition Koirala rejects.

As psychiatrist Nancy Andreasen argued in her landmark book, “The Broken Brain,” discrimination against people with mental illnesses derives from ignorance, “from a failure to realize that mental illness is a physical illness, an illness caused by biological forces and not by moral turpitude.”

Dr. Thomas Insel, former director of the National Institute of Mental Health, wrote of mental illnesses, “We need to think of these as brain disorders.”

The focus on the brain in mental health research today is understandable. A person with a broken leg probably won’t hesitate to see a doctor, but the median time from first psychosis to psychiatric care in the U.S. is 74 weeks. Perhaps, the logic goes, a broken-brain model will shift responsibility from the person to the organ.

But there is no evidence that reframing mental illnesses as brain disorders reduces the associated stigma. Wherever doctors describe someone with a mental illness as having a chemical imbalance or abnormal brain circuitry, they provide reasons to fear that person. A German survey showed that the more people learned about the biology of mental illnesses, the more they reported a desire for social distance from people with a psychiatric diagnosis. A U.S. study showed that from 1996 to 2006, the American public increasingly saw mental illnesses as neurobiological, but this did not “significantly lower odds of stigma.”

It’s official, California: COVID-19 has left us sick with worry and increasingly despondent. And young adults — ages 18 to 29 — are feeling it worst.

Koirala does not reject the neurobiological bases of mental illnesses. What he rejects is the idea that such frameworks are helpful in breaking down barriers to care.

A few years ago, Koirala helped set up a temporary "mental health camp" in a remote area of Nepal. Despite misgivings, he let his co-workers call it a "mental health camp," using the Nepalese word dimaag for "mental," a word that refers to the brain and its ability to function properly. No patients came. Someone with an impaired dimaag will be seen as seriously damaged and might be prevented from marrying, fired from a job or banished from the family.

When he set up the site again several months later, he called it a camp for “headaches.” Patients showed up, almost everyone was diagnosed with depression or anxiety, and they were treated — and got better.

Koirala now talks to his patients less about the brain than their physical symptoms, like headaches and fatigue, or what he calls “the heart.” He tells patients that within every person are two hearts, an inside heart and an outside, or observable, heart. “We are all aware of our outside heart,” he says. “It comprises all the emotions and physical symptoms that we feel and that others can see.” The inside heart, however, the true source of mental illness, is often hidden from our awareness.

To treat the neediest, Koirala traveled to an isolated region of Nepal and encountered a man with schizophrenia. His family had immobilized him with a wooden device secured around his foot that locked with a nail above the ankle, preventing his foot from slipping out. They said the device was for his own safety; without it, he'd run away.

Koirala put the man on an antipsychotic medication and met up with him a few months later. He was a “totally different person” and had made a “remarkable” recovery, Koirala said. Why did the family accept the treatment? Because Koirala understood that culture, not biology, gives meaning to suffering: He depicted mental illness as a disorder of the heart.

Neuroscience may someday generate treatments so curative that mental illnesses will lose their stigma. But we’re not there yet. The brain is far more complicated than any other organ. And mental illnesses are not just biological. They are shaped by more factors than we can imagine — biology, yes, but also childhood, poverty, social supports and social stressors. Experience itself changes the architecture of the brain.

We should, therefore, approach neurobiological models of mental illness with caution and, like Koirala, do what works. That means addressing the lived experience of suffering. Sure, we know that children with attention deficit hyperactivity disorder tend to have subtle differences in brain structure compared with their peers without ADHD, but that finding doesn’t translate into better special education. We know that people with schizophrenia have brain circuits that develop differently, but that knowledge does nothing to diminish stigma, or one’s history of being discriminated against.

We cannot and probably never will see mental illnesses through a microscope, or test for them in a laboratory. That’s not because psychiatry has failed, but because experience isn’t written in our cells. So let’s study the brain while also studying the societies in which we live and suffer. Culture is, at least, something we now have the power to understand and change.

Roy Richard Grinker is a professor of anthropology and international affairs at George Washington University and author of “Nobody’s Normal: How Culture Created the Stigma of Mental Illness.”


Materials and Methods

OE explant culture and labeling.

OE explants were prepared as previously described [39] and cultured with 10 ng/ml recombinant FGF2 and varying concentrations of GDF11 (PeproTech). After 18 h, bromodeoxyuridine (BrdU) cell-labeling reagent was added at 1:10,000 (#RPN201, Amersham). Two hours later, explants were washed with cold thymidine (10 μM, Sigma-Aldrich), growth factors were replenished, and cultures were grown for either 16 or 34 h longer (total culture time was either 30 or 48 h). For 48-h cultures, FGF2 and GDF11 were refreshed after 40 h in vitro.

Explants were fixed and stained with rat monoclonal anti-NCAM H28 and mouse monoclonal anti-BrdU antibodies as described [39]. Immunoreactivity was visualized with Cy2 donkey anti-rat IgG (1:50, Jackson ImmunoResearch) and Texas Red goat anti-mouse IgG1 (1:50, Jackson ImmunoResearch). To compare the percentage of ORNs produced by INPs in each culture condition, total migratory BrdU+ cells were counted in at least 15 fields each of duplicate cultures per condition and scored for BrdU and NCAM immunofluorescence by an experimenter blind to the treatment condition, to ensure lack of bias.

Immunohistochemistry and in situ hybridization to tissue sections.

Embryos were dissected in room-temperature phosphate-buffered saline (PBS, pH 7.2) and heads were fixed in 4% paraformaldehyde in PBS overnight at 4 °C, then cryoprotected, embedded, sectioned, and processed as described [34]. For Ngn1 in situ hybridization, tissue was processed using digoxigenin-labeled cRNA probes [34]. FST immunostaining was performed using R&D Systems goat anti-human FST antibody (10 μg/ml final concentration) and visualized with biotinylated horse anti-goat IgG (1:250) in combination with the Vector MOM Immunodetection Kit (PK-2200, Vector Labs) according to the manufacturer's instructions.

Computational methods.

Mathematical analysis and numerical simulation were carried out with the assistance of Mathematica (Wolfram Research). Codes used for all cases shown are provided in Protocols S1–S3.

Accession numbers.

Gene accession numbers used in the manuscript refer to the Mouse Genome Informatics database, http://www.informatics.jax.org/.


The Curious Wavefunction

In the Wall Street Journal, the physics writer Jeremy Bernstein has a fine review of "Ordinary Geniuses," Gino Segre's new joint biography of George Gamow and Max Delbruck, which I just started reading.

49 comments:

Just a note: The "open dots" used as "attachment points" look an awful lot like O's. I had several seconds of confusion as to why a beta amino acid contained a peroxide ("well, *that's* not stable!")

If you ever have an opportunity to redo the figure, I might recommend using filled dots instead.

Done, thanks! (I wonder if a beta amino acid with the central carbons substituted by oxygens can be even fleetingly synthesized!)

These types of questions/scenarios are especially important with origin-of-life science.

I use the term "reductionist" differently from you, I think. I call using physics to predict biology "constructionism", while "reductionism" is looking at modern biology and figuring out what was there earlier in the process of evolution.

Why can't Gamow and Delbruck's superintelligent freak being predict giraffes? If He/She knew all the laws of physics, why couldn't He/She have predicted the seemingly "random" events -- "chance" point mutations in proto-giraffe genes -- that led to the existence of giraffes? After all, those were simply caused by radiation damage to DNA, or mis-catalysis by a DNA-replicating enzyme, or ... -- in other words, something physical. It seems to me Gamow and Delbruck abandon the reductionist logic prematurely. Their assumption appears to be that truly random events do exist, but is that the case, or do they only appear random to us mere mortals?

Yes, a superfreak could have predicted the set of all possible mutations. But there was still no way to decide which ones among those would prove beneficial and help the species evolve and propagate.

However, your point about the perceived randomness of events is an interesting one. "Random" does not necessarily mean non-deterministic. In my head it has more to do with probabilities. Random events have probabilities that cannot be predetermined and therefore cannot be predicted.

Given limitless computational power, it could have predicted giraffes as a possibility, among billions of other possibilities. It could not have said with certainty that giraffes, as we know them, would occur. Prediction through billions of bifurcation points is not possible. This is a key observation of chaos theory, in which small initial differences lead to widely divergent outcomes, rendering long-term prediction impossible.

I have to say that I like Paul's usage of the phrase "constructionism." It suggests - at least to me - that while one should consider it necessary to be able to bridge physics to chemistry and chemistry to biology (and I suppose physics to biology as well), it is not going to be sufficient to provide a complete understanding.

Of course, I have to wonder if we're sometimes being overly demanding in expecting physics to lead into chemistry and/or biology - for example, the entire topic of protein folding seems to be a fairly popular one in the (bio)chemical blogosphere. There are classic physical systems that still invoke equally lively discussions in the literature (especially thinking of glass-forming systems here), despite having been available on the scientific research buffet for longer. And people are expecting equally or even more detailed physical pictures of protein folding?

Having said that, I suspect it's a function of where one sits - I have the impression that the physicists (and physical chemists) who are interested in biological problems aren't going to be inclined towards explaining the chemistry of amino acids. They're going to be more interested in understanding, say, signal transduction, where they can control the strength of a signal (ligand concentration) and measure the output (some sort of enzymatic activity being up- or down-regulated). If the receptor clusters, then they're off to comparing the Ising model vs MWC vs whatever else they can devise via simulations and subsequent comparison to experimental data.

My two cents, change likely is warranted.

In response to Wavefunction: Thanks for your reply. You say that "a superfreak could have predicted the set of all possible mutations. But there was still no way to decide which ones among those would prove beneficial and help the species evolve and propagate."

But the theoretical superfreak can also predict the set of all possible mutations for *other* organisms in the system, not just the giraffe. Why can't He/She enumerate those possibilities simultaneously? This information would be the basis for decisions about which giraffe mutations would prove beneficial.

This obviously implies an astronomical number of theoretical evolutionary pathways, but this is just a thought experiment anyway, so let's pretend He/She can evaluate each step of each pathway based on just physics. What information is lacking for this being to predict evolutionary history, unless we invoke truly random events?

I don't quite follow your point on randomness vs. deterministic events with probabilities, so I'm not sure how that fits in. By the way, I don't want to undermine your article, because I think it's extremely interesting and provocative! I just wanna poke you about it a bit.

FullyReduced: I appreciate your poking, it's a very interesting issue. The problem as I see it is that a lot of evolution has been governed by the propagation of events which might have appeared to be low probability events beforehand. Thus, even if the superfreak could calculate every single mutation in every organism along with every single environmental condition that could lead to these mutations being preserved, how could he/she/it know which one of those countless combinations will actually be the one that finally exists?

You are right that among the countless scenarios predicted by the superfreak will be the universe and earth that we inhabit. But there is still no way to decide beforehand that this particular universe would be the one that actually materializes, part of the reason being that the a priori possibility of such a universe arising might be very low and there is no reason why the superfreak will pick a low probability event as the preferred one.

MJ: I find your mention of other (poorly understood) classical systems interesting. As one example of why we may perhaps be overly demanding, consider that we cannot even accurately calculate the solvation energy for simple organic molecules (except for cases where you parametrize the system to death and use a test set that's very similar to the training set). With our knowledge at such a primitive level, it might indeed be overly demanding to try to predict protein folding which is orders of magnitude more complex.

By the way, reductionism is supposed to imply the kind of constructionism (sometimes called "upward causation") that Paul mentions. The fact that it does not speaks volumes.

Interesting argument. I agree with you for the most part, though I feel that all of your chemistry examples are actually classified as "biochemistry."

I suppose when one gets down to it, it's not just chemistry and biology - any system where one is looking at the behavior of many (interacting) entities is going to be complicated, and - I will be overly generous here - deriving its properties from first principles is going to be an extremely opaque process at best. People are still getting headaches from the entire "strong correlations in condensed matter" problem. Not being able to break out the "non-interacting, independent particles" approximation really irritates people. Especially when it fails to properly account for the properties that don't naturally fall out of said approximation. Heh.

I do think, though, that the conceptual tools and formalisms that one develops in the physics can find fruitful new applications in biology and chemistry - although how much of that is just the unreasonable effectiveness of mathematics is always up for debate, I suppose.

As a related followup to my previous comment in this thread, someone fortuitously sparked my memory today - there are those chemists incorporating parity violation into their calculations to explain chirality. I remember hearing that the expected spectral differences might be too small to reasonably observe for the lighter elements spectroscopically, so they were starting to look at heavy element compounds.

@Anonymous: Interesting point: "Prediction through billions of bifurcation points is not possible. This is a key observation of chaos theory. "

But you use the term _prediction_, i.e. a guess from a human perspective with (by our very nature) limited data. In other words, it's not clear the findings of chaos theory negate determinism; rather, they seem to refute our ability to predict deterministic systems given our imperfect ability to gather information about the natural world.

I guess what I'm trying to do here is separate out what's theoretically possible from what _we're_ capable of. (Of course, "theoretically" implies theories _we_ came up with, so maybe this is ultimately a dead end. ) Thoughts?

@WaveFunction: I think we may have different assumptions about what kind of predictive calculations this theoretical superfreak is capable of. You appear to assume mutations are truly random events and can't be predicted using the laws of physics. On the other hand, I assume mutations are determined by the physics of intermolecular interactions, incoming environmental radiation, etc. and that history proceeds in a stepwise fashion in which each step can be predicted from (1) the last step and (2) the laws of physics. (Of course, for this to be true, I'm assuming the superfreak has _perfect_ knowledge of the _true_ laws of physics, which of course we as a species do not currently have.) Thus the superfreak has knowledge of each mutation, and -- coupled with His/Her _complete_ knowledge of the environment -- can predict whether it will be retained.

But that's a purely theoretical point, and I freely admit that large swaths of evolution were determined by (what _we_ see as) random events. From _our_ perspective, chaos theory comes into play here, as @Anonymous mentioned.

Thanks again for the article -- good stuff here.

Reduced: You are right that the findings of chaos theory don't preempt determinism; there is a reason why the field is termed "deterministic chaos". I have always found the line between the lack of prediction "in principle" and "in practice" somewhat fuzzy in the case of chaotic systems. These systems are definitely (mostly) unpredictable in practice. But being predictable in principle would mean being able to specify the initial conditions of the system to an infinite degree of accuracy. I don't know if this is possible even in principle.

MJ: Do you know of any parity-themed papers on chirality for the intelligent layman?

@Wavefunction: Great point on infinite precision of initial condition definitions -- I feel that's finally the bridge between practical and theoretical limitations I was looking for.

By the way: for posterity's sake, do you know what happened to my previous @Wavefunction post? It seems to be AWOL, and the conversation is kind of disjointed without it.

The old adage: "the more you know about physics, the simpler it gets and the more you know about biology, the more complicated it becomes."

However, fundamentally it is all physics. The fact that we cannot grasp the connection is our epistemological shortcoming. Moreover, it is clear that Nature is not perfect, so all it has to do is work. Maybe 25 amino acids will work better than 20, maybe a different protein fold along the way would not lead to cancer, but it does not matter. Eventually evolution will sort things out, given the right environment.

I unfortunately don't know of any good review papers off the top of my head, but I would imagine if you search for Peter Schwerdtfeger (the big-name theorist down in NZ), you'd eventually find something suitable. The entire "parity violation and chirality" topic was something that momentarily caught my eye when I was puzzling over a sideline topic a while back. From what I know, there hasn't yet been any experimental verification, although various metrology/precision spectroscopy groups are going after it.

Also, something to think about in relation to chaos and being able to specify initial conditions - in classical systems, you describe your system in terms of its particles' position and momenta. Given that, one can specify said position and momenta exactly. When one moves into quantum mechanics, you suddenly now have a distribution in position & momenta that is a small "patch" in phase space that is proportional to Planck's constant, as one can only jointly localize position and momentum so far. I suppose this is why I find the mere notion of quantum chaos to give me headaches thinking about the evolution of little hyperblobs in six-dimensional phase space. Heh.

I think a better analogy for physics compared to the biology and chemistry examples given would be whether the supersmart freak could predict the number and location of stars in the galaxy/universe and how many planets are around each star.

Chemists, knowing the fundamental laws of chemistry, can give knowledge of the properties and reactivity of as yet unknown compounds in much the same way physicists can with atoms.

Biologists cannot be compared because biology deals with the specific system of life that has already arisen. To ask why this freak couldn't predict a giraffe when a giraffe isn't part of its biologic system is not an apt comparison in any way.

Adding my voice to some of the others:

If physics is deterministic (so let's suppose that a nonlocal hidden variable interpretation of quantum mechanics is correct), and if our superfreak (or Laplacian demon, as the more traditional account has it) knows the complete physical state in addition to all the laws (and has an unlimited computational capacity), then the superfreak will be able to predict the existence of giraffes and every other biological detail.

Leaving out relevant physical details (i.e., the initial conditions) does not show that biology is non-physical. Of course, it is true that physical dynamics alone will never tell us what sort of creatures evolve and which don't, but why would we ever think it might? Mere physical laws can't even tell us that there will be protons and neutrons.

Physicalist, Bryan and Andre: Just want to make sure we carefully define what we mean by the reduction of biology to physics. It does not mean proving that all of biological matter is composed of basic subatomic constituents, which is an obvious fact. It really means the "constructionism" that Paul was talking about. If we can truly reduce biology to physics, it must mean that we should be able, at least in principle, to do the opposite: construct the present biological world starting from the basic laws of physics.

However it is not clear how we could go about doing this even in principle. The question is not just one of epistemology but of ontology. Again, I think Kauffman's example is very cogent. Even if the structure of the mammalian heart could be predicted in principle, it would be impossible to predict beforehand that the most important function of the heart among myriad others is to pump blood. The problem is not just epistemological in that we lack knowledge of all the conditions that could ultimately lead to a heart but ontological, namely that even if we had the knowledge we would be unable to assign probabilities to various scenarios. To me this seems to be the basic issue.

I am not sure the existence of protons and neutrons was as subject to chance and circumstance as the evolution of the giraffe, since it can be predicted based on very basic principles of energetic stability and knowledge of the fundamental forces. So can the synthesis of the elements. But everything from then onwards seems much more subject to chance and accident.

But you cannot necessarily construct the present physical world from the basic laws of physics, let alone the chemical or biological worlds. This is why the idea of predicting a giraffe doesn't seem to fit with the idea of predicting the elements.

Could the planet earth be predicted from physical laws (not the life on earth, but the specific planetary make-up)? That is more akin to the giraffe example.

More interesting (and IMO more appropriate) questions would be the following: Could, starting from the basic universal physical laws, complex or sentient life be predicted? Can the idea of biology itself be predicted?

Question. Does the difference between biology (and chemistry) and physics boil down to the difference between inductive logic and deductive logic? Inductive logic (biology) reasons from a specific case to a general pattern, whereas deductive logic (physics) reasons from or applies general axioms and principles to a specific case. JR, Greenville, SC

So did the laws of physics evolve or have they always "existed?"

Anon 2: I am one of those people who think of a law as a compressed description of a set of regularities in nature. In this sense something that represents the law that we use must have existed since the beginning.

Andre: I think that's a much better and more challenging question to answer, and it takes us into all kinds of philosophical territory, including the distinction between living and non-living. I am not sure physics could have predicted the existence of biology as we know it. But given enough time it probably could have led to aggregates of matter that demonstrate at least some features of life (at least growth). However, from an ontological viewpoint I don't think physics could have predicted life since, according to physics, life is nothing but a special but still uninteresting arrangement of quarks (or strings or whatever the physicists are calling it these days). There is no way a physicist could predict the various functions that the arrangement corresponding to, say, a human being could perform.

Anon 1: To some extent yes and that has been the main problem with reductionism, although it has also been responsible for reductionism's phenomenal triumphs.

Here is a longer response to anon 2, if it is not off topic. The answer to anon 2 is neither one of the alternatives: the laws of physics were created. When there has never ever been a car, this is like someone starting to build a car with no rules or ideas about what a car is, what it looks like, how it works, what it does, how it is different from a carrot, etc. On the other hand, if no one starts to build something and something just appears by chance, the order and regularity that we see in the universe is astonishing. It is one thing if rules exist from the beginning for organizing creation according to statistics and chance. It is quite another if the rules themselves are the product of chance and appear out of "blind, thoughtless, mindless nothing". How can a plan for the universe appear out of thoughtless, mindless nothing? This is like waiting around for a rock that does not exist to have an idea. J. R. Greenville

Interesting article, and very instructive as to the deep complexity of biochemistry. Thank you. But I do think the premise really just traffics in the different semantic usages of "deterministic," the level of certainty you ascribe to the laws of physics as currently understood, and the capabilities of your hypothetical observer.

Let's say your "superfreak" has, in the religio-philosophical sense, complete omniscience but is still time-bound (i.e. does not simultaneously exist in the future as well as the past). If the "superfreak" has complete information about the laws of physics, AND has existed since the beginning of the universe, AND has the capacity to store and process information about every particle and wave function in the universe, then why couldn't he predict everything as it will actually turn out? The "random" mutation resulting from a "random" particle hitting a "random" atom in a "random" protein is only random if you assume that the "superfreak" hasn't followed each of those atoms, particles, etc. since the beginning of time.

Of course, if one assumes that the "superfreak" is bound by the laws of quantum mechanics as currently understood -- so that uncertainty and probability are built in as part of the laws he "knows" -- then it's true that he couldn't predict everything. But such a definition in the hypothetical makes physics, as well as biology, non-deterministic. At that point, all you're saying is that biology is "less deterministic" because it involves larger sets of particles, but each of those particles is itself fundamentally unpredictable outside of probability.

Curious Wavefunction says:
"If we can truly reduce biology to physics, it must mean that we should be able to . . . construct the present biological world starting from the basic laws of physics."

Again, this form of "reduction" is just a non-starter. If you insist that we only have reduction when the laws (and only the laws) specify some feature of the world, then nothing can be reduced (except the laws themselves).

The physical laws are compatible with the complete absence of matter, so the laws are never going to tell you whether there's matter or a total vacuum.

The criterion for reduction that you're using is unhelpful, because on this account nothing can be reduced to physics.

A much more useful account of reduction is one which asks whether we can predict some feature if we are given both the laws and the complete physical state (and unlimited computational power, since we're interested in ontology not epistemology).

In this case, it seems clear that the Laplacian demon (superfreak) would predict the existence of giraffes (though the demon might not call them "giraffes").

"Even if the structure of the mammalian heart could be predicted in principle, it would be impossible to predict beforehand that the most important function of the heart among myriad others is to pump blood."

I know famous people like Fodor and Searle make this claim (I didn't realize Kauffman did; I'll have to look at his book at some point), but it's just wrong:

(a) Even if we insist that one needs to know the evolutionary history of a trait to know its “real function,” the Laplacian demon would have all of that information available. It knows the complete history of the total physical state of the universe.

(b) If our demon ("superfreak") is smart enough to care about which functions are "the most important" then it should have little difficulty recognizing that the function of the heart is to pump blood (even without peeking at the past). It would be able to recognize certain self-regulating processes that maintain themselves against the flows of entropy, and it would be able to recognize that the heart's circulating blood is an important component of this self-sustaining process (whereas, for example, the sound the heart makes is not).

Now, if we stipulate that our demon is not allowed to care about any structure or order above the level of particles, then you’re right that the demon will be ignorant of biological facts. But with this stipulation, the demon would also be ignorant of the shape of planets, the temperatures of stars, the rigidity of ice, and so on and so on. But this just shows that we shouldn’t make such a stipulation if we’re trying to figure out the ontology of the world.

Re: most recent posts from @Anonymous and @Physicalist:

I completely agree! I think I was stumbling to express similar thoughts earlier -- glad to see some backup. Thinking back now, the argument that initial conditions cannot in principle be precisely defined is really a limit to the superfreak's ability to predict *any* physical, chemical, or biological feature -- not just biological features like giraffes. So there's still no genuine distinction between the fields in that sense.

Also, function is a much fuzzier concept than physical existence, which makes it hard to blame the superfreak for any inability to predict function ab initio.

That said, to the extent that function can be defined, perhaps on the grounds of persistence of features/structures despite high entropy as Physicalist suggests (essentially a historical definition), the superfreak would have all the necessary information to make such a judgment, because He/She would know the entire physical history of the universe.

Again, thanks for the fascinating thought experiment, Wavefunction, but I've ended up entirely unconvinced!

I agree with FullyReduced's comment. Sadly the article is yet another example of a scientist misunderstanding the two different meanings of "reductionism". The first, simpler one, is theoretical and has to do with composition: if I take apart a person or a cell or even my alarm clock, I won't find any fundamental particles or forces unknown to physics. The second meaning is practical and has to do with explanation and prediction and suggests that all phenomena are best explained at the level of physics. The first is a bedrock of modern science and should be more widely promoted. The second is endorsed by essentially no scientists but is often confused by the public with the first. The first is the reason the superintelligent freak could (in principle) predict the entire history of life on earth. The second fails because we mere mortals can't. I wish scientists like this blog author and Kauffman would stop doing a disservice to the public and be more clear about the two different meanings.

I find this debate fascinating and want to thank everyone for contributing. Firstly, with reference to FullyReducible's point, I am pretty sure that I (and presumably Kauffman) are not confusing the two types of reductionism. In fact the first statement - that everything is ultimately composed of quarks or strings or whatever - is not even reductionism; it's an obvious fact that nonetheless tells us nothing about complexity, since there's no context-specific dependence built into it. For instance, it cannot even tell us why two molecules with exactly the same atomic composition will have wildly different properties (again as a function of their environments).

Now that we have gotten the first kind of non-reductionism out of the way, let’s focus on the second kind which matters. I don’t know why it’s so hard for the reductionists here to understand the difference between enumeration of all possibilities and the assignment of probabilities to each of these possibilities. I have already agreed that a superintelligent freak could list all of the countless events that would encompass the random mutations and effects of chance that we are talking about. But it would be impossible to assign a priori probabilities to all these events and predict that the net probability of our current universe existing is 1. This would be possible only if the superintelligent freak knows the entire future of the cosmos, in which case the discussion becomes meaningless and unscientific.

Now let’s talk about function. I find FullyReduced’s statement about function being fuzzy very interesting (and it probably means we agree more than you think!) since that’s precisely why reductionism fails when it comes to function. It is precisely because ‘function’ is a result of the laws of physics compounded with chance that it’s difficult to predict on the basis of the laws alone. This leads into Physicalists’s objection that the Laplacian demon would be able to predict the function of the heart based on the environment in which it is embedded. But this environment itself is a result of countless chance events and encounters. So even if the demon could enumerate the many possible functions of the heart in advance, it would not be possible to say which function would turn out to be most important in our current environment. We are again facing the distinction between the a priori enumeration of possibilities and the assignment of weights to those probabilities. With reference to Physicalist’s last statement, it’s not so much that the demon is not allowed to care about structure, form and function but it’s really that she does not even know which form and function she should care about.

This discussion also leads into Physicalist's very interesting point that nothing may be reducible to physics, since the laws of physics also support a universe without matter. That is absolutely true. In fact that's precisely why I find the idea of multiple universes, each compatible with the laws of physics, so alluring. Multiple universes would allow us to make a perfectly good case for non-reductionism without destroying the utility and value of the laws of physics. Extending the distinction between enumeration and valuation, it would mean that the laws of physics can indeed list every possible universe that could exist but are agnostic with reference to our own.


Comparison With Classical Methods to Understand Localization and Association

Protein localization is a critical parameter governing protein function. For instance, many proteins gain new associations or functions upon translocation, leading to important cellular responses. In some cases, the degree of translocation or partitioning of a protein between different organelles can be minimal. For instance, only a 2- to 3-fold increase in nuclear RNR-α levels is sufficient to elicit suppression of DNA synthesis (Fu et al., 2018). Whether such small fold changes can be reliably detected by APEX localization studies and similar methods remains, in our opinion, to be conclusively proven.

The question of where proteins localize has traditionally been studied by immunofluorescence (IF) and fractionation. Both methods are powerful and often give consistent outcomes. These methods are ostensibly quantitative and so in principle can give an idea of the relative amounts of a protein in one locale versus another, and can measure even quite small changes.
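As a minimal sketch of what such quantification looks like in practice (our illustration; all intensity values are hypothetical placeholders, not data from any cited study), one might estimate a nuclear/cytoplasmic fold change from background-subtracted signals obtained by quantified IF images or fractionation blots:

```python
# Minimal sketch (illustrative only): estimating a nuclear/cytoplasmic fold
# change from background-subtracted intensities. All numbers are hypothetical.

def nuclear_fraction(nuc_signal, cyto_signal, background=0.0):
    """Fraction of total signal found in the nuclear compartment."""
    nuc = max(nuc_signal - background, 0.0)
    cyto = max(cyto_signal - background, 0.0)
    total = nuc + cyto
    return nuc / total if total > 0 else float("nan")

# Hypothetical measurements (arbitrary units) before and after a stimulus.
before = nuclear_fraction(nuc_signal=120.0, cyto_signal=880.0, background=20.0)
after = nuclear_fraction(nuc_signal=310.0, cyto_signal=760.0, background=20.0)

fold_change = after / before
print(f"Nuclear fraction: {before:.2f} -> {after:.2f} ({fold_change:.1f}-fold)")
```

With numbers on the scale of the RNR-α example above (a 2- to 3-fold nuclear increase), the reliability of such an estimate hinges entirely on antibody specificity and background subtraction, which is exactly where the limitations discussed below come in.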

However, it is worth remembering that traditional methods tend to suffer from limited spatial resolution and low sensitivity, for a number of reasons. First, both readouts typically rely on antibodies, so validating specificity with clear controls (knockout/siRNA) is important, and in practice background labeling can limit signal to noise in both IF and western blotting. Both methods also suffer from intrinsic artifacts: in IF, fixation can affect protein localization and antigen presentation, whereas the use of fluorescent proteins can affect target protein localization; during fractionation, proteins can leak from membranes, or there can be contamination from unintended structures. Thus, in our opinion at least, perhaps the biggest improvement that reactive labeling methods bring to localization studies is the ability to couple an unambiguous readout (MS) to a stringent tagging protocol that is strongly spatially restricted.

There are estimated to be ~650,000 protein-protein interactions (PPIs) in human cells, although this number reflects only a fraction of a percent of the total number of possible pairwise interactions (Stumpf et al., 2008). There are likely many more possible associations when one considers protein-DNA/protein-RNA interactions and non-degenerate higher-order complexes. Many of these PPIs are robust, with relatively long half-lives and Kd's in the nanomolar range. Such interactions can be readily assessed by classic methods such as co-IP, native gels, or 2D-PAGE. These methods have the benefit that they can be carried out in native cells, tissue, etc. However, the requirement to lyse cells can introduce artifacts due to loss of cellular compartmentalization, allowing interactions that do not happen in the cell to occur (Fu et al., 2018), or losing weaker interactions (French et al., 2016). Weaker/more transient associations can be studied by semi-classical methods such as cross-linking (either chemical or UV). Crosslinking methods have the benefit of “trapping” the complex in the cell, prior to lysis, giving more confidence of cellular relevance and eliminating the possibility of post-lysis association. However, the use of reactive cross-linkers also brings the possibility of off-target cross-linking, can perturb cellular homeostasis, can mask epitopes, and may not be compatible with other transformations/experimental protocols. The reaction products of cross-linking experiments are also complex aggregates that require extensive verification and (typically) excellent antibodies that have been rigorously validated. Nonetheless, oftentimes protein complexes/aggregates can be resolved using SDS-PAGE, allowing hetero/homo-dimers and/or higher-order aggregates to be assigned with reasonable accuracy (Aye et al., 2012).
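To put the "fraction of a percent" figure in perspective, a back-of-the-envelope calculation (our illustration, assuming roughly 20,000 human protein-coding genes and counting only unordered, non-self binary pairs) runs as follows:

```python
import math

# Back-of-the-envelope estimate (illustrative assumptions, not from Stumpf et al.):
# ~20,000 protein-coding genes, counting unordered non-self pairs only.
n_proteins = 20_000
possible_pairs = math.comb(n_proteins, 2)   # ~2.0e8 possible binary pairs
estimated_ppis = 650_000                    # estimate cited in the text

print(f"Possible pairwise interactions: {possible_pairs:.2e}")
print(f"Estimated PPIs as a fraction:   {estimated_ppis / possible_pairs:.2%}")
# -> roughly 0.3%, i.e., a fraction of a percent, consistent with the text.
```

Counting proteoforms, splice variants, and higher-order assemblies would only make the denominator larger.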

Even though post-lysis associations are minimized by cross-linking, little information is offered concerning where in the cell the association occurs. This can be addressed by imaging experiments. Fluorescence colocalization of FP- or otherwise-tagged proteins, or immunofluorescence, has been used to visualize associations in cells (Pedley and Benkovic, 2017), as have FRET (Kenworthy, 2001) and similar methods (Coffey et al., 2016). The use of proximity ligation (Fredriksson et al., 2002; Bellucci et al., 2014), which is read out via immunofluorescence on fixed cells, is also increasing. This method uses DNA-tagged antibodies that, when in “close” proximity (~40 nm), can template a rolling-circle amplification reaction, allowing puncta to be observed in the specific cellular compartments where an association occurs. This method is signal-amplifying, and hence very sensitive. However, since the distance covered by this method (~40 nm) is much larger than most proteins, the resolution is likely insufficient to “prove” a “direct” interaction.
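To see why ~40 nm cannot distinguish direct contact from mere proximity, it helps to compare that distance with the size of a typical globular protein. The sketch below (our illustration; order-of-magnitude only) estimates protein diameter from molecular weight using a typical partial specific volume:

```python
import math

# Rough size of a globular protein from its molecular weight (our illustration):
# V = M * v_bar / N_A, with a typical partial specific volume v_bar ~ 0.73 cm^3/g,
# then treat the protein as a sphere. Numbers are order-of-magnitude only.
N_A = 6.022e23          # molecules per mol
V_BAR = 0.73            # cm^3/g, typical for proteins

def diameter_nm(mw_kda: float) -> float:
    volume_cm3 = (mw_kda * 1000.0) * V_BAR / N_A      # cm^3 per molecule
    volume_nm3 = volume_cm3 * 1e21                    # 1 cm^3 = 1e21 nm^3
    radius_nm = (3.0 * volume_nm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 2.0 * radius_nm

for mw in (25, 50, 100):
    print(f"{mw:>3} kDa protein: ~{diameter_nm(mw):.1f} nm across")
# A ~50 kDa protein is only ~5 nm across, so a ~40 nm proximity-ligation
# radius spans several protein diameters and cannot by itself prove contact.
```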

There are numerous genetic methods to probe PPIs. The most commonly used is the yeast two-hybrid (Y2H) assay (Vidal and Fields, 2014). This method uses a split transcription factor, one part of which is fused to a bait protein and the other of which is typically fused to a series of test proteins. Pairwise combinations of the bait and each test construct are expressed in yeast. When the bait and a test protein interact, the split transcription factor is reconstituted into a functional protein and typically drives transcription of a gene required for survival, such that only cells expressing a test protein that interacts with the bait survive. Aside from the requirement to use ectopically expressed fusion constructs rather than the native proteins, criticism has been leveled at this method because yeast is not a similar environment to human cells in terms of complexity, organelle structure, and the posttranslational modifications it is capable of. Interactions must also occur in the nucleus. Furthermore, many Y2H methods are based on a 2-micron plasmid system (Chan et al., 2013) that gives high expression of each protein, which “may” produce false positives. However, false positives are clearly not as detrimental as false negatives, which are also abundant due to incomplete coverage of screening libraries, incomplete expression, and poor folding. The use of autonomously replicating sequence (ARS)-containing plasmids can alleviate the issue of high protein expression/high copy number (Newlon and Theis, 1993).

Y2H has been extended to mammalian cells, where more complex modifications are possible, but many of the same issues remain, and library generation is arguably more complex. Non-allelic non-complementation is a screening method that looks for unexpected non-complementation (i.e., where a cross of two strains carrying mutations in different genes does not give viable offspring) and can be carried out in numerous organisms (Firmenich et al., 1995; Rancourt et al., 1995; Yook et al., 2001). The likely explanation for such an effect is that the two proteins reside in the same pathway; commonly they form a complex that is so depleted in the double heterozygote that complementation is not possible. Although this is clearly an indirect assay, it has proven very informative, and variations of it have been used to uncover interesting aspects of cancer biology (Davoli et al., 2013). Aside from these in-cell-relevant experiments, phage display has also been used for high-throughput protein-protein interaction screening (Gibney et al., 2018). This method is sensitive and accurate, but it cannot be employed in cells (Kokoszka and Kay, 2015).

Chemotype-Specific Sensing and Signaling: REX Technologies

REX technologies developed by our laboratory were ultimately aimed at studying the signaling function of reactive electrophilic species (RES) in living systems with individual-protein specificity and in precise space and time (Figure 4) (Fang et al., 2013; Lin et al., 2015; Parvez et al., 2015, 2016; Long et al., 2017a,b, 2018a; Hall-Beauvais et al., 2018; Surya et al., 2018; Zhao et al., 2018). The method uses custom-designed bifunctional small-molecule probes [such as Ht-PreHNE for controlled release of the native electrophile 4-hydroxynonenal (HNE)]. One terminus of the probe binds HaloTag irreversibly by virtue of a pendant alkyl chloride function. The other end of the bifunctional probe delivers a payload of a specific reactive electrophilic species, e.g., HNE, upon light illumination (t1/2 of release for various enal/enone-derived electrophiles, ρ min) (Lin et al., 2015). Upon RES liberation, sensor proteins responsive to a given RES must rapidly intercept the RES prior to diffusion and/or degradation/metabolism (Liu et al., 2019). Thus, the concept underlying REX technologies is unusual in that it harnesses intrinsic “reactivity/affinity-matching” between the released ligand and (a) POI(s) (Long and Aye, 2016, 2017; Long et al., 2016, 2017c; Parvez et al., 2018; Poganik et al., 2018; Liu et al., 2019). HaloTag-targetable photocaged probes such as Ht-PreHNE (1� μM) are tolerated by cells for > 2 h, and by worms/developing fish for several days (Parvez et al., 2016; Long et al., 2017a,b, 2018a; Hall-Beauvais et al., 2018; Surya et al., 2018; Zhao et al., 2018). Ht-PreHNE does not affect the DNA damage response, ubiquitination, or several other essential processes in cells and fish (Parvez et al., 2016; Long et al., 2017a,b; Zhao et al., 2018).
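To convey why interception by a sensor protein must out-compete diffusion and metabolism, the sketch below models photorelease as a simple first-order process (our illustration; the half-life value is assumed purely for illustration, since the exact figure is not legible in the text above):

```python
import math

# Illustrative first-order photorelease model (our sketch; the half-life below
# is an assumption for illustration only): fraction of caged payload released
# after t minutes of continuous illumination.
def fraction_released(t_min: float, t_half_min: float = 1.0) -> float:
    k = math.log(2) / t_half_min          # first-order rate constant (1/min)
    return 1.0 - math.exp(-k * t_min)

for t in (0.5, 1, 2, 3):
    print(f"{t:>4} min light: {fraction_released(t):.0%} of payload released")
# With a ~1 min assumed half-life, an illumination window of a few minutes
# liberates most of the payload as a short, well-defined pulse that nearby
# sensor proteins must intercept before it diffuses away or is metabolized.
```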

Figure 4. REX technologies to interrogate precision electrophile signaling (T-REX) and to mine kinetically-privileged sensors (KPSs) of specific reactive electrophilic species (RES) (G-REX). (A) T-REX electrophile delivery. A functional Halo-POI fusion protein is expressed either transiently or stably in live cells, worms, or larval fish. Treatment of these living models with a bio-inert REX probe [photocaged RES (with or without alkyne functionalization)] (1� μM, 1–2.5 h, depending on the system) results in stoichiometric covalent binding of the probe to Halo. After several rounds of exchange with fresh growth media/buffer containing no probe (to wash out the unbound REX probe), light exposure (1–3 min, 365 nm, 0.3–5 mW/cm2, depending on the system) liberates a specific RES of choice (with or without alkyne modification) within the microenvironment of Halo-POI, thereby giving the POI first refusal of the RES. Labeling occurs provided the POI is a KPS of this RES. Provided the resulting substoichiometric RES-modification of the POI is sufficient to elicit either gain-of-function or dominant loss-of-function signaling responses, T-REX presents a unique means to directly link target engagement to function. [We define such sensors that can elicit dominant responses at low occupancy as privileged first responders (PFRs)]. When the alkyne-modified version of the probe is used, the magnitude of measured responses can be quantitatively correlated with POI target occupancy (by fluorescence-gel-based analysis following Click coupling of the alkyne-functionalized, RES-modified POI with an azido-fluorophore). (B) G-REX profiling. G-REX enables genome-wide direct identification of KPSs under controlled and RES-limited conditions. Cells ectopically expressing HaloTag protein are treated with the same REX probe used in T-REX (but the alkyne-modified version) under conditions similar to those deployed in T-REX. Without fusing Halo to any protein, the G-REX approach, which allows user-defined time-, dose-, and locale-controlled release of a specific RES, is set up to directly capture the (localized) native sensors (i.e., KPSs) most responsive to the liberated RES, at low-occupancy covalent RES-modification. Cell lysis and Click coupling with biotin-azide, followed by streptavidin enrichment, allow RES-bound KPS(s) to be identified by digest LC-MS/MS. The resultant top hits can be functionally validated using T-REX (A).

We discuss below two different REX technologies, as well as potential or as-yet-unnoticed shortcomings of these methods.

T-REX: Target-Specific Reactive Small-Molecule Sensing and Signaling

T-REX (Figure 4A) uses a HaloTag-POI fusion to give a specific POI first refusal of the RES (e.g., HNE) photouncaged from Halo (Fang et al., 2013; Lin et al., 2015; Parvez et al., 2015, 2016; Long et al., 2017a,b, 2018a; Hall-Beauvais et al., 2018; Surya et al., 2018; Zhao et al., 2018). In this way, a specific POI, provided it is HNE-sensitive, can be HNEylated against the backdrop of a largely unperturbed cell. T-REX gives relatively high RES occupancy of a specific POI but incurs very little RES-modification/stress of the total proteome (Parvez et al., 2016; Long et al., 2017a,b; Zhao et al., 2018). Thus, T-REX is also a highly spatially-restricted method and has proven to be compatible with numerous other chemical biology/genetic techniques. Finally, because individual POIs are modified, the functional downstream responses elicited as a consequence of a specific POI-RES interaction can be read out. Interestingly, proteins that are appreciably modified by HNE under T-REX tend to undergo phenotypically-dominant effects as a consequence of substoichiometric HNEylation (Lin et al., 2015; Parvez et al., 2015, 2016; Long and Aye, 2017; Long et al., 2017b; Zhao et al., 2018). Thus, T-REX has established that some proteins are wired to react rapidly with HNE and to modulate signaling at fractional occupancy. We have dubbed such proteins privileged first responders (PFRs) (Long and Aye, 2017; Parvez et al., 2018; Poganik et al., 2018; Zhao et al., 2018; Liu et al., 2019). Using T-REX, HNEylation at individual-protein-specific levels has been shown to impact numerous critical signaling subsystems and pathway intersections, including ubiquitination (Zhao et al., 2018) and phosphorylation (Long et al., 2017b).

The POI-specific nature of T-REX renders the method not particularly high-throughput; G-REX (vide infra) (Zhao et al., 2018) can assume this role when needed. Critically, because T-REX uses ectopic expression, RES-labeling and downstream signaling require the HaloTag protein to be fused to the POI: expressing the POI and HaloTag separately and replicating T-REX in this “split” control system ablates both POI RES-modification and downstream signal propagation (Lin et al., 2015; Parvez et al., 2015, 2016; Long et al., 2017b). Similar controls were recently introduced and shown to be effective for APEX2 (Ariotti et al., 2015, 2018). We have also identified point mutants that are enzymatically or functionally active but do not sense the RES delivered under T-REX conditions (Long et al., 2017b; Surya et al., 2018; Zhao et al., 2018). Notably, these mutants are also refractory to the downstream signaling changes induced upon T-REX (Long et al., 2017b; Surya et al., 2018; Zhao et al., 2018).

T-REX has found application in several model organisms, such as C. elegans and larval zebrafish (Long et al., 2017b, 2018a; Hall-Beauvais et al., 2018; Zhao et al., 2018); G-REX has not yet been applied in these systems. T-REX was used in fish embryos to study the effects of HNEylation of two different sensor proteins, Ube2V2 (Poganik et al., 2018; Zhao et al., 2018) and Akt3 (Long and Aye, 2017; Long et al., 2017b). It was noted that in these systems, expression of the transgenes was similar to that of the endogenous proteins (Long et al., 2017b; Zhao et al., 2018), rendering the systems more “natural” than cultured cells, where the level of Halo-POI overexpression was significant. Satisfyingly, in both cases delivery and downstream signaling were observed in zebrafish much as in cell culture. However, because of the implicit requirement for UV light, which penetrates tissue poorly, whole-organism studies with T-REX on, for instance, mice or adult fish are not yet possible. This current limitation would not restrict use in certain organs like the brain or blood, however. Two-photon-compatible photocages would render REX technologies more broadly applicable and would also lower the overall impact of the method on UV-sensitive molecules/processes, such as DNA synthesis/repair and RNA regulation.

G-REX: Genome-Wide Assay for Protein Reactivity With Specific Electrophiles

G-REX (Figure 4B) was established to address limitations underlying existing RES-sensor profiling strategies, which rely upon high doses of reactive covalent chemicals applied for long periods of time. Such flooding strategies tend to incur significant off-target effects due to mass action. Although these approaches likely achieve high occupancy and modification of multiple potential targets, they also affect physiology through, for instance, perturbation of the cellular redox environment and induction of stress and apoptosis. RES permeability, intracellular distribution, metabolism, and specific subcellular redox environments altogether render the consequences of treating cells with a reactive molecule such as HNE highly context dependent.

G-REX is designed to release a small, defined pulse of (alkyne-functionalized) RES [e.g., ߥ μM of HNE over 2–5 min in HEK293T cells with ubiquitous Halo expression (Zhao et al., 2018)]. Under these controlled conditions, PFRs to HNE are identified. HNEylated proteins are biotinylated by Click coupling with azido-biotin, then precipitated, resolubilized, and enriched on streptavidin prior to mass spectrometry. Using this approach, several PFRs to HNE, including Ube2V2 and Ube2V1, were identified, as well as numerous known HNE sensors. Importantly, any hit enriched by G-REX can be validated for HNE-sensing and HNEylation-specific signaling function using T-REX. By contrast, G-REX itself is not intended to study downstream signaling.

Using the coupled G-REX/T-REX strategy, Ube2V2 and Ube2V1 were validated to be HNE-sensitive, and their modification was shown to impact the respective downstream signal propagation (Zhao et al., 2018). Several biochemical methods further document these findings. Thus, G-REX is an unusual strategy in that it is a global method that aims to achieve only low occupancy on target proteins (Liu et al., 2019). Its spatial resolution is currently unknown, although HaloTag itself has been successfully localized to specific subcellular compartments. It also remains unknown how diffusive/reactive HNE is once released, which may intrinsically limit the method's utility for organelle-specific release.

G-REX has several method-specific limitations. First, G-REX releases only a brief, low-concentration pulse of RES. Thus, G-REX is a “target-poor” strategy and could potentially miss some privileged sensors. Such issues can be mitigated by repeating experiments multiple (three or more) times and by integrating quantitative proteomics such as SILAC (Ong et al., 2002) or TMT labeling (Thompson et al., 2003). However, MS analysis is costly and time consuming, and these constraints should be considered when planning/choosing G-REX. To enable target ID, an alkyne-functionalized variant of the native RES is used in G-REX. For lipid-derived electrophiles (LDEs), alkyne tagging is minimally (if at all) invasive, and alkynylated versions of many drugs have been successfully deployed for target ID (Wright and Sieber, 2016; Parker et al., 2017). Radioisotope tagging and antibody-based affinity methods present alternatives to the alkyne handle. However, antibodies offer much lower sensitivity than alkyne-based Click coupling/enrichment, and radioisotope incorporation may prove difficult to apply to highly reactive electrophiles, where there is significant background radioactivity, especially given the low occupancy of RES-modification that underlies G-REX. To users' benefit, biotin/streptavidin-based enrichment permits the non-alkynylated electrophile to be used as an “ideal” control for comparison.
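As a minimal sketch of how replicate experiments and the non-alkynylated control might be combined downstream of enrichment (our own illustrative filtering, not the published G-REX analysis pipeline; protein names, intensities, and the threshold are hypothetical):

```python
# Minimal sketch (illustrative only; not the published G-REX pipeline):
# keep streptavidin-enriched proteins that are consistently enriched over the
# non-alkynylated-electrophile control across >= 3 replicates.
import math

# Hypothetical per-replicate intensities: {protein: ([probe reps], [control reps])}
data = {
    "UBE2V2": ([8.2e6, 7.9e6, 9.1e6], [1.1e6, 0.9e6, 1.3e6]),
    "GAPDH":  ([2.1e6, 2.3e6, 2.0e6], [1.9e6, 2.2e6, 2.1e6]),
}

MIN_LOG2_FC = 1.0   # >= 2-fold enrichment over control (illustrative threshold)

def keep(probe, control):
    # Require every replicate pair to clear the enrichment threshold.
    return all(math.log2(p / c) >= MIN_LOG2_FC for p, c in zip(probe, control))

hits = [name for name, (probe, ctrl) in data.items() if keep(probe, ctrl)]
print("Candidate sensors for follow-up (e.g., by T-REX):", hits)
# UBE2V2 passes (log2 fold-change ~3 in every replicate); GAPDH does not.
```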


Fatty acid synthesis

When acetyl-CoA is abundant, the liver and adipose tissue synthesize fatty acids. The synthesis pathway is quite similar to the reverse of β-oxidation, but presents several important differences:

  • it takes place in the cytoplasm, rather than in the mitochondrion;
  • it uses NADPH as the electron donor;
  • the acyl carrier group is ACP (acyl carrier protein), instead of coenzyme A.

Fatty acid synthesis uses acetyl-CoA as its main substrate. However, since the process is quite endergonic, acetyl-CoA must first be activated, which happens through carboxylation to malonyl-CoA. Like other carboxylases (e.g., those acting on pyruvate or propionyl-CoA), acetyl-CoA carboxylase uses biotin as a prosthetic group.
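For reference, the activation step catalyzed by acetyl-CoA carboxylase can be summarized as follows (protons and water are omitted, since their bookkeeping depends on the ionization states chosen):

Acetyl-CoA + HCO3- + ATP ---> Malonyl-CoA + ADP + Pi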

Malonyl-CoA is afterwards transferred to the acyl carrier protein (ACP), yielding malonyl-ACP, which will condense with acetyl-ACP (synthesized likewise from acetyl-CoA).

In animals, every step of palmitic acid (the 16-carbon saturated fatty acid) synthesis is catalyzed by fatty acid synthase, a very large enzyme with multiple enzymatic activities. The acetoacetyl-ACP produced in the first (condensation) reaction is then transformed into butyryl-ACP (the 4-carbon acyl-ACP). The reaction sequence is the reverse of that in β-oxidation, i.e., reduction, dehydration, and a second reduction.

Butyryl-ACP can afterwards condense with another malonyl-ACP molecule. After seven rounds of this cycle, palmitoyl-ACP is produced. Palmitoyl-ACP hydrolysis yields palmitic acid. The stoichiometry of palmitic acid synthesis is therefore:

Acetyl-CoA + 7 Malonyl-CoA + 14 NADPH + 14 H+ ---> palmitic acid + 7 CO2 + 14 NADP+ + 8 CoA + 6 H2O
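Combining this with seven rounds of the acetyl-CoA carboxylase reaction shown above, and treating the bicarbonate consumed in the carboxylations and the CO2 released in the condensations as cancelling (the usual textbook simplification for protons and water), gives a rough overall balance starting from acetyl-CoA alone:

8 Acetyl-CoA + 7 ATP + 14 NADPH + 14 H+ ---> palmitic acid + 14 NADP+ + 8 CoA + 6 H2O + 7 ADP + 7 Pi

In other words, the cell pays about 7 ATP (for the carboxylations) and 14 NADPH (for the reductions) per molecule of palmitic acid.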

Longer (or unsaturated) fatty acids are produced from palmitic acid by elongases and desaturases.

Fatty acid synthesis happens in the cytoplasm, but acetyl-CoA is produced in the mitochondrion. Therefore acetyl-CoA must cross the inner mitochondrial membrane before it can be used in fatty acid synthesis. This is performed by the citrate shuttle: citrate is formed in the mitochondrion by condensing acetyl-CoA with oxaloacetate and crosses the membrane into the cytoplasm, where it is cleaved by citrate lyase into acetyl-CoA and oxaloacetate, which, upon reduction to malate, can return to the mitochondrial matrix. Malate can also be used to produce part of the NADPH needed for fatty acid synthesis, through the action of the malic enzyme. The remainder of the NADPH needed for fatty acid synthesis must be produced by the pentose phosphate pathway.
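To put rough numbers on this (our illustration, assuming every exported acetyl-CoA is accompanied by one oxaloacetate that returns as pyruvate via malate and the malic enzyme): shuttling the 8 acetyl-CoA needed for one palmitic acid can supply about 8 NADPH through the malic enzyme, so roughly 6 of the 14 NADPH in the stoichiometry above must come from the pentose phosphate pathway.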

