2nd law ... and self-organization | from Howard A. Landman | 29 Sep 2007
12:26 AM: Hey Brig, Your page on the second law of thermodynamics (http://www.panspermia.org/seconlaw.htm) is clearly written and has lots of useful references. However, I believe you've gotten it somewhat wrong, and that it's trivial to demonstrate as much.
If there were no link between heat entropy and configurational entropy, then endothermic reactions could not happen. Yet they do: consider ammonium chloride dissolving in water, for example. The heat entropy goes down, but the configurational entropy goes up more. The reaction proceeds in that direction.
The two forms of entropy (which you say are completely unrelated) are thus physically interconvertible and equivalent, just like different forms of energy. Q.E.D.
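To put rough numbers on the ammonium chloride example, here is a quick sign check (a sketch only; the figures below are commonly cited approximate values for dissolution at room temperature, not precise data):

T = 298.15                    # kelvin, about room temperature
dH_solution = +14.8e3         # J/mol absorbed; endothermic -- approximate value
dS_config   = +75.0           # J/(mol*K) gained by mixing -- approximate value

dS_heat  = -dH_solution / T   # "heat" entropy lost by the surroundings
dS_total = dS_config + dS_heat

print("heat entropy change:  %+.1f J/(mol*K)" % dS_heat)    # negative
print("total entropy change: %+.1f J/(mol*K)" % dS_total)   # positive, so dissolution proceeds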
Boltzmann's constant is just a conversion factor between dimensionless logical entropy and entropy with the usual physical dimensions. Many creationists (e.g. Dr. Gitt) deny this, but it's really quite straightforward. This conversion was already understood by Szilard in his analysis of Maxwell's Demon (although other aspects of that analysis have been superseded by Landauer et al.).
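To make the conversion explicit, a minimal sketch:

import math

k_B = 1.380649e-23            # J/K, Boltzmann's constant
bits = 1.0                    # one bit of dimensionless (logical) entropy
nats = bits * math.log(2)     # the same quantity in natural-log units
S_physical = k_B * nats       # the same quantity with thermodynamic dimensions

print("1 bit = %.4f nats = %.3e J/K" % (nats, S_physical))   # about 9.57e-24 J/K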
(By the way, engineers often convert problems to dimensionless entropy because it allows them to temporarily ignore physical dimensions and simplify the analysis. If you do a literature search you will find many such papers ... I know of over 50. It's a pretty standard technique these days ... textbook stuff.)...Howard A. Landman
P.S. On the topic of self-organization, in addition to of course Prigogine, I would highly recommend Manfred Eigen's "Selforganization of Matter and the Evolution of Biological Macromolecules", Naturwissenschaften 1971. It's highly mathematical but quite lovely and, for me at least, very convincing. Prigogine opened the door and let us peek outside, but Eigen takes us a few steps down the road. ...Howard Landman
01:28 AM: Thanks for your thoughtful comments. Configurational entropy is a term I saw in Charles B. Thaxton, Walter L. Bradley and Roger L. Olsen's book. I do not use the term. My term is Logical Entropy. I discuss the difference between organization, which would decrease logical entropy, and order, which characterizes crystals. BTW, I mention Eigen's scheme on my "Neodarwinism" page. I met him in 1996 and asked him whether any computer model had been shown to increase organization. None. ...Brig
11:32 AM: That's very odd. There are results from computer models in the 1971 paper. I'm looking at them right now.
12:19 PM: In my opinion, which I have canvassed widely, the computer models all quickly reach a nearby limit. Basically, all they can do is what they're told to do. I have lots about this on my website. Please check. If you have a counterexample, I'm interested. In fact, I am trying to offer a prize for the project. To my surprise, there's not much interest!
9/30/2007, 01:34 PM: Well, the problem is a lot more subtle and complicated than most people think.
For example, getting the genome separated from the phenome seems to be enormously important. This was first really driven home to me by F. Gruau's work on "Cellular Encoding" of neural networks. Standard Genetic Algorithms and Genetic Programming run into limits quite quickly when trying to solve, e.g., parity problems of increasing size. This is partly because the genome (the data structure which evolves) is the same as the phenome (the program which solves the problem). By setting up an environment where these were separated - Gruau uses the genome as a program to *generate* a neural net which solves the problem - he was able not only to solve problems of any size efficiently, but even to evolve a "generic" solution which takes an input parameter N and generates a network that solves the problem of size N, for any N.
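A toy contrast, just to illustrate the point (this is not Gruau's cellular encoding, and the XOR-chain "phenome" below is my own simplification): a direct genome for N-input parity has to encode something the size of the truth table, while a generative genome is a short recipe that builds a working solver for any N.

from functools import reduce
from itertools import product
from operator import xor

# Direct encoding: the genome IS the phenome -- a full truth table,
# which grows as 2**N and must be re-evolved for each problem size.
def direct_genome(n):
    return {bits: reduce(xor, bits) for bits in product((0, 1), repeat=n)}

# Generative encoding: the genome is a short construction rule
# ("chain XOR gates") whose generated phenome works for ANY n.
def generative_phenome(bits):
    acc = 0
    for b in bits:
        acc ^= b
    return acc

n = 10
print("direct genome size:", len(direct_genome(n)), "truth-table entries")          # 1024
print("generative answer :", generative_phenome((1, 0, 1, 1, 0, 1, 0, 0, 1, 1)))    # parity of 6 ones = 0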
This has relevance to the "RNA World" scenario, where the catalytic activity is directly encoded in the genome. Evolution within that framework would probably have been very restricted at most times, with little opportunity for radical innovation. My sense is that it would have been similar to Eigen's notion that a single species would dominate for a long time, until a better variety appeared, at which time it would rapidly take over. If you will, a kind of "punctuated equilibrium" model at the molecular level. The system basically sits around hanging on until there is a lucky (for the new species, unlucky for the old one) break. "Hill climbing" where small mutations cause incremental improvements can and will occur readily in that model, but jumping from one hill to another is much harder. The standard "big" changes like crossover only help a little.
Once the genome and phenome are separated, however, all kinds of large changes get much easier. A single mutation in a homeobox-style control gene could, for example, make the difference between a squid with a conical shell and a squid with a nautilus-like spiral shell. This may help explain the Cambrian explosion; some new level of control structure in the genome could have led to many new varieties of body shape. The genetic difference between, say, 6 legs and 8 legs can be very small in that kind of environment.
And genome-vs-phenome is only one of a half-dozen such critical issues. If you get any of them wrong, you will find yourself evolving in a box. (Of course, if your desired solution is inside the box, this may not be a bad thing. But it does make open-ended evolution unlikely.)
Anyway, if you have a precise mathematical definition of what you mean by "organization", I'd be happy to take a look at it and see if I think it's already been demonstrated, or could readily be demonstrated, in a computer model. We have much faster computers now than in 1971. ...Howard
03:53 PM: No I do not have a precise mathematical definition of what I mean by "organization." The definition I have attempted hinges on being able to quantify "meaning," which I am not able to do. But why doesn't the burden of measuring evolutionary progress fall on those who are sure they have observed it?
10/3/2007, 10:36 PM: "Meaning" is of course difficult - some would say impossible. Shannon stayed well away from it.
One useful approach is to consider the receiver as a physical system with states. A message which does not cause it to change state has no meaning (to it). A message which causes it to change state has meaning. The degree of meaning can perhaps be quantified by the unlikeliness of the end state (relative to the probability distribution over all possible end states of all possible messages). This has the advantage that it is easy to incorporate probabilistic behavior by the sender or receiver.
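As a sketch of how that could be computed (the receiver, its states, and the transition table below are invented purely for illustration):

import math
from collections import Counter

transitions = {                      # (state, message) -> next state
    ("idle", "noise"): "idle",
    ("idle", "ping"):  "alert",
    ("idle", "fire"):  "alarm",
}
messages = ["noise", "ping", "fire"]

end_states = Counter(transitions[("idle", m)] for m in messages)
total = sum(end_states.values())

for m in messages:
    end = transitions[("idle", m)]
    if end == "idle":                              # no state change -> no meaning
        meaning = 0.0
    else:                                          # surprisal of the end state
        meaning = -math.log2(end_states[end] / total)
    print("%-7s -> %-6s meaning ~ %.2f bits" % (m, end, meaning))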
[Brig Wrote] But why doesn't the burden of measuring evolutionary progress fall on those who are sure they have observed it?
Well, when someone defines a measure of information and shows increase according to that measure (as has been done by several different measures, methods, and people, including Eigen), someone else saying "Oh yeah? That's not what *I* mean by information!" has almost no semantic content unless they go on to define what they *do* mean by information.
For a particularly horrible example, see "In The Beginning Was Information" by W. Gitt. In the entire book, he never exactly defines what the hell he is talking about, never bothers to prove any of around 30 so-called "theorems", makes assertions that are not just false but actually inherently self-contradictory, and ends up concluding that Jesus (not God generically, but Jesus specifically and personally) created all the information in DNA. Shannon's 1948 theorem proves that any measure of information which satisfies his 3 assumptions MUST have the mathematical form of entropy. Gitt claims his information is different, which means it must violate one of Shannon's 3 assumptions; but this never occurs to Gitt, and he doesn't ever ask or discuss which assumption might be violated.
For a specific example of information gain, consider gene duplication. Most creationists would argue that this creates no new information. But if we look at algorithmic (Chaitin) complexity, that's not quite true. We may have gone from a minimal description of
a = 'AGTACC ... TATCGA' ;
to
a = 'AGTACC ... TATCGA' ;
b = a ;
but there is in fact a slight increase in minimal description length. The delta is not zero. And further (now independent) mutations of the two copies will almost certainly increase it further. Maybe "diversity" is a less loaded term than "information" for this. Or "richness". But anyway, it's quite clear that physical processes can increase that in multiple ways. So if you're arguing against evolution on information-theoretic grounds, you really HAVE TO define some different form and then try to argue that it can't increase, because the notion that information IN GENERAL cannot increase is simply bogus. You have to be more specific than that.
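For a rough, concrete version of this, one can use compressed length as a crude stand-in for algorithmic complexity (the sequence below is random and invented; the exact numbers don't matter, only the direction of change):

import random
import zlib

random.seed(1)
gene = "".join(random.choice("ACGT") for _ in range(2000))

def descr_len(s):                       # compressed length as a complexity proxy
    return len(zlib.compress(s.encode()))

single     = gene
duplicated = gene + gene                # gene duplication: "b = a"
mutated    = list(duplicated)
for i in random.sample(range(len(mutated)), 200):   # independent drift of the two copies
    mutated[i] = random.choice("ACGT")
mutated = "".join(mutated)

print("single copy      :", descr_len(single))
print("after duplication:", descr_len(duplicated))  # slightly larger: the delta is not zero
print("after divergence :", descr_len(mutated))     # larger again as the copies diverge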
This is, I think, what the "irreducible complexity" folks are trying to do. I believe they have failed so far and will continue to do so, but I also believe they're at least attacking the problem in a semi-reasonable way that, if they somehow succeeded, would actually mean something.
There are many macroscopic self-organizing non-living processes. Bénard convection cells in a fluid heated from below. Clouds. Color bands in the Belousov-Zhabotinsky reaction. If you want to argue that organization can't happen naturally, you have to explain why those examples (and thousands of others) don't count. I don't see how to do this without defining terms very carefully.
10/4/2007, 08:37 AM: Howard -- your response is quite informative.... Also, for comments on gene duplication and information increase, see "Macroevolutionary Progress Redefined: Can It Happen Without Gene Transfer?" at http://www.panspermia.org/harvardprep.htm ...Perhaps more later.
10/6/2007, 07:33 PM: [Brig wrote] ...My term is Logical Entropy. I discuss the difference between organization, which would decrease logical entropy, and order, which characterizes crystals.
Well, yes, but as Szilard (1929) and others have noted, configurational and logical entropy are identical except for a multiplicative constant. So your claim that logical entropy is separate from thermodynamic entropy still fails.
Also, all the problems with number of partitions etc. disappear when you consider only the entropy *difference* between starting and ending states; this is the same for any choice of partitions. Thus the partitioning corresponds to a choice of what reference point you decide to call "zero entropy", and nothing more. "dS" is independent of all that.
Finally, it is clear from chaos theory that some enormously complicated structures can be specified in a handful of characters. The difference between the organization of the Mandelbrot set and that of a simple circular disc is only a few bytes. And it is obvious from the success of L-systems, and other ways of modeling the fractal shape of plants, that living things exploit this to the hilt. The morphological rules that control the development of, say, a fern, are probably expressible in no more than a few hundred bits. Looking for this "needle" of organization in the "haystack" of a complete genome measured in gigabits is bound to be difficult. That is, I expect the thing you call "organization" to be a very small portion of the genome, and hard to isolate from the rest. ...Howard
P.S. If you think organisms are getting *less* complex over time, how do you explain the Cambrian Explosion?
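To make the L-system point above concrete, here is a small sketch using the standard textbook "fractal plant" rules (an illustration of how compactly branching structure can be specified, not a claim about how real fern genomes work):

rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}   # the classic "fractal plant" L-system
axiom = "X"

def expand(s, generations):
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

plant = expand(axiom, 6)
rule_bits = 8 * sum(len(k) + len(v) for k, v in rules.items())   # crude upper bound

print("rule set is roughly %d bits" % rule_bits)                        # a couple of hundred bits
print("generated structure has %d drawing instructions" % len(plant))  # tens of thousands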
10/7/2007, 10:38 AM: (feeling argumentative today) ...Even though the math may be analogous, using Boltzmann's constant to attach physical units to unphysical information is wrong. Shannon never did it. Feynman wrote: "We measure 'disorder' by the number of ways that the insides can be arranged, so that from the outside it looks the same. The logarithm of that number of ways is the entropy."
If I parcel the space into centimeter cubes, the number of ways will be n. If millimeter cubes, is it n^1000? The difference between an initial state and a final state will be different by a similar factor. The scale matters even for difference only. Right?
[Howard wrote] I expect the thing you call "organization" to be a very small portion of the genome, and hard to isolate from the rest.
I know you are right about this. This is why "organization" is the right term, not complexity. Your argument seems to be -- Short algorithms can code for complex patterns and shapes, so there is no reason to doubt that chance can compose the algorithms for highly organized life.
I think this is very superficial and hasty reasoning. Might convince a jury of your peers, but not me. I want to see a closed-system demonstration. So far these fail. That leaves room for doubt.
9 Oct 2007, 13:36:00 -0600 (MDT): [Brig wrote] The scale matters even for difference only. Right?
Wrong. Consider the case where we partition the space into two halves, and we either know a single particle is in the left half, or we don't. The number of possible states is exactly twice as large in the second case as in the first; therefore they differ by one bit of information, or k.ln(2) of entropy. (Specifically, the numbers of states are 1 and 2 respectively, and the entropies are k.ln(1) = 0 and k.ln(2). The delta is k.ln(2).)
Now suppose we divide the space into some huge number 2N of cells, and we either know the particle is in a subset consisting of half (N) of the cells, or we don't. Same physical situation as before, but different representation. The number of possible states is still double in the latter case, therefore the difference is one bit of information, or k.ln(2) of entropy. (Specifically, the numbers of states are N and 2N respectively, and the entropies are k.ln(N) and k.ln(2N); ln(2N) - ln(N) = ln(2N/N) = ln(2).)
The delta is identical in both cases. The change in representation has no effect at all.
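A quick numeric check of that claim, as a sketch:

import math

k = 1.380649e-23   # J/K

for cells_per_half in (1, 10, 1000, 10**6):
    S_known   = k * math.log(cells_per_half)       # particle known to be in the left half
    S_unknown = k * math.log(2 * cells_per_half)   # particle could be anywhere
    ratio = (S_unknown - S_known) / (k * math.log(2))
    print("%8d cells per half: delta = %.6f * k.ln(2)" % (cells_per_half, ratio))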
05:59 PM: Disagree completely. Following Feynman, quoted on my webpage about the 2nd law and referenced (vol. I, section 44-6, The Feynman Lectures on Physics, Richard P. Feynman, Robert B. Leighton and Matthew Sands, Reading, Massachusetts: Addison-Wesley Publishing Company, 1963): Consider 5 white molecules on the left and 5 black molecules on the right. In the example with only 2 volume elements or cells, the number of ways to arrange the white molecules on the left and the black molecules on the right is 1, and the logarithm of the number of ways, the entropy, is zero. (The number of molecules doesn't matter.)
In the example with more than 2 volume elements or cells, the number of molecules does matter. If there are only 10 cells (5 on each side), the 5 white molecules on the left can be grouped all-in-one (5 ways) or 4-in-one plus one loner (5*4=20 ways) or 3-in-one and 2-in-another (5*4=20 ways), or 3-in-one, 2 loners (5*4*3=60 ways), etc. The right side has equally many possibilities, so it's "...etc" squared. The number of ways will be many and the entropy will not be zero.
Feynman assigns entropy to both separated and unseparated states. These clearly are scale-dependent. You have focused on the change in entropy. Even there, I suspect that your example sustains your position only because it considers only one particle. If you doubt it, try your math on the two examples above.
10/11/2007, 12:58 AM: That's probably because you misunderstood my claim. I did not say that the absolute entropy calculated for a given configuration wouldn't be different under different representations - in fact, in my email, it *WAS* different under the different representations. (Go back and look. Zero vs k.ln(N), or k.ln(2) vs k.ln(2N).)
What I claimed was that the difference or delta in entropy, between two different physical configurations, would be the same no matter what the choice of representation (requiring only that the representations all be capable of accurately modeling the configuration).
Given this, it makes sense to view different representations as having different "entropy baselines" or points of "zero" entropy. This corresponds to just adding or subtracting a constant to convert from one representation to another. When you take the delta between 2 configurations, you subtract this constant from itself, leaving zero. It is also worth noting that the "zero entropy" points for different representations are different configurations. They are "apples to oranges".
Curiously, different representations all have the same point of maximum entropy or zero information. So perhaps it actually makes more sense to ask how much information a configuration has, relative to the maximum entropy one. It is this information ("negative logical entropy") which can be converted to work. I'll need to ponder this point a little more, since it was a sudden revelation while writing this email and I haven't worked through all the consequences, but it seems like the maximum-entropy configuration is a more reasonable baseline since it doesn't appear to depend on representation.
10/11/2007, 04:17 PM: Dear Howard -- What I disagree about is your initial contention that scale doesn't matter. Now we seem to have separated the issue into 2 questions. Does it matter for given states (yes); and does it matter for differences between states (still under discussion). I understand that you claim to show, for one particle, that it doesn't matter. I suggested that if you consider multiple particles, it will still matter. Isn't that where we are? Now I have done some math. The entropy *difference* will also be different, as I show on the attached xls spreadsheet. I used a very simple example with 4 particles. With one cell per half, the entropy increase was 2.197225. With two cells per half it was 2.407946.
12 Oct 2007, 17:39:56 -0600 (MDT): Well, first off, your math is wrong. Let's just look at 2 identical particles in 2 partitions. You assume that there are 3 equally likely states, 2-0, 1-1, 0-2. But in fact there are 4 equally likely states, we just can't distinguish 2 of them (if the particles are identical). The probabilities should be 0.25 for the 2-0 and 0-2 states, and 0.5 for the 1-1 state (which has 2 indistinguishable substates). You seem to be assuming that they are all probability 1/3. Make the balls all be different colors and you will get a different answer. There will be 16 configurations, not 9. Your ability or lack of ability to sense color should not change the answer.
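The counting can be checked by brute force (a sketch for the two-particle, two-cell case):

from collections import Counter
from itertools import product

cells = ("L", "R")
microstates = list(product(cells, repeat=2))             # (particle 1, particle 2)
occupancy = Counter(m.count("L") for m in microstates)   # how many particles on the left

print("microstates:", len(microstates))                  # 4, not 3
for left in (2, 1, 0):
    print("%d-%d: probability %.2f" % (left, 2 - left, occupancy[left] / len(microstates)))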
If we go quantum, then it also depends on whether you assume Fermi-Dirac or Bose-Einstein statistics (i.e. on whether the particles are Fermions like Helium 3 atoms or Bosons like Helium 4 atoms). For Fermions, assuming no inverted spins, only the 1-1 state would be legal; the others would violate Pauli exclusion.
10/13/2007, 07:23 PM: I was following Feynman's method literally. The two situations are identical, but I see now that you are correct about how to measure the likelihoods. Thanks for the help. Seriously. (You may recall that I first suggested you do the math!) Please use your method! But it looks obvious that your correction only makes the situation worse for your original contention, right? The entropy difference between the two methods will be even greater than I calculated. Would you mind responding to this point? (Let's leave quantum theory aside for now.)
10/14/2007, 02:29 AM: Nope, it's dead on what I said. See attached spreadsheet for 4 distinguishable particles. [Howard's spreadsheet shows]
2.772589 - 0 = 2.772589
versus
5.545177 - 2.772589 = 2.772589
[same difference]
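The same figures fall out of a few lines of code (a sketch, assuming the spreadsheet counts configurations of 4 distinguishable particles cell by cell):

import math

def entropy(n_cells, n_particles=4):
    return math.log(n_cells ** n_particles)       # dimensionless; drop Boltzmann's k

for cells_per_half in (1, 2):
    S_left = entropy(cells_per_half)              # all 4 particles confined to the left half
    S_any  = entropy(2 * cells_per_half)          # the 4 particles free to be anywhere
    print("%d cell(s) per half: %.6f - %.6f = %.6f" % (cells_per_half, S_any, S_left, S_any - S_left))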
12:41 PM: Feynman didn't specify distinguishable particles, but I agree with your method for calculating probabilities. So you have shown me something: entropy differences can be the same at different parcelling scales. Thank you. BTW, I admire your ability to do the analysis at first without having to count states. I'm not that good. But we are still left with the fact that logical entropy pertaining to single states (not differences) is scale-dependent. So when we attempt to use Boltzmann's formula, S = k ln W, to attach thermodynamic units to logical entropy, what scale do we use? Brig
02:01 PM: Well, I'm not sure this is even a very useful question. The scale question could just as easily be asked as "What do we call the zero point of entropy?". From that point of view, it is clearly just an arbitrary decision. There is no right answer. Maybe a perfect crystal at absolute zero, but that's not a physically realizable system.
On the other hand, if we flip it around, and ask "What is the zero point of information (or negentropy)?", then there is often a natural point of reference. Because negentropy can be used to do work, the probability distribution at which the system is maximally random - too random to do any further work - is unique and (I think) independent of parceling. We might as well call that "zero information".
I.e., in a classical system of point particles, entropy has a maximum but it doesn't have a minimum, because you can always squeeze the particles into a smaller and smaller box. Conversely, there is a minimum (zero) to information but no maximum (you can always measure things to finer and finer precision and arrange them in greater and greater order). The bounds on minimum entropy and maximum information are quantum in nature, not classical.
For many purposes, the reference point is irrelevant. Work done is equivalent to an entropy difference, and as we have seen, differences are independent of the reference. So, for example, in analyzing a Carnot engine we just don't care.
Also note that it makes a difference whether the system is closed (adiabatic) or thermally open (isothermal); in one case the negentropy controls the amount of work which can be done, and in the other case the Gibbs Free Energy. These are different because a system doing work on something outside itself loses energy to that thing, which in the isothermal case can be replaced by outside heat, but in the adiabatic case cannot. So an isothermal system can often do a little more work than an equivalent adiabatic one.
02:11 PM: By the way, don't beat yourself up too much for not getting all this stuff immediately. It's subtle, and even some very good physicists have made mistakes thinking about it.
A couple of years ago, I studied quantum computing a lot, and at one point I thought I had a scheme for communicating faster than light. I cranked through all the bra-ket calculations; it took me 2 weeks to prove that I was wrong. A few months later, I came up with another FTL scheme. It took me 2 days to prove it was wrong. The third scheme I came up with, it took me 2 hours.
So, I was wrong every single time. But I got better at figuring that out. It's progress. Now I believe that FTL quantum communication is impossible. But it's not a naive belief. I've beaten my head against that wall enough times to have a sense of why such schemes in general can't work. This is one way of learning. ...Howard
28 Oct 2007 / 2:32 PM: ...What I was arguing was much more limited: merely, that sometimes very small amounts of information can have enormous and complicated effects on structure and function, and that therefore, we should not necessarily expect complex organismal organization to require large amounts of genetic information to specify it.
That being said, I note that you seem to be falling prey to the common creationist error of assuming evolution is "random". Evolution has four components: reproduction, heritability, change, and selection pressure. The first two are, as far as we know, not random at all. The change part has random components (such as point mutations) but also components that are not random (such as retrovirus infections, cross-species pollination, symbiosis leading to merged organisms-within-organisms (e.g. mitochondria), etc.). Selection, while it may contain elements of luck, is decidedly non-random over the long term. It's much like a long game of poker, where chance can let anyone win one hand or two, but over time a better player will beat a worse player with certainty approaching one.
We know that the immune system has a way of rapidly generating new DNA coding for new proteins, which are then tested for their ability to bind to antigens. Lymphocytes which can't bind commit suicide; those which can bind multiply. For an extreme view of non-random evolution, there is a theory that occasionally, maybe once per species per million years or so, some of this DNA somehow gets fused back into a germ cell and becomes part of the genome. So this would view life as conducting its own little biochemical experiments and, in effect, directing its own evolution by keeping the most useful results. This has a distinctly Lamarckian flavor, and is quite outside Darwin's original paradigm; nonetheless there is some evidence that it may have occurred, not just once, but many times.
"The strangest thing about the theory of evolution is that so many people think they understand it." But it was never a requirement on actual evolution that it, or its products, be easily understandable; only that they work. Evolution is a lot wilder, messier, and weirder than most people think.
It is quite common for creationists to prove that pure random mutations (and nothing else) could not create life-as-we-know-it in a reasonable timeframe, and to claim that they have disproved evolution. But their models don't contain anything corresponding to evolutionary pressure, and sometimes not even reproduction or heritability. This is like putting a marble on flat ground, noting that it doesn't move, and claiming to have proved that it could never roll downhill. It's pathetically wrong. Anyway, I would never say that pure "chance" led to the creation of life. Chance + some kind of replication or heritability + some kind of selection pressure + a source of free energy, perhaps. Even with all those, there are pitfalls to be avoided. But there was a lot of time and a lot of molecules and a lot of energy, and it only had to happen once.
We will probably never have the answer to the historical question of how life actually began. The direct evidence was wiped out billions of years ago. The best we can hope for is to refine our understanding of how life COULD HAVE begun. On that front, great progress continues to be made, but the ideas are still a little fragmentary and disconnected. Despite the claims of some, a complete narrative still eludes us. Much work remains. Howard
29 Oct 2007 / 09:13 AM: Closed-system demonstrations of Darwinian evolution do not sustain its strongest claims. For me this failure is sufficient reason to doubt those claims. I realize that many people are willing to believe the claims anyway, but I need better evidence. I doubt that re-phrasing what we already know will make me see the evidence differently. However, to respond to some of your points: Your specific examples of non-random changes "(such as retrovirus infections, cross-species pollination, symbiosis leading to merged organisms-within-organisms (e.g. mitochondria), etc.)" are all transfer mechanisms. Darwinism must account also for the original composition of the genetic programs that come with the transfers. That process, by my reading, does depend on chance.
The vertebrate immune system is good at a process analogous to safe-cracking. That process does not compose new genetic programs. BTW, the key genetic components of our immune system were apparently acquired by a transfer event.
But I would not like to continue to argue like this ("you seem to be falling prey to the common creationist error..."). I want to see *demonstrations*. For more about this challenge you could see [In Real or Artificial Life, Is Evolutionary Progress in a Closed System Possible?] and the 3 "Next" pages. The computer models (all of which fail so far) do contain "evolutionary pressure, and ...reproduction or heritability." Feeling grumpy again! Best regards. Brig
31 Oct 2007 / 12:36:27 -0600: [Brig wrote] Darwinism must account also for the original composition of the genetic programs that come with the transfers. That process, by my reading, does depend on chance.
Yes, but it is critical to distinguish "depends (at least in part) on chance" from "is completely random and accidental". Life and evolution and poker all depend, at least partly, on chance. This does not automatically make them random or directionless. (Do Islamic creationists have an easier time with this idea? "Fate" and "fortune" play a larger role in the Muslim worldview than in the Christian one. It is no coincidence that dice games like backgammon arose in that culture.)
I dealt with the limitations of some evolutionary computation models at length in a previous email, and what is needed to get around them, so there's no reason to rehash that here. I will only point out that failure of one model does not imply that no successful model exists. I would like to point out, however, that a significant fraction of the human genome consists of transposons, and that many mutations are transpositions which are at least partially under the control of the cell's machinery.
[Brig wrote] The vertebrate immune system is good at a process analogous to safe-cracking. That process does not compose new genetic programs.
I disagree completely. To develop new binding proteins, the immune system effectively runs a parallel evolutionary experiment in combinatorial chemistry; the data storage for this experiment is in the form of DNA (although the "alphabet" is larger chunks than single codons, and the "sentences" are mutated by transposition of the chunks); the generated DNA programs are heritable by the descendants of the surviving cells, which may multiply rapidly. See e.g. http://www.cs.unm.edu/~immsec/html-imm/diversity.html for a quick and shallow overview.
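A toy version of that mutate-and-select loop (not a model of real immunology; the receptor strings, the antigen, and the scoring are all invented for illustration):

import random

random.seed(0)
ALPHABET = "ACGT"
antigen = "".join(random.choice(ALPHABET) for _ in range(30))

def affinity(receptor):                 # "binding": positions matching the antigen
    return sum(a == b for a, b in zip(receptor, antigen))

def mutate(receptor):                   # one random substitution
    i = random.randrange(len(receptor))
    return receptor[:i] + random.choice(ALPHABET) + receptor[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in range(30)) for _ in range(50)]
print("best starting affinity:", max(map(affinity, population)), "of 30")

for generation in range(200):
    population.sort(key=affinity, reverse=True)
    survivors = population[:10]                                         # weak binders die
    population = [mutate(random.choice(survivors)) for _ in range(50)]  # strong binders multiply

print("best evolved affinity :", max(map(affinity, population)), "of 30")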
(So, ironically, if we accept the Intelligent Design argument that the immune system is "irreducibly complex" and must have been designed, this leads us inescapably to the conclusion that the Designer designs systems which use random mutations and selection - i.e. evolution - to achieve their ends.) Howard
31 Oct 2007 / 23:03:47 -0600: [Brig wrote] the simple fact is that no successful model exists. Whaddya got?
All *I* know is that you have some small number of models, none of which succeeds in your eyes. I'd have to spend quite some time looking at your list of models, and your criteria for judging, before knowing whether I agree with that assessment; but my sense is that I probably wouldn't.
[Brig wrote] OK, maybe chance didn't write some of the frontline programs, maybe they were written by uber-programs. In Darwinian theory, what process wrote the uber-programs?
You still seem to be using "chance" as a synonym for "evolution", which it isn't. And it's a mischaracterization of evolutionary computation to say that evolved solutions are "written" by the executive. I wrote a small GP system in Perl once; it was less than 100 lines of code. Some of the programs that evolved under it were significantly larger and more complicated than that. One view is that the environment itself "directs" evolution, which can be seen as a "learning" process. There are even some rough measures of how much of modern genomes is "environmental" data and how much is "generic control" that is independent of that. (Transposons, for example, would be generic, while beta-galactosidase is environmental.) What you call uber-programs are part of the "evolution of evolvability", which kicked in fairly early (certainly no later than the Cambrian). Gene pools with greater ability to evolve seem to have been selected for quite heavily; all large organisms alive today have a big chunk of their genome devoted to this, at least 30% I think.
[Brig wrote] What new programs or features, besides immunity, does this process produce?
Maybe I don't understand your definition of a program or feature. You seem to be saying that a novel protein is neither. What qualifies then?
The Second Law of Thermodynamics is the main referenced CA webpage.
Macroevolutionary Progress Redefined... is another referenced CA webpage.
...Is Evolutionary Progress in a Closed System Possible? is the last-referenced CA webpage.