Entropy, in the non-physical sense

Disclaimer: Before you guys go fleeing for the hills because I used the word “entropy”, don’t worry, because this isn’t going to be (entirely) a scientific piece. These are just some thoughts I have about the literal versus metaphorical meanings of entropy, and I’ll try to keep it simple.

Here’s your thought for the day: does entropy apply to non-physical objects as well?

The Second Law of Thermodynamics states:

The entropy of an isolated system never decreases, because isolated systems spontaneously evolve towards thermodynamic equilibrium—the state of maximum entropy.

Entropy is a measure of the disorder of a system. Put simply, the natural state of the universe is chaos, and any system, no matter how organized and carefully constructed, will disintegrate over time due to entropy. These systems can be anything: human bodies, buildings, large machines…you get the point.

Viewing everything from this perspective, I began to wonder whether the laws of entropy apply to non-physical things as well. First, let’s think about memories. In the literal sense, a memory is a series of electrical impulses in our brain with accompanying neurotransmitters, among other things; in the metaphorical sense, the sum of these impulses and neurotransmitters creates the experiences that shape the people we are.

Do memories degenerate into nothingness? Technically speaking, our brains continue to develop until approximately the age of 25, yet during that time frame we lose memories through means that can’t necessarily be explained by physical causes. Some scientists would argue that the memories are not gone – that we have simply forgotten the pathways needed to recall them; this is a possibility, and there has been some evidence in its favor. However, what if this is simply non-physical entropy at work? In theory, if we created a perfect brain that was fully mature by age 5 and did not prune or begin to degrade until we were in our 70s or 80s, would the memories still fade? Is an idea resistant to entropy as well?

Similarly, what about our relationships? If we began to live for centuries, the universe’s tendency toward chaos and equilibrium dictates that our friendships and relationships would not last that long. If evolution made it so that we were no longer genetically related to our family, would the bonds of family ties eventually go by the wayside? Would a happily married couple, even after decades of perfect marriage, eventually succumb to entropy and have their relationship disintegrate? The larger question here is: is human nature resistant to entropy?

Obviously, little, if any, of this is feasible, and it’s nearly impossible to arrive at a logical answer to any of these questions, but I think it makes for an interesting thought experiment. What are some of your thoughts and insights on this?


4 thoughts on “Entropy, in the non-physical sense”

  1. Speculation and ill-ordered wanderings follow. You have been warned.

    As a hard naturalist, the issue of whether a memory is anything other than a physical entity is pretty simple: It is not. To elaborate, a memory could be a pattern of synaptic firings within the brain, the words in a memoir, the sound waves produced by playing a recording, etc. I will waffle a little for the sake of sanity and note that one could suppose that information and its physical storage format are distinct. Personally I don’t, but I think this won’t unduly impact the point I’m trying to make.

    It may also be helpful to quickly define the concept of information entropy, as I’ll use it later. Claude Shannon (unreasonably brilliant bugger that he was) developed it to represent how much you could theoretically compress some communication. In words, the Shannon (information) entropy of a source is the expected value of the negative log-probability of its symbols. More intuitively, one can think of it as a measure of how unpredictable a signal is. Thus the information entropy increases as you’re less able to pin down what bits your signal contains.
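    As a concrete illustration of that definition, here’s a minimal sketch (the function name `shannon_entropy` is mine, not anything from the post) that estimates the entropy of a byte string from its empirical symbol frequencies:

    ```python
    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Estimate Shannon entropy in bits per byte: H = -sum(p * log2(p))."""
        if not data:
            return 0.0
        n = len(data)
        counts = Counter(data)
        # Sum over the empirical probability of each distinct byte value.
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # A perfectly repetitive signal is fully predictable: entropy 0.
    print(shannon_entropy(b"aaaaaaaa"))        # 0.0
    # All 256 byte values, equally likely: the maximum of 8 bits per byte.
    print(shannon_entropy(bytes(range(256))))  # 8.0
    ```

    The repetitive signal compresses to almost nothing, while the uniform one cannot be compressed at all, which is exactly the ordered-versus-disordered intuition above.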

    To answer your specific question, it sounds like what you’ve proposed in the example of a ‘perfect brain’ is essentially a hard drive, at least where memory is concerned (it could be any other form of information storage thing, but I like hard drives). Advancing the analogy, a hard drive does eventually lose its storage integrity as errors accumulate. The questions for the things stored on it (read: memories) then become: “Can the drive be fixed, can the information be recovered, and is there another thing to transfer the information to?”

    Thus, the dynamics of the information entropy are dependent on the physical constraints of the system storing the information. That is, if you only have one hard drive, the information entropy will increase as the hard drive accumulates mechanical wear. This would be amplified if you had no way to exert effort to recover or protect your data from the effects of mechanical damage to the drive. By extension, if you had some backup drives handy, you could either store the information redundantly, or transfer it when the first drive starts losing integrity.
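    The backup-drive idea can be sketched in a few lines. This is a toy model, not real drive firmware: each “drive” is a bit list, “mechanical wear” is random bit flips, and recovery is a simple majority vote across three redundant copies:

    ```python
    import random
    from collections import Counter

    def corrupt(bits, p, rng):
        """Flip each bit independently with probability p (mechanical wear)."""
        return [b ^ (rng.random() < p) for b in bits]

    def majority_vote(copies):
        """Recover each bit position by taking the most common value across copies."""
        return [Counter(col).most_common(1)[0][0] for col in zip(*copies)]

    rng = random.Random(0)
    original = [rng.randint(0, 1) for _ in range(1000)]

    # Three independently degrading "drives" holding the same data.
    copies = [corrupt(original, 0.05, rng) for _ in range(3)]
    recovered = majority_vote(copies)

    errors_single = sum(a != b for a, b in zip(original, copies[0]))
    errors_voted = sum(a != b for a, b in zip(original, recovered))
    # A voted bit is wrong only if at least two drives flipped it,
    # so redundancy sharply reduces the error rate of any single drive.
    ```

    With a 5% flip rate per drive, a voted bit fails only when two or more copies agree on the wrong value (probability roughly 0.7%), which is the “exert effort to protect your data” point made above in miniature.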

    Back to ideas, I would argue that the existence of a given idea is dependent on the existence of its physical manifestations (you could say that an idea /is/ its physical manifestations). This means that if there aren’t any ways to store an idea, it will effectively cease to exist. Similarly, if that storage is bad at accuracy (read: humans), then the uncertainty over what the idea really was will increase. If this process reaches a point where the original idea cannot be reconstructed, then the idea is dead, long live the idea. So to speak.

    Now then, relationships. Mean old sociobiologists might say that human nature is the product of entropy, insofar as humans are finite creatures generated by evolutionary processes. Another (slightly trivializing) way of putting this is that once you’ve procreated and ensured the success of your offspring, you and the relationship in which you passed your genes don’t matter anymore. Break up, get hit by a bus, whatever.

    But that’s them (or my straw-man caricature. Please don’t hate me sociobiologists, I really do appreciate what you’re doing.). My own take is that the duration of successful human relationships is a function of the coherence of a match to a person’s desired properties (as well as an implicit function of our evolutionary programming. There.). More specifically, I think of this in two parts: Are they what you want now? Will they continue to be valuable? Not entirely unlike buying a new computer, as it happens. The first thing is fairly easy to figure out, and we’ve got plenty of nice heuristics to help us along. The trouble with the second is that if your species lives a long time and also undergoes cognitive changes over that time, it becomes exceptionally hard to evaluate the long term value of another person. Will he like tuna salad in 30 years? Will I care whether he likes tuna salad in 30 years? And so on. I’ll come back to this in a moment.

    Of course, analogous to the physical examples, human interactions also require work to continue to function. Losing contact with a friend/partner/family member tends to reduce one’s ability to maintain a relationship with them. I find that this phenomenon nicely illustrates the interaction of uncertainty/change and cognitive finiteness, because the loss of contact not only allows one to begin to forget the other person but also renders direct observation impossible. In the absence of new data, one is forced to rely on an extrapolation of what the other might now be like, based only on one’s previous history with them. I consider it highly unlikely that even with exceptional memory and fantastic computational ability one could actually make that prediction successfully, though they would certainly help.

    It’s worth noting that I’m not exceptionally optimistic about the long term success of human interactions, but neither am I fatalistic. It is certainly possible that two people can diverge after a long duration of successful interaction due to the accumulation of unaccounted changes. I tend to think this is less likely if adequate work has been expended collecting and processing data about the other person.

    TLDR: Everything accumulates uncertainty; It takes work to reduce uncertainty; Too much accumulated uncertainty breaks things.
