I’ll be really clear from the beginning. I could not code my way out of a virtual, damp, recycled, organically-grown hemp fiber bag if I was given the rest of time – not just my time but ALL time. Even if you enclosed me in said virtual bag with a stack of coding manuals and an appropriate set of computing gear, I would remain in the sack, alone, damp, and surrounded by pesticide, herbicide, and chemical fertilizer-free hemp forever.
But I have the greatest admiration for what coders have done with their various languages, from ones and zeroes up through the (seemingly) ever-expanding hierarchy of languages. I can’t list them all, nor would it be interesting to do so. But these folks – out there beyond my Firefox browser and WordPress website, out through my monitor, the HDMI cable that connects it to my video card, through a PCI Express high-speed peripheral expansion bus into the motherboard, BIOS, OS, machine language, and whatever else it takes so I can see my typing, and out through my I/O port and LAN line to my cable modem/router and relatively efficient ISP – have coded this whole improbable mess together into a stable, functioning way to entertain myself and, occasionally, some of you, dear readers, and what they have done is nothing short of miraculous. Well, I say miraculous, but it was all a product of human intelligence and our incessant drive to solve problems that many of our fellow creatures never even suspected existed.
Programming – coding – started eking its way into existence in 1801 with Joseph Marie Jacquard’s invention of a loom run by punchcards (fondly remembered to this day by anyone who took upper-level science, engineering, or computer science courses into the ’80s (perhaps beyond, I’m not sure)). Charles Babbage intended to use this punchcard method to create programs for his difference engines and analytical engines, conceived as early as 1821 but never completed during his lifetime. His thinking about the analytical engine was remarkable and to a significant extent foreshadowed computing into the 21st century. Ada Lovelace (or more properly Augusta Ada King-Noel, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852)), aside from being the only legitimate child of George Gordon, Lord Byron – romantic poet, close friend of Percy and Mary Shelley, adventurer, and noted profligate – is credited with using her friend Babbage’s ideas to create the first computer algorithm.
Although not directly related, it is interesting to read about the development of automated models (birds, toys, etc.) and devices like the music box and player piano as these types of creations were proto-computers in a limited sense; they only did what they were programmed to do, rather than perform a range of calculations. But onward!
Starting in 1889, Herman Hollerith experimented with punchcards (Hollerith cards) and paper tape as methods of feeding instructions to instruments he designed. In 1896 he founded the Tabulating Machine Company, which would later become part of International Business Machines (IBM). Decades on, the Atanasoff–Berry Computer (ABC) was designed to solve systems of linear equations, and the Bombe (1939) and Colossus (1943–1945) machines led to valuable breakthroughs in decrypting German communications.
We zoom into the ’50s and the development of FORTRAN, created in 1954 and released to the public in ’57, around the same time COBOL (1959) was developed, drawing heavily on the work of Grace Hopper. One of the first computer games, Spacewar!, was released in 1962. Steve Russell created it in what he estimated to be 200 hours; he never profited from his work, but it was later released as an arcade game (1977).
The internet started taking shape in the mid-’70s, and by 1992 the World Wide Web was released to limited but ever-expanding public use. I think I started using internal email on an HP minicomputer around 1983, but I could be off by a year. My academic advisor bought a TRS-80 from Radio Shack in 1982. It occupied a semi-sanctified place in his office and could be used only by those writing up their dissertations, etc. I bought my first computer in 1984, a skinny Macintosh, for about $2,500; it had a 7.8336 MHz CPU, 128 KB of RAM, and a 64 KB ROM chip. I bought the 400 KB 3.5″ floppy disk drive as a peripheral.
I played a game called Wizardry: Proving Grounds of the Mad Overlord. To play it I had to graph out my progress down hallways by turning right or left (if there was a wall, I graphed a wall, if there wasn’t it was a turn or a room that needed exploring). You didn’t know what was going to happen until a text box popped up and told you to open something, cast a spell, smite something (and what that something was), and that kind of thing. To say it was rudimentary is to do it a kindness, but I was hooked. I just found out you can download various versions of the game to this day and play it if you have an inexplicable nostalgia for doing something the old-fashioned way. I also found some YouTube videos, but they were in Japanese so you can go look at those if you’re interested. As far as I can recall, my game was in black and white.
I didn’t play anything else until Doom and Quake came out in the mid-’90s. Both were kind of terrifying; I introduced Quake to my workgroup (I was the manager) and we would play it on the company network after 5 PM. It felt like good team-building (all of them were better than I was so I got killed a bunch) and a little naughty (we were using company assets for our amusement).
Let’s jump again and visit the early ’00s. The Elder Scrolls III: Morrowind came out for PC. For the time, it was a beautiful and innovative fantasy role-playing game (RPG). You started off at level 1 and went around trying to avoid all creatures and evil non-player characters (NPCs), as virtually all of them would kill you if you didn’t have the armor, weapons, and skills to defend yourself (I use the word “kill” loosely, of course, as you could always reload a saved game and get out there again; annoyingly, I spent a good deal of time forgetting to “save” and losing a bunch of progress due to absent-mindedness). You would go around collecting herbs, flowers, wood – a bunch of stuff – that you could sell to a local merchant to upgrade your armor; all the while you were earning experience and gaining skills, making it more satisfying to wander off the beaten path and encounter bad NPCs who would try their best to revert you to a loaded game. I played Oblivion and Skyrim, both for unhealthy amounts of time. Then Fallout 3 and Fallout: New Vegas. I didn’t enjoy the Fallout games as much, as they are quite bleak in my view, but they were truly incredible examples of modern story-telling at the same time.
In each case, I became increasingly fascinated by what was happening beyond the screen, in the world where lines of programming interact with ones and zeroes and become these astonishing worlds full of beauty: characters that talk to you, environments that go through diurnal/nocturnal shifts, in which weather occurs, in which the grasses on a hill and the flowers in a meadow blow in the wind. And in which little pieces of code sense that you are approaching, get really upset with you, and do everything in their programmed power to make you reload a game. You can almost hear them chortle fiendishly when you die, particularly if you forgot to save your game in the past hour or more of game play (usually they just walk away and go back to whatever they were doing before you interrupted, which may include farm work, millwork, blacksmithery, marching and patrolling, buying stuff from local merchants, talking to each other, and other bits of coding brilliance).
But the folks who code games are up to much more than just making a game. They are also pushing the limits of what current commercially available computing will do and driving further innovations in central processing units (CPUs), graphics processing units (GPUs), operating systems (OS, like Windows 10 or OS X El Capitan), audio processing (like Creative Labs sound cards), etc. They are pushing for better home computing, better console computing (PlayStation, Xbox, Wii U, older models, etc.), more stable server and internet protocol technology, and more secure computing (getting your account hacked after you’ve leveled up and saved a bunch of armor, weapons, gold, etc. is a horrible thing!). They are redefining the limits of technology for everyone in the world; as the gaming platforms increase in power, all other computing increases in power as well. While “power” in computing might cost a bit more, entry-level computers fall in price and deliver that power to increasing portions of the world’s population. The whole human race lurches forward in its abilities to (1) learn computing skills, even if they are limited to writing and calculating applications, and (2) make a larger percentage of the world capable of getting 21st-century employment.
I play a massively multiplayer online role-playing game (MMORPG). I’ve been doing this in my spare time for the last 3.5 years, again with a huge number of hours. About 50,000 players from around the world play this game on any given day (it is hard to pin a number down on this, but even if you put ±20% error brackets around 50k, a whole bunch of people are playing this game, interacting with each other, chatting in text chat within the game, and chatting with/helping each other using a voice communication application). It is visually beautiful – a huge world with many environments, tough “boss” monsters, new play mechanics (gliding, updrafts, bouncing), new weapon and armor classes – and all of it pushes a good computer pretty hard.
There are a bunch of serious jobs for coders around the world as well. I wish, for instance, that the battle between hackers/virus-and-malware-writers and people who are just trying to use computers would get managed in some reliable way. It is unacceptable that personal bank and credit card accounts are hacked into. It is absolutely frightening that there is some probability that electricity/utility grids will be hacked, damaged, and/or crashed. While I personally wish that governments were more transparent in their information-sharing with their various citizens, I also think it is dangerous to have troves of classified documents hacked and shared; my hope is that (1) systems will become more secure (although I don’t know how, short of disconnecting them from the internet) and (2) hacking will drive governments to be better at sharing without the hack.
I don’t know with any certainty how many jobs there will be for coders in the future – and “coders,” of course, is a catch-all term spanning many different languages, so there is no single number of “coders” needed. I do know that there is an increasing push to get more women to learn coding and more young people to start coding at early ages.
edx.org and coursera.org have a bunch of free courses in a variety of languages to suit your particular interests (some charge for certificates, so be aware of that).
There is also sage advice out there about being wise if you’re thinking of a career in coding. This article from TechCrunch summarizes some issues you should know about (the title overstates the author’s argument for the sake of rhetoric, but pay attention to what he says):
But be aware that this is difficult, painstaking work that requires creativity in a language with many hard-and-fast rules. Every word, every punctuation mark, every number has a special and inflexible meaning that, if mistyped or misunderstood, may lead to hours or days of painful debugging (we’re not talking fishing a moth out of the machine this time). Programmers, system administrators, database administrators, server administrators, security administrators, etc. may work long hours. Not every job is the adventure-in-wonderland kind available at Google or the other tech giants; some are just sweaty, long days hunched in a cubicle, staring into a monitor, slashing away at lines of code that may come to seem infinite in length and complexity – and in unintended, perhaps insoluble, consequences. Some of these people have rich lives and many interests; some have normal personalities and live healthy lives. Some live on a steady diet of salty chips, caffeinated soda, and stimulants, grow beyond their belt loops, sport amazing neck beards, and fail to understand humanspeak any longer as they have become extensions of the code in which they live. It’s all up to you whether you become a designer of infinite beauty or an updater of cell phone apps. It’s all up to you whether you learn enough – persistently – to stay employed or whether you fall by the wayside when new hot languages emerge to rule the problem sets.
On the other hand, you’re helping define tomorrow – or might be if you’re good enough.
I bow humbly in all directions so that all coders, whatever their mission, feel the respect I have for them. Keep pushing the limits and making new stuff. I can’t wait to see what you’ve been up to!
P.S. I am not a coder or a computing historian; I apologize for any liberties or oversights (which are many) that I have taken with the contents herein.
We are all (I assume) very comfortable with the tangible, observable facts that surround us. I am sitting in a chair at a desk in front of a computer I assembled a couple of Augusts ago from parts recommended on the www. My desk is cluttered with papers, CDs (some music, some software), a few groupings of office supplies, and some random stuff that I haven’t gathered the courage to toss yet. Oh, and a work glove – I really have no idea what it’s doing here. Beyond the desk, there are a few tables, one for a scanner, one for a printer, one for a reading light next to my recliner (I should call this the Sleepinator™, or perhaps the Napinator™, as I only nap (or “have a kip,” thus the British trademark for the Kipinator™ is born) in it). My cat (her name is Emma) is sleeping on the window seat (a little earlier, she was sleeping in my left armpit as I read in the Napinator™).
A brief paws for a picture of my kitty (it’s a little blurry, but captures her majestic qualities quite well I think; as she spends a lot of time sleeping, this is an “action” shot).
The floor has a nondescript light brown carpet but is covered by a Persian rug. Various electronics lie about with a nice efflorescence of cabling (I prefer LAN lines to WiFi), and too many books in boxes (although tidy boxes, I might add). Beyond the walls and windows, all objects as well, lies the planet at large, with a scattering of trees interspersed liberally with asphalt and concrete, grass and weeds, shrubs and (less obviously) the invisible beds of fungi waiting to fruit a body and exhale a cloud of spores so that more invisible beds of fungi will grow (and let’s not forget their friends, the adventitious bacteria, etc.). There are squirrels and a variety of birds with wonderful voices, a few neighborhood cats and, when accompanied by their obedient masters, a variety of dogs, usually of the small and yappy kind (see majestic cat above). An unnecessary miscellany of automobiles, some small and energy-efficient (relatively speaking), some comically large, supported on wheels that would do a gargantuan earth mover proud, moves around out there, rushing on errands that may or may not be as important as indicated by their speed. And then there is lots of earth and rocks and sky and, eventually, ocean and, down further, mantle and magma and other molten earth essentials, simmering away at 3,000 to 3,500°C (5,432°F to 6,332°F for non-scientists and Americans) and at a pressure of 1,250,000 (1.25 million) times the pressure up here in my writing room.
Above our sky lie other stars, other planets and moons and asteroids and comets and meteors with all of the associated atmospheric heterogeneity imaginable (methane or sulfuric acid or nitrogen or hydrogen sulfide or frozen water or… well, just about anything) and maybe other life forms, other squirrels and cats and dogs and grass and weeds and shrubs and trees and intelligent bipeds (I mean, who among us really knows at this point in our young, relatively unevolved lives; there are, apparently, in excess of 100,000,000,000 (100 billion) galaxies known to date (with the limits of our present instrumentation) and each of those galaxies is estimated to have 100,000,000,000 (100 billion) stars, each with who knows how many planets and moons and asteroid belts and all the rest). There is a ton (by which I mean way more than a ton) of “stuff” around us, very near and extremely far away, and we have some idea of what constitutes it all – molecules (small and large), elements, atoms, electrons, protons, neutrons, subatomic particles, weak and strong attractive forces, electromagnetic particles and waves (energy), gravity, all the subatomic particles you can blast out of nuclear hiding places in the various kinds of accelerators we have designed and built.
But all of it, if gathered into a giant ball in giant and ethereal hands like a ball of dirt, composes about 4% of the substance of the known universe. The rest of the universe is composed of “stuff” called dark matter (26% of the universe) and dark energy (70% of the universe). As what I have just said may be new to your way of thinking (and/or you may have just stopped reading as I may be entirely nuts), this is an excellent time and place to watch the following video by Dr. Patricia Burchat of Stanford University.
Note how completely energized she is by these ideas (I really love to see passionate people talk about their work). Now, when Dr. Burchat and others in her field speak or write about “dark” matter, they are using words in a very imprecise way. They are finding words that are place-markers for the mathematics that they have worked through, math that is perched on the shoulders of other math worked through by other physicists and mathematicians, reaching back to the Greeks. But you need to be a deeply committed practitioner of those disciplines to understand what really underlies the metaphorical “dark matter” and “dark energy.” I am attempting – as Dr. Burchat does – to expand on these insufficient metaphors.
“Dark” matter isn’t dark in color – it’s not black (a color that appears to our eyes and minds when an object has absorbed ALL wavelengths of light in the visible spectrum, which is in turn a very tiny sliver of the overall electromagnetic spectrum), it is not dark in a spiritual or theological sense, it is not dark in the way that Scandinavian “black” metal is dark (that compels me to reach for the “stop” button).
Dark matter is only apparent because of its influence on the fabric of the universe – its effect on the gravitational forces that, by way of Einstein (and Riemann), permeate that blackness up in the sky at night and hold the shiny bits (including our apparently sky-blue bit) in place. The evidence for dark matter is seen in the behavior of galaxies: stars at the edge of a galaxy, if influenced only by the gravity of visible matter, should move more slowly than stars closer to the center. They don’t; stars rotating around the outskirts of a galaxy move at roughly the same rate as stars toward the middle, so there must be matter interacting throughout the galaxy that forces the exterior stars to move at that rate. An oversimplified analogy might be that we do not see air, but we see the effects of wind (but air and winds are composed of atoms of gases and have mass and energy that we understand very well, so this is a poor, earthbound analogy indeed). The effect of dark matter is seen not only in the circulation of outer stars (and their planets, etc.) around the center of the galaxy but in how galaxies cluster together and how the light from individual galaxies smears due to gravitational lensing. This unseeable matter has enormous effects in our universe, but we are still struggling to find a method of “seeing” (a poor word to use here) it. For some stunning computer simulations of how the universe might have evolved in the presence of dark matter and dark energy, watch the “full-size” version of the film at this website (bottom of page).
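To make the “should move more slowly” claim concrete, here is a minimal Python sketch of the Keplerian prediction. The galaxy mass M is an assumed, illustrative value (roughly a hundred billion suns concentrated at the center); the formula v = √(GM/r) is the standard orbital-speed relation for a body circling a central mass.

```python
import math

G = 6.674e-11  # gravitational constant, N·m²/kg²
M = 2.0e41     # assumed visible mass concentrated at the galactic center, kg

# Keplerian prediction: if all the mass sits at the center,
# orbital speed falls off as 1/sqrt(r).
for r_kpc in (5, 10, 20, 40):
    r = r_kpc * 3.086e19  # kiloparsecs to meters
    v = math.sqrt(G * M / r)
    print(f"r = {r_kpc:>2} kpc -> predicted v = {v / 1000:.0f} km/s")
```

Observed rotation curves stay roughly flat instead of falling off like this, and that discrepancy is exactly what dark matter is invoked to explain.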
Now, if the remaining 96% of the “stuff” in the universe were all dark matter, solar systems and galaxies and clusters of galaxies would tend to cluster, and the universe would not seem to be expanding outwards. Instead, we (well, astrophysicists and their ilk) observe a universe that is expanding. Space itself is spreading apart. The hypothesis is that this occurs due to dark energy, the predominant “ingredient” in the universe, one so powerful (in spite of its unseeable nature) that galaxy clusters – and the universe that contains them in a web of gravitational force – are expanding away from each other, the opposite of what we would expect from the more neighborly, clustery behavior of galaxies and their contents.
This is weird suprahuman stuff, stuff beyond touch and beyond our usual intuition, unless one bathes the brain in a nutrient-rich broth of advanced mathematics, physics, chemistry, astronomy, and similar elixirs. The concepts of dark matter and dark energy are elusive to those of us who crawl the earth looking for groceries and the next mortgage payment, but I am extremely (EXTREMELY!) pleased that some of us are paying attention to how this whole amazing thing fits together.
To close, while I was writing this thing I thought about a great Brian Eno song called “Help Me Somebody” from his amazing collaboration with David Byrne “My Life in the Bush of Ghosts.” The song centers on samples of Reverend Paul Morton letting his congregation know what time it is but is fattened up by funk of the most satisfying kind, delivered by Eno, Byrne, John Cooksey (drums) and Steve Scales (congas, other percussion); I dare anyone to stay still while listening to this track.
The “lyric” (i.e. Rev. Morton’s sermon) includes the following, which I will paraphrase:
“It’s so high you can’t get over
It’s so low you can’t get under
It’s so wide you can’t get around”
I obviously dilute Reverend Morton’s intent here, but the song and lyric popped into my mind and seemed to be telling me that this is the nature of the universe – so high, so low, so wide. That’s the 96%. We live in the 4%.
As in all of these weighty posts, I encourage whatever readers I have to explore the additional materials. Some of them might make your brains hurt or itch or explode or collapse in on themselves. All of those are good! Do more of the things that make these things happen! There is great happiness available to those that feed their minds!
Every year, just like you, I have a “birth day,” a misnomer as I am not born on that day every year, although I was once. When people ask me why I don’t like to acknowledge my birthday I tell them that time is a continuum. It breezes from one tiny fraction of a second to the next without counting where (when?) it has been or where (when?) it is going. There are no fractions of seconds, of course. We made seconds up and when those were too large, we fractionated them into as many decimal bits as we needed. We made minutes up at some point, perhaps when hours seemed too long or work seemed too slow. We made hours up when the days passed like sap in the wintertime. Days, weeks, months and years were strongly suggested by planetary, lunar and solar phenomena. To our credit, we noticed these patterns and live our lives waiting for them to begin – or end – a hard day, a boring hour-long meeting, a cold winter, a hot-and-muggy summer, the wet season, the dry season, etc. For a nice review, have a look at this.
Typically, though, we don’t think of times much shorter than 0.17 seconds. That is approximately the time it takes to count each of the six beats (or in poetry, “feet”) in “one-Mississippi,” etc. The “one” gets sort of two beats and the “Mississippi” goes in four. If we are keyed into a speed sport, we may split things down to the tenth of a second – I’m not sure I can do this, but I’m relatively certain that people who judge these kinds of events may have a refined sense of one-tenth of a second. Then it’s down to the hundredths of a second and, although all sorts of stopwatches and “photo finish” timers work in that realm, I can’t imagine that the human mind can honestly do much more than watch as the hundredths accumulate into tenths.
There are many time intervals that are extremely difficult for humans to comprehend, though – some very short and some pretty long. At one end of the range, we have a unit developed in physics called Planck time, named after Max Planck, one of the brilliant theoretical physicists of the 20th century. This unit is defined as the amount of time that it takes for light to travel one Planck length in a vacuum. A Planck length (not a piratical plank length) is very short indeed: 1.616199×10⁻³⁵ meters (m), which is about 1×10⁻²⁰ of the diameter of a proton – itself very tiny, coming in somewhere between 0.84×10⁻¹⁵ and 0.87×10⁻¹⁵ m. It is conceived of as the shortest theoretically measurable length, within an order of magnitude (a factor of 10). How much time is a Planck time, then? It is a mind-bendingly brief 5.39116×10⁻⁴⁴ seconds. Let me show you a comparison between numbers:
1/10 second: 0.10 seconds
1/6 second: 0.17 seconds (the “Miss” in “Mississippi,” let’s say)
Now, let’s show a Planck time: 0.0000000000000000000000000000000000000000000539116 seconds
To say that differently, but not necessarily more helpfully, there are about 2×10⁴³ of these Planck times in one second (simply the inverse of 5.39116×10⁻⁴⁴ seconds), which is obviously a huge number (2 followed by 43 zeros). The links for Planck time and length will allow you to explore this matter more thoroughly, but both use the speed of light (c = 3.00×10⁸ m/s), the gravitational constant (G = 6.674×10⁻¹¹ N·m²/kg²), and Planck’s constant (actually, the reduced Planck’s constant, which divides Planck’s constant by 2π), which is 1.054571800(13)×10⁻³⁴ J·s. All this to say something quite simple – Planck time (and length) is derived in a fairly straightforward way from some well-established physical constants, although with some very careful consideration by Dr. Planck. His considerations have held up well; Planck’s constant is part of any useful high school chemistry or physics curriculum.
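For the curious, here is a minimal Python sketch of that derivation, using the constants quoted above (the formula t_P = √(ħG/c⁵) is the standard one; the variable names are mine):

```python
import math

# Physical constants (SI units), as quoted in the text above
c = 2.998e8           # speed of light, m/s
G = 6.674e-11         # gravitational constant, N·m²/kg²
hbar = 1.0545718e-34  # reduced Planck constant, J·s

# Planck time: t_P = sqrt(hbar * G / c^5)
t_planck = math.sqrt(hbar * G / c**5)
print(f"Planck time  ≈ {t_planck:.5e} s")   # ≈ 5.39e-44 s

# Planck length is just the distance light covers in one Planck time
l_planck = c * t_planck
print(f"Planck length ≈ {l_planck:.5e} m")  # ≈ 1.62e-35 m
```

Plugging the constants in reproduces the 5.39×10⁻⁴⁴ s and 1.616×10⁻³⁵ m figures from the text.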
The real takeaway here is that time and action are inextricably linked. For a Planck time to elapse, a Planck length must be traversed by a photon in a vacuum. A photon must start somewhere and, on its way to somewhere else, it must etch a Planck length in space. This linkage is pretty neat, however resolutely transfixed and “motionless” the avid reader may be in their chair. How can I say that? Are we ever still? No.
Consider the amount of time it takes to absorb one photon of the appropriate energy into the electron shell of an atom. The photon is either moving at the speed of light (in a vacuum) or somewhat below that speed if it is traveling through a non-vacuum medium. This modified speed of light is calculated by dividing the speed of light (c) by the index of refraction (n); the higher n is, the slower the light travels. Here’s a table of diminishing speeds of light:
Medium – Index of Refraction (n, typical values)
Light flint glass – ~1.58
Dense flint glass – ~1.66
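As a quick sketch of the c/n arithmetic (the refractive indices below are typical textbook values chosen for illustration, not taken from the original table):

```python
# Speed of light slowed by a medium: v = c / n
c = 2.998e8  # speed of light in vacuum, m/s

# Typical refractive indices (illustrative textbook values)
media = {
    "vacuum": 1.000,
    "water": 1.333,
    "light flint glass": 1.58,
    "dense flint glass": 1.66,
}

for medium, n in media.items():
    v = c / n  # the higher n is, the slower the light
    print(f"{medium:>18}: n = {n:.3f}, v = {v:.3e} m/s")
```

In water, for instance, light slows to about 2.25×10⁸ m/s – still absurdly fast by human standards.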
If the energy of an atom and the energy of a photon – moving at whatever speed after going through whatever medium – are compatible, the photon is absorbed by the atom, with an electron making a corresponding quantum leap. This process takes about 1 femtosecond, or 1×10⁻¹⁵ seconds (0.000000000000001 seconds). There is some infinitesimal distance involved in these transitions, but the distances, if they are meaningful at all, are extremely small and do not add meaningfully to the time it takes a photon to be absorbed.
After that absorption occurs, a new cascade of intra-atomic events occur, each with an associated time, each a tiny bit longer, slower, more human-paced, than the absorption event. I would enumerate them, but instead, I’ll just use a picture, a table and a video for your edification.
The other completely nuts thing to keep in mind is that every single molecule and every single atom in your body is vibrating and rotating – continuously! Every molecule we breathe, eat, digest, and incorporate into our teeming collection of collaborative molecules is doing exactly that same thing. Here is a set of nifty .gif images to help you imagine the critical turmoil going on inside (and around) us all:
These represent the different modes of vibration along covalent bonds. In addition to this motion, there are the rotations of the atoms at the ends of each bond – and these modes of rotation get complicated really quickly, with spin orientations and precessing (this is what a top does when it spins – the wobble is precession) around axes. It’s all really a maddening, continuous mechanism of complexity. Even if all these molecules inside us were cooled to absolute zero, some motion would remain. And all of them are like tiny clocks running at tiny fractions of a second – at an astonishing rate of roughly 10,000,000,000,000 to 100,000,000,000,000 (10¹³ to 10¹⁴) times per second.
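To put those rates back into time units, here is a quick sketch converting vibration frequency to period (the round frequencies are the ones from the text):

```python
# Molecular bonds vibrate roughly 1e13 to 1e14 times per second.
# The period (time per vibration) is just the reciprocal of the frequency.
for freq_hz in (1e13, 1e14):
    period_s = 1.0 / freq_hz
    period_fs = period_s * 1e15  # convert seconds to femtoseconds
    print(f"{freq_hz:.0e} Hz -> one vibration every {period_fs:.0f} fs")
```

So each vibration takes on the order of 10 to 100 femtoseconds – the same neighborhood as the photon-absorption time above.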
But I am writing about time, not intra-atomic events, and we could all easily be lost inside an atom for the rest of time if caution is abandoned. It is part of the definition of being a chemist – getting lost in the atoms (or at least the molecules). With phosphorescence events taking about a tenth of a second (1×10⁻¹ seconds, or 0.1 s), we are finally back at an interval we can almost comprehend.
Let’s move on.
Human lives are measured in seconds as well. Nine months of gestation is 23,328,000 seconds (give or take); ask any mother and she will be able to vouch for the satisfaction and endlessness of each second. We go to first grade at 6 years, or 189,216,000 seconds, and graduate high school after 567,648,000 seconds. Lives get into a murky middle bit after this and people hit benchmarks at various times, but it all comes down to life expectancy in the end. The people of Monaco, one of the richest countries in the world, have an average life expectancy of 89.52 years, which is 2,823,102,720 seconds – almost 3 billion seconds, people – while the people of Chad, bordered by Nigeria, Niger, Libya, Sudan, the Central African Republic, and Cameroon, have a life expectancy of 49.81 years – 1,570,808,160 seconds – very close to half the average life expectancy of people in the Principality of Monaco, bordered on three sides by France and on the fourth by the Mediterranean, home of casinos, yachts, and the Grand Prix. In the United States, average life expectancy is 79.68 years, or 2,512,788,480 seconds – about 310 million seconds less than the average citizen of Monaco gets; when stated that way, it seems like a huge difference, doesn’t it?
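All of these conversions use a 365-day year (31,536,000 seconds, ignoring leap days); here is a minimal sketch of the arithmetic:

```python
# The convention used throughout: a year is 365 days, no leap days.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def years_to_seconds(years):
    """Convert a span in years to seconds under the 365-day convention."""
    return years * SECONDS_PER_YEAR

# Life expectancies quoted in the text
for place, years in [("Monaco", 89.52), ("Chad", 49.81), ("United States", 79.68)]:
    print(f"{place}: {years_to_seconds(years):,.0f} seconds")
```

Running this reproduces the 2,823,102,720 / 1,570,808,160 / 2,512,788,480 figures above.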
But we’re not done measuring out human life. In the U.S., we count years forward from the start of the common era (there is no year zero – the calendar jumps from 1 B.C. to A.D. 1) and are currently in the year 2016 as I write this. Two thousand and sixteen years is composed of 63,576,576,000 seconds, only about 22.5 Monaco lifespans ago, but 40.5 Chad lifetimes ago (sort of crazy when you consider it that way). But B.C. (or B.C.E., the term used by anthropologists and anyone studying world history rather than just Western European and Middle Eastern history) is just a convenient temporal interrupt in a much longer series of events.
Our species crept into the genome around 200,000 years ago – a time that dwarfs the 2,016 years B.C.E. by two orders of magnitude or roughly 100-fold (100 x). Two hundred thousand years is a whole bunch of seconds – 6,307,200,000,000 seconds, or six trillion three hundred seven billion, two hundred million seconds (the time seems more awesome when typed out as words). But we’re not done yet. Anthropologists have found lots of bones of our ancestors, with our nearest common ancestors with the great apes appearing between 6 and 7 million years ago, 30 to 35-fold more time than for the slow evolution of Homo sapiens, or between 189.2 trillion and 220.8 trillion seconds ago (keep in mind that the 0.2 and 0.8 in those numbers represent 200 billion and 800 billion seconds).
But let’s keep going. The Cretaceous–Tertiary (K–T) extinction occurred around 65 million years ago – 2,049,840,000,000,000 seconds (about 2 quadrillion seconds) ago; current theories favor a huge meteor striking the earth at the northern Yucatan peninsula. But the earth is believed to have coalesced from hot gases and particles of stardust into something like its current orbit around the sun around 4.5 billion years ago; various models move the digit after the “5” around (is it 4.49 or 4.54?), but there is general scientific consensus around the 4.5 billion figure. 4.5 billion years equals 141,912,000,000,000,000 seconds (141.9 quadrillion seconds) ago, and it was not a livable planet at the time.
The universe, on the other hand, is older still (though not quite by another order of magnitude). There are at least five models for its age, but the weighted mean of these models puts the age at 12.94 billion years, thus giving the earth about 8 billion years to coalesce into the nasty, raging bit of heat that cooled to what we know and love now. If you do the dimensional analysis here (as I have done so often above), you get a universe that has been in existence – creating stars and galaxies and solar systems and planets and moons and asteroids, and continuing all of those activities VERY actively right up until today – for 408,075,840,000,000,000 seconds (408.1 quadrillion seconds). The universe has been in existence, plus or minus 2.3 billion years or so (see the link above), for 162,399,598.4 average American lifespans (one hundred sixty-two million three hundred ninety-nine thousand five hundred ninety-eight point four lifetimes).
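That dimensional analysis is one multiplication and one division. A short Python check, using the 12.94-billion-year weighted mean and the 79.68-year U.S. life expectancy from the text:

```python
SECONDS_PER_YEAR = 31_536_000            # 365-day year

universe_s = 12.94e9 * SECONDS_PER_YEAR  # age of the universe in seconds
us_lifespan_s = 79.68 * SECONDS_PER_YEAR # average American lifespan in seconds

print(f"{universe_s:.6e} seconds")                     # ≈ 4.080758e17
print(f"{universe_s / us_lifespan_s:,.1f} lifespans")  # ≈ 162,399,598.4
```

Note that the years cancel in the ratio – the lifespan count is really just 12.94 billion divided by 79.68.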
Why have I taken you through a journey from Planck time to the age of the universe? To suggest two thoughts:
When humans try to imagine events in time, all of us start getting a little foggy about the whole business when it exceeds one of our average lifespans; even then, it is a rare twenty-year-old that can imagine what it means to be forty or sixty or eighty and the eighty-year-old increasingly feels that everything happened “as if it were yesterday.”
While I have divided up time into fractions of seconds at one end of the scale (the Planck time) and quadrillions of seconds at the other, time is not a series of discrete events; it is continuous and seamless. If one divides a Planck time into as many pieces as there are Planck times in a second, each piece is shorter still – on the order of 1×10⁻⁸⁷ seconds. One can keep doing this – infinite divisibility – and never reach an indivisible unit of time; the divisions only yield smaller and smaller fractions, approaching the seamless continuity of time asymptotically.
It is entirely possible that we are at the measurement limits regarding the start of our universe. Our current measurements are “birth-of-universe-dependent,” that is, the phenomena that we measure to determine its age are all related to the birth of this universe, the one in which we are a tiny particle orbiting a tiny sun in a tiny solar system in a huge galaxy, which is one of countless huge galaxies (we keep on finding more galaxies) that comprise the universe as we know it SO FAR. Stephen Hawking currently hypothesizes that our universe is one such event in a multiverse. Consider a near-infinitely dense point somewhere in space-time (a “singularity”). From time to time, the density becomes too great for the singularity to contain it and it “burps” out superfluous matter into space-time, but not in the plane and/or dimension of our universe. Sometimes, these burps are tiny and are reabsorbed by the singularity, but sometimes a new universe of some magnitude buds off and starts expanding. For additional erudition on this idea, please watch the following videos:
This is heady stuff and nearly impossible to understand, except through metaphor and analogy, without the help of advanced mathematics and profound amounts of deep thought (I am a mere chemist and find that I am boggled by these concepts, but I will not deny their allure (p.s. a mere chemist is different from the mythological mer-chemist)).
I will not get into how long this, our, universe is likely to exist. It is an imponderable but is being pondered. Let’s leave the future to those who speculate on those matters (cosmologists and physicists). To conclude, time is a dimension that is infinitely brief (or continuous) and infinitely long (or continuous). Dividing it into human events is convenient, but none of us should pretend that we understand it except by comparing it with events in our own lives. This is not always true; anthropologists, paleontologists, cosmologists, physicists, and geologists live on a timeline that, by nature of their study, makes more sense to them and is relatively unlimited by average lifespans and birthdays. We should be humble when we consider the enormity of what has been observed, and we should consider carefully what is known while allowing that we are not done observing and trying to learn and probably will never finish unless we cease to exist altogether.
How much strength do you have in reserve? How much resilience to difficulty? When does a torrent of potential injury turn into actual harm? Where does resistance collapse into capitulation? Why are you weaker than I am or, conversely, why am I weaker than you? What insult makes us laugh or tips the scales into tears and decompensation?
The lucky among us face few true tests of these dimensions during our lives, but there are few of us who are lucky. It may even be that the lucky among us strive to feel stress just to feel, while the unlucky spend every waking minute trying to diminish the rain of blows life can mete out.
Stress and strain are as diverse a set of measures as any in the human experience. Physical stress, so well borne by athletes, defeats those of us who have no particular capability to run, leap, pull, punch, throw, swat, jump, crunch, dive, dance, twirl or fly. Mental stress can often be borne well by those with no physical ability at all and can bring down the most astonishing Olympian.
By analogy, the following diagram graphs stress against strain. A stress is administered to a system, whether an iron bar or a piece of plastic, a human bone or flesh, a mental grocery list or a math theorem. “Material,” whether synapse, bone or flesh, goes through elastic tests all the time. If the elasticity is exceeded, permanent deformation (or learning, to put it differently and more positively) can occur. However, lurking beyond deformation is permanent injury, fracture, capitulation.
The history of life on this planet – and in the universe at large – is one marked by stresses that have crushed some of us, while strengthening others, have imploded stars and created new galaxies, have taken some species to extinction while allowing others to multiply beyond numbering. We are all on the cusp of unendurable weakness, but we strive on.
Biographical note: I was born in 1953 to people I don’t know and raised by people I wish I knew better. I have an academic background in literature and science and have worked in positions of increasing responsibility for over thirty years in one realm of the healthcare industry. I am interested in many areas of knowledge; literature and science (obviously), but also film, art, many types of music, various episodes in our peculiar, shared, often ignored history, political behavior (rather than politics), various religions. I wish there were more time in every day and more days in every life. I have more books than I know what to do with and keep on adding things to my wishlist that I may never get to read, but it is better to be curious than not, alive than dead.
The hydrologic cycle is a central phenomenon enabling life on Earth. It is complex on a macroscopic and molecular level and functions interactively with every aspect of our biological, geological, and physical world. Its impact on humanity has anthropological, economic, environmental and social implications that are numerous.
(“Water cycle,” n.d.)
Yet it all starts with one of the simplest of all chemical species – the water molecule. Only 3 atoms in composition, 18 daltons in mass, less than 300 picometers (282 trillionths of a meter) in diameter, its complexity is the subject of numerous books and articles. Under the right conditions, it is a solid, a liquid, a gas, an acid, a base, a neutral molecule (although this is rarely true in nature), a ricocheting billiard ball as a gas and/or a component in a complex, flickering lattice of other water molecules in liquid and solid form.
Even with this level of complexity, it is impossible to understand the water component of the hydrological cycle without understanding that water loves to mingle with other molecules. If water encounters a solid ionic compound, like the wide range of salts found in soil, streams, rivers, lakes and oceans, it pries the ions apart and engages them in a three-dimensional ballet of solubility. If it encounters an acid, it becomes the hydronium ion in the process of dissolving the acid; if it encounters a base, it becomes the hydroxide ion in the process of dissolving the base. If it encounters reactive gases, like carbon dioxide or sulfur dioxide or nitrogen oxides, it forms carbonic or sulfuric or nitric acids. If it encounters something that is dry, like the surface of a stone or a clump of clay, it erodes and moves some of it to another location, sometimes near its origin and sometimes far away. If it encounters discrete materials, it breaks them down and mingles with them. If it encounters organic compounds, some of which are non-polar and not attracted to the water molecule, it causes them to form droplets or micelles, which are then swept along by the water. With other organic compounds, such as esters and ketones and alkenes, it reacts with them to produce polar products, which can then react with other organic compounds.
In living cells, water is the elixir in which life happens. If a tree, a cell, a human or a cat encounters water, it is sipped up and used to fortify these water-dependent structures, which collapse and turn to dust without its liquid sustenance. Water carries inorganic and organic ions; it carries phospholipids and amino acids; it carries nucleic acids and sugars; it encourages fatty acids to circle the wagons and create cell walls, across which the cell’s supplies are pumped by active and passive portals that open and close for water and its many friends. It encourages DNA to spiral inwards as the nucleotides bond and the sugar/phosphate backbone prickles outwards into the cell’s aqueous soup. Information could not travel if not for the charged molecules that water helps create and carry. But enough about water the molecule. Let’s consider water, the cycle.
Let’s pretend, for an instant, that water “starts” somewhere and continues through the cycle from this starting point. Let’s pretend it starts as precipitation. Forget for a minute that precipitation starts with clouds and clouds start with evaporation and evaporation occurs because of wind, sun, and atmospheric pressure. Forget for a moment that water precipitates as a solid now and then. Let’s just pretend it rains. What happens when it rains? Droplets of water between 0.02 and 0.25 inches in diameter reach terminal velocity of between 5 and 20 miles per hour and strike whatever is beneath them. Each raindrop is rarely pure water; for rain to occur, the vapor in clouds condenses around “a microscopic particle of smoke, dust or salt” (USA Today). In a fascinating calculation, Bob Swanson, a weather editor with USA Today, provides an estimate of the number of droplets that fall in a storm:
“Assuming an average thunderstorm is 15 miles in diameter. Assuming a circular base of the storm, the area of the storm’s cloud base is about 175 square miles. Now let’s assume that .25 inches of rain falls from the storm. This yields a total volume of rainfall of around 175 billion cubic inches. Now if we assume a spherical raindrop, the volume of an average size drop would be about 1/10,000th of a cubic inch. Dividing the total rainfall by the volume of an average raindrop gives a total number of raindrops around 1,620 trillion.”
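Swanson’s estimate is easy to reproduce. The sketch below follows his stated assumptions (15-mile diameter, circular base, 0.25 inches of rain, 1/10,000 of a cubic inch per drop); small differences in rounding land the count in the same ballpark as his 1,620 trillion:

```python
import math

MILE_IN_INCHES = 5280 * 12  # 63,360 inches per mile

diameter_mi = 15.0
area_mi2 = math.pi * (diameter_mi / 2) ** 2   # ≈ 177 square miles of cloud base
area_in2 = area_mi2 * MILE_IN_INCHES ** 2     # the same area in square inches
rain_volume_in3 = area_in2 * 0.25             # 0.25 inches of rain over that area
drop_volume_in3 = 1.0 / 10_000                # assumed volume of one raindrop

drops = rain_volume_in3 / drop_volume_in3
print(f"{drops:.2e} raindrops")               # on the order of 10^15
```

The unrounded result is a bit higher than Swanson’s figure (he rounded the area down to 175 square miles), but either way: quadrillions of drops from a single storm.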
When one also assumes that each droplet reaches terminal velocity, there is tremendous energy unleashed in a storm. Then think of all the storms that happen and all the energy from all the storms. This is a lot of force dropping out of the sky! When old leaves are struck by raindrops, they are ripped from their homes, becoming compost for new life. If dead things are struck by water, bacteria and molds help decay the creature and turn it back into nutrients, parts of other cycles of nitrogen and carbon and sulfur. If a rock or soil is struck, small amounts are displaced and move away from their source. For evidence of what rain can do, examine the Badlands of South Dakota or the gaping tear known as the Grand Canyon or the alluvial plains of South and North Carolina – created from Appalachian precipitation on mountains that were once five times as high as they are now. Yes, some of this was due to the action of rivers, but the rivers were replenished by rain.
Different sizes of raindrops:
A) Raindrops are not tear-shaped, as most people think.
B) Very small raindrops are almost spherical in shape.
C) Larger raindrops become flattened at the bottom, like that of a hamburger bun, due to air resistance.
D) Large raindrops have a large amount of air resistance, which makes them begin to become unstable.
E) Very large raindrops split into smaller raindrops due to air resistance.
One way that water re-enters the hydrologic cycle is through watersheds, defined as “a land area whose run-off drains into any river, stream, lake or ocean” (USEPA, June 1998, p. 1). Run-off doesn’t only occur on the earth’s surface, though. Of the 332 million cubic miles of water on our planet, 97% of it is salt water and approximately 1.7% of it is groundwater (USGS); only 46% of this is fresh water. This is replenished by seepage into the ground from the various types of precipitation. If we were to dig a perfect hole in the ground, we would find the upper layers a mixture of air and water, but lower layers would become increasingly wet. Eventually, we would reach a level where water occupies all of the space between grains of sand and gravel. This level is called the water table.
(“Water table,” n.d.)
Of course, some of the water courses down streams to rivers and rivers to lakes and lakes to seas and oceans. Some of the water that enters streams and rivers and lakes and oceans weeps out of the ground into these bodies of water, depending on the relative elevation of the water table to the bodies of water in the area. Some of the water in the water table is pumped up for use in homes and factories as well.
The water cycle really gets complex when precipitation falls on and interacts with man-made phenomena, like roads and highways, or human industries like oil refineries and coal-burning power plants and waste pools for cattle and swine and agricultural fields full of pesticides and herbicides and fertilizers, or human by-products like landfills or untreated waste streams from storm drains. When water, this remarkable molecule, plunges to earth and mobilizes the products of human industry, the entire water cycle becomes contaminated in the process. Water takes our waste and pollutes the rivers, lakes and oceans, creating imbalances in nutrient cycles and killing creatures that depend on a balance between water and salts, nutrients and energy to live their normal lives. Water releases volatile organic compounds from human industry and they become part of our atmosphere. Water mixes with the sulfur and nitrogen oxides and precipitates back to earth as strong acids that change the equilibrium state that nature requires for its magic.
Rights for use of the raindrop illustration are granted by Pbroks13 as follows: “I grant anyone the right to use this work for any purpose, without any conditions, unless such conditions are required by law.”
Bell, J.A. (2005). Chemistry: A project of the American Chemical Society. New York: W.H. Freeman and Co.
Flynn, D.J. (ed). (2009). The Nalco water handbook (3rd ed.). New York: McGraw-Hill Co.
Jacobson, M.C., Charlson, R.J., Rodhe, H., Orians, G.H. (2000). Earth system science. San Diego, CA: Academic Press.
Gruver, J. and Luloff, A.E. (2008). Engaging Pennsylvania teachers in watershed education. Journal of Environmental Education, 40(1), 43–54.
Heimlich, J.E., Oberst, M.C., Spitler, L. (1993). Two H’s and an O: A teaching resource packet on water education. Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education.
Lacosta-Gabari, I., Fernández-Manzanal, R., and Sánchez-González, D. (2009). Designing, testing, and validating an attitudinal survey on an environmental topic. Journal of Chemical Education, 86(9), 1099-1103.
Marques, C., Izquierdo, M., Espinet, M. (2006). Multimodal science teachers’ discourse in modeling the water cycle. Science Education, 90, 202–226.
Sträng, M., and Åberg-Bengtsson, L. (2010). “Where do you think water comes from?” Teacher-pupil dialogues about water as an environmental phenomenon. Scandinavian Journal of Educational Research, 54(10), 313-333.
Walker, M., Kremer, A., Schluter, K. (2007). The dirty water challenge. Science and Children, July, 26-29.
Winter, T.C., Harvey, J.W., Franke, O.L., Alley, W.M. (1998). Ground water and surface water: a single resource. Denver, CO: U.S. Geological Survey.
Human beings – we – are storytellers. Each of us, if or when we gain the ability to speak and/or write and/or perhaps even touch, will tell someone a story. We will probably all tell huge numbers of stories, but let’s start with a single story. It may be a completely factual story that is based on carefully collected data and carefully documented observations. It may be a partially fictional story, where an initial fact is supplemented with personal biases or understandings that are not entirely supported by information available. It may be an entirely fictional story, although these stories tend not to resonate with us unless there is some truth to them. Most of us communicate with each other principally in various forms of fiction. There are about 7.4 billion of us, with an estimated 4.3 births and 1.8 deaths occurring per second, and an inestimable number of us will never be educated, will never learn to read, will only know what has been told to them by the people they know. It may be the case that more stories have been forgotten by people who never knew how to write than have ever been written down. But this may be a fiction.
The stories are highly varied; some are simple, some are complicated. Some seem simple and aren’t; some seem complicated and aren’t.
We are inundated by stories in various forms. There are the various types of evening and cable news shows doing their bit to tell tales. There are the dramas on Twitter and Facebook and other platforms. There are old stories in books written long ago and there are novels that are barely stories at all, but seem to garner interest. There are newspapers, but not all objects that look like newspapers share a consistent mission. Some papers exist to trade in baseless speculation. Some provide reasonably clear interpretations of events occurring around the world, along with opinions about those events and what might be done about them all. Some have national, regional, political, racial, financial, and other biases. Some have writers who are allowed their biases whether the paper shares those biases or not. There are the stories we tell each other (or failing that, ourselves) around dinner tables or in restaurants or in the cabs of our trucks or in our cars, in trains and in airplanes. There are stories that are developed for work settings and then the stories that the employees tell, which may or may not diverge from the stories developed for them to tell.
I don’t know all of the stories, nor will I or anyone else know them all. New ones are added every second, although the birth rate of stories is not known or even queried. Some stories are dying or already dead, along with the people who tell them. We take inadequate efforts to preserve those stories.
Ancient stories can be entirely factual, although misunderstandings of observations may confound the facts in complicated ways. Modern stories can be entirely fictional, although an entirely fictional story may be rendered only in a fictional language about fictional “things” and thus may be incomprehensible to any of us attending to it.
I have lived some decades in this world, inhabiting this body and moving every so often from one location to another. My stories are what they are and probably include misunderstandings or misapprehensions of data and observations provided by others. They may include facts that are solid as I know them, but may change as the days pass. I’ve found myself accumulating stories and with that inevitable process has come a desire to share some of them. This platform, a factual location or set of locations on one to several cloud servers, is made up of a language of codes. Ultimately, the language is boiled down to two numbers, a fiction that we created centuries – even millennia – ago to differentiate between nothing and a thing: 1 (one) and 0 (zero). This fiction starts at what became zero and extends to infinity in both directions. It includes, because someone saw a need, the inherently fictional realm of imaginary numbers. But 1 and 0 are all that are used here to display these thoughts to you.
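That reduction to ones and zeroes is not a metaphor. A minimal Python sketch shows how even a single word becomes the bit patterns that are actually stored and transmitted (standard ASCII/UTF-8 encoding for these letters):

```python
# Render each character of a word as the 8-bit binary pattern
# used to store and transmit it.
word = "time"
bits = " ".join(format(byte, "08b") for byte in word.encode("utf-8"))
print(bits)  # 01110100 01101001 01101101 01100101
```

Four letters, thirty-two ones and zeroes – and everything on this page reduces the same way.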
Sight (and Light)
I am intrigued by what we believe. It is clear to me that everything we believe is stored within the confines of our minds. If we see something, we do not know the thing itself, we know only what electromagnetic transmissions – what we blithely call “light” – collide with the structures of our eyes, the cornea, lens, retina, the rod and cone cells, the optic nerve (among numerous other constructs), which transmit some version of that collision to our brains and its multitude of cells. We each are confined to those cells, if you will. We then compare that transmitted information, consciously or not, with all the other information of that type that has gone through a similar process earlier in our lives (if it has been retained). It may even be that there is information stored in our genes forming the basis of some of these comparisons (although I may be telling a story here, I must admit). We only know what our minds have on board.
The amazing fact is that we see only light transmitted in the electromagnetic wavelengths (represented by the Greek letter λ or lambda) between 390 and 700 nanometers (1 nanometer (nm) is one billionth of a meter or 0.000000001 meters – oh, and a meter is about 3.4 inches more than a yard). The shorter (in our example, 390 nm is shorter) wavelengths have higher energy, the longer wavelengths have lower energy; scientists worked this out in the early 20th C. and established beautiful relationships:
E=hν (energy equals Planck’s constant (6.62607004 × 10⁻³⁴ m² kg/s) times the frequency – this is known as the Planck–Einstein relation; the “v”-like symbol is the Greek letter “nu”)
ν=c/λ (frequency equals the speed of light (c = 3.00×10⁸ m/s) divided by the wavelength in question)
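Combining the two relationships gives E = hc/λ, and a short Python check of the visible-range endpoints quoted above confirms the shorter-wavelength-means-higher-energy rule:

```python
H = 6.62607004e-34  # Planck's constant, m^2 kg / s (as given in the text)
C = 3.00e8          # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy of a single photon: E = h*nu = h*c / wavelength."""
    return H * C / wavelength_m

violet = photon_energy(390e-9)  # ~5.1e-19 joules
red = photon_energy(700e-9)     # ~2.8e-19 joules
print(violet > red)             # shorter wavelength -> higher energy
```

A violet photon carries nearly twice the energy of a red one, which is why it is the ultraviolet side of the spectrum, not the infrared side, that burns us.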
Okay, after that brief definition of terms, let’s return to the implication of this information. We see only light that is transmitted from objects we are facing or that is reflected to us when that light has wavelengths between 390 nm (shorter and with the color “violet”) and 700 nm (longer and with the color “red”). We do not see objects if they only transmit infrared electromagnetic radiation (again, the formal term for light) and we do not see objects that transmit microwaves or radio waves, although we have come to use both ranges of light very handily. So, if an object transmits light at 350 nm, our eyes do not “see” whatever light is transmitted at that wavelength – that is in the ultraviolet range, which may burn us although we do not see it. We do not see objects that transmit at 1000 nm (1 micrometer or μm) or 1,000,000 nm (1 millimeter or mm) or 1,000,000,000 nm (1 meter or m). Light, or more accurately electromagnetic radiation, is transmitted at all of these wavelengths and at all wavelengths between. We do not visually detect any of these transmissions at all, although we experience the information carried by them (microwaves carry cell phone signals and radio waves, pretty obviously, carry radio signals, but both of these are translated from their transmitted form into mechanical energy, which is not electromagnetic energy).
I’ve used the terms wavelength and frequency above. A wavelength is the distance (length) between the peak of one wave and the peak of the next. You can imagine this as a sine wave or as the distance between peaks of waves in a body of water (a glass, a pond, an ocean).
Frequency is the number of wave cycles (peak to peak, trough to trough, or node to node) that pass a given point in a unit of time. In physics and chemistry, the time unit used is the second, 1/60th of a minute. The following chart provides a nicely detailed list of relationships between the class of electromagnetic radiation (gamma or γ at the top with a wavelength of one-trillionth of a meter), the VERY narrow band of human-visible light between near-ultraviolet (NUV) and near-infrared (NIR) in the top third, and extremely low-frequency (ELF) at the bottom – with wavelengths of 100 megameters (Mm) or 100 million meters (100,000,000 m).
To summarize, we are washed with electromagnetic radiation, but we only see a very narrow bit of it – and that bit is so amazingly rich, in spite of its brevity, that we spend our entire lifetimes in awe (or should be) of all that is before us. If you are not in awe, you are not trying.
Here’s another complicating factor in what we see. Our rod and cone cells do not interpret the visible spectrum uniformly; they are better at absorbing some wavelengths of transmitted light from objects than others. Here’s a chart that shows this inequality:
And to completely befuddle us all, our sun, center of our solar system, but only one sun among countless suns in the known universe, does not transmit equally in all wavelengths:
This figure shows the solar radiation spectrum for direct light at both the top of the Earth’s atmosphere and at sea level. The sun produces light with a distribution similar to what would be expected from a 5525 K (5250 °C) blackbody, which is approximately the sun’s surface temperature. As light passes through the atmosphere, some is absorbed by gases with specific absorption bands. Additional light is redistributed by Rayleigh scattering, which is responsible for the atmosphere’s blue color. Regions for ultraviolet, visible and infrared light are indicated.
See that jagged red bit between 390 and 700 nm? That is energy (expressed as irradiance or watts/m²/nm) graphed against wavelengths. The energy is not a flat line, so the light hitting earth is not represented evenly through that range. Lucky for us, in fact, that some of that energy (although still not equally distributed) is absorbed by layers of atmosphere before it ever reaches us. Nonetheless, the objects we see do not absorb wavelengths from the sun equally in part because wavelengths from the sun are not provided equally within our visible range. The light that is absorbed by objects is not what we see either – we see the wavelengths that are transmitted, a sort of inverse image of what the object absorbs. Our eyes absorb light unequally as well, with some rod and cone cells doing service in some ranges, while leaving dips in the absorbance of wavelengths in other ranges.
But there is also an important difference in sources of light that adds to the complexity of what I’ve said above. There is transmitted light – light that is NOT absorbed by objects around us, but is transmitted to our eyes and interpreted as discussed – and there is emitted light – light that arrives at our eyes without being absorbed by anything else. A red (or green or blue) laser emits a specific wavelength of light. For instance, red lasers are available at 660 and 635 nanometers (nm), green lasers at 532 and 520 nm and blue lasers at 445 and 405 nm. While the quantum mechanism used to create the emission is more complex and involves stimulating a material to produce the emission, the light is emitted and, if looked at carefully (not dead-on), you are seeing each of those wavelengths without absorbance by an object, as your eye and brain interpret them. There is also thermal radiation – the radiation we see from suns and stars, although you only see the unfiltered color of those objects if you are in space without any light-absorbing material between you and the thermal object. Here is a clear explanation of the different types of light – and the associated colors – we see: http://homepages.wmich.edu/~korista/color-bb.html.
In spite of all the complexities of the electromagnetic spectrum and our perceptions within it, it is probably safe to say that the objects we see, whatever their transmitted light (colors), are shaped pretty much as we see them. They are assemblages of various arrangements of atoms; many would be transparent were it not for the light they absorb and emit according to the materials that compose them, but most objects have distinct shapes due to their chemical compositions, and we see things because the shapes – as well as the colors – have come to our eyes and become familiar in our minds.
Oliver Sacks, the eminent (and recently deceased) neurologist, has written about conditions in which the mind registers associations that are not as straightforward as I’ve suggested, but I’ll let him speak about that:
At the end of all this we see what we see, of course, but we see a distinctly earth-bound and human version of what there is to see. Other creatures, if we could understand their conversations, might disagree with us. We tell stories about the objects WE see and will never know the objects themselves in an impartial light.
These biases affect our experiences throughout all our senses. We hear because mechanical energy moves the fine hairs, the eardrum, hammer, anvil, stirrup, semicircular canals, cochlea and vestibulocochlear nerve and transmits some interpreted version of this physical jostling of molecules to our brain. To understand this process a little more anatomically, view the following:
The sound happens, but we “hear” an interpreted version that is limited in frequency (measured in Hertz (Hz), or cycles per second (cps)) and carried as mechanical energy instead of electromagnetic radiation. We humans, when our ears work properly, don’t hear much below 20 Hz or much above 18,000 Hz (18 kHz) (https://youtu.be/H-iCZElJ8m0 – at my age, having listened to the music I like at the volumes I once loved, I run out of frequency detection at about 8.5 kHz).
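If you want to probe your own hearing limits, a few lines of Python (standard library only – no audio packages assumed) can generate a sine-wave test tone as a WAV file; raise the frequency until the file goes silent to your ears:

```python
import math
import struct
import wave

def write_tone(path, freq_hz, seconds=2.0, rate=44100, amplitude=0.5):
    """Write a mono 16-bit sine tone at the given frequency to a WAV file."""
    n_samples = int(seconds * rate)
    samples = [
        int(amplitude * 32767 * math.sin(2 * math.pi * freq_hz * i / rate))
        for i in range(n_samples)
    ]
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(struct.pack("<%dh" % n_samples, *samples))
    return n_samples

# e.g. write_tone("tone_8500hz.wav", 8500)  # about where my hearing gives out
```

(The 44,100 Hz sample rate can faithfully represent tones up to about 22 kHz – comfortably past the edge of human hearing.)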
View the linked video and you will see that each sound we hear goes through anatomical processes that are somewhat like the parts of a microphone.
Operation of carbon microphone. When a sound wave presses on the conducting diaphragm, the granules of carbon are pressed together and decrease their electrical resistance.
As scientists have developed better tools, we have learned that infrasonic and ultrasonic sounds (i.e. sounds below and above our hearing range) are important communication methods for many of our fellow creatures. Recently, elephants have been recorded using frequencies between 1 and 20 Hz to communicate over very long distances (Infrasonic Animal Communication). Many animals – from bats, dolphins and birds to insects – use ultrasonic frequencies for a variety of purposes. In some cases, sound becomes a means of seeing: Ultrasonic Animal Communication. We are surrounded by sounds, but we can only hear some of them and understand far fewer.
There is also the matter of sound intensity. We do not hear sounds in our universe that do not rise above a certain pressure threshold.
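The usual way to quantify this threshold is the sound pressure level (SPL), a logarithmic decibel scale referenced to the nominal limit of human hearing:

```latex
L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right)\ \text{dB}, \qquad p_0 = 20\ \mu\text{Pa}
```

A sound at the reference pressure p₀ sits at 0 dB – the edge of audibility – and each tenfold increase in pressure adds 20 dB.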
There’s a world of sound that is happening, but because the sounds originate with tiny objects (objects expanding and contracting during the day, insect sounds), we do not hear them unless they are (1) recorded and amplified using special equipment or (2) we can listen to the “signal” in the absence of all the “noise.” Here is a whole page of tiny sounds that we may or may not ever hear.
There is much else to say about sound and the impact it makes on us. That “sounds” like an excellent topic for another post.
Taste and Smell (aka CHEMICALS!)
We smell smells. We taste tastes. We touch things and they touch us back. All of these are interpretations of the universe that surrounds us, but even these notions reside in our minds.
Our tongue is honeycombed with “taste buds” or papillae.
They resemble invertebrate life on a coral reef, but they are in your mouth, on your tongue.
Henry Vandyke Carter [Public domain], via Wikimedia Commons
They translate the foods we eat, which are complicated composites of chemicals found in nature and added by food scientists and manufacturers, into impulses through the afferent (or sensory) nerve.
These sensors are complex chemoreceptors, taking signals that are entirely unlike what the eye and ear translate and very much like what the nose translates, and turning those signals into impressions that signify a great range of information to us. For instance, this molecule is known as (-)-menthol (or more accurately, (1R,2S,5R)-2-isopropyl-5-methylcyclohexanol) and is found in the peppermint plant. For whatever unknowable reason, we taste (or smell) this and we think “mint!” It provides sensations that are cooling and slightly analgesic. It interacts with a protein receptor known as transient receptor potential cation channel subfamily M member 8 (TRPM8). But we don’t perceive it as a chemical that interacts with a protein receptor; we perceive it as “cool!” and “minty!” Without the protein signaling our brains that there was a fresh load of menthol on board, there would be no cool and minty.
When we place the following substance in our mouths, we think “sweet!”
We call it table sugar or sucrose or (2R,3R,4S,5S,6R)-2-[(2S,3S,4S,5R)-3,4-dihydroxy-2,5-bis(hydroxymethyl)oxolan-2-yl]oxy-6-(hydroxymethyl)oxane-3,4,5-triol.
This also tastes sweet, although it was synthesized by organic chemists and tested by careful methods to evaluate its taste. It is called sucralose, but consumers know it as Splenda®.
By Harbin (Own work) [Public domain], via Wikimedia Commons
The next molecule – butyric acid – has a very unpleasant smell and taste, and humans almost always gag when they experience it at sufficient concentrations. It’s a little amusing that we gag when we smell this, as it is the principal smell and taste component of human emesis, although it is also found throughout biology and is nothing more than a short-chain fatty acid. When it is present in sufficient amounts, usually following emesis or during the putrefaction of an animal, it is an extremely unpleasant smell. Humans principally react to this taste/odor and then associate it with other experiences they have had (illness, too much “fun,” a dead animal in a field, etc.). It is a smell lodged in the human mind, although we all wish we could forget it.
By Calvero. (Selfmade with ChemDraw.) [Public domain], via Wikimedia Commons
We taste something, we smell something, sometimes at the same time we taste it, and chemoreceptors in our nose and tongue communicate a set of information via a nerve into our mind and associations are made. We are not smelling or tasting the entire thing, perhaps, we are just tasting the chemical components that are (1) at a sufficiently elevated concentration to grab our attention and (2) are received in some meaningful way by a protein receptor in a manner that triggers the afferent nerve. This stimulates some kind of association in our minds.
An odd thing about taste and smell is that it has a cultural component. There is a fruit called the durian. It looks like this:
Sort of innocent-looking, if you ignore its spiky exterior and kidney-shaped flesh (the good stuff in fruit is called “flesh,” making it kind of creepy for no good reason). Its taste is something that many in southeast Asia love above all other fruits. It is called the king of fruits. British naturalist Alfred Russel Wallace described it as follows:
The five cells are silky-white within, and are filled with a mass of firm, cream-coloured pulp, containing about three seeds each. This pulp is the edible part, and its consistence and flavour are indescribable. A rich custard highly flavoured with almonds gives the best general idea of it, but there are occasional wafts of flavour that call to mind cream-cheese, onion-sauce, sherry-wine, and other incongruous dishes. Then there is a rich glutinous smoothness in the pulp which nothing else possesses, but which adds to its delicacy. It is neither acidic nor sweet nor juicy; yet it wants neither of these qualities, for it is in itself perfect. It produces no nausea or other bad effect, and the more you eat of it the less you feel inclined to stop. In fact, to eat Durians is a new sensation worth a voyage to the East to experience. … as producing a food of the most exquisite flavour it is unsurpassed.
Travel and food writer Richard Sterling writes:
… its odor is best described as pig-shit, turpentine and onions, garnished with a gym sock. It can be smelled from yards away. Despite its great local popularity, the raw fruit is forbidden from some establishments such as hotels, subways and airports, including public transportation in Southeast Asia.
It is thought the principal reason for the second reaction is that the fruit contains butyric acid in sufficient quantities to make it redolent of “gym sock,” as Sterling says. But here we have a cultural filter in gear. Some of our brains say “ugggh – butyric acid – I want to heave!,” while others say “mmmm – durian – I want to have it now!” Even with chemo-received information, our minds make of it what we individually will. I have looked for a video of people reacting to the smell of durian, but every video had some stagey nonsense or western bias that made it unsuitable. Suffice it to say that opinions differ markedly.
I have a thing about smelling perfume in airplanes, particularly when the airplanes are bucking around in some turbulence; the combination triggers dizziness and nausea and makes it more likely that I will grab for that convenient little bag in the seat pouch before me. That is not the intent with perfume, but for whatever reason intense smells are more likely to trigger emesis than the general smell of an airplane.
And that brings me to something we “smell” and “taste” every day, although we never really do either. We smell and taste more of it than anything else in our lives. It is, of course, air. This complex solution of gases – nitrogen, oxygen, argon, carbon dioxide, and so forth (as listed in the table below) – has no smell or taste that I can describe. Too much carbon dioxide and we feel short of breath; it has a sort of stale smell (perhaps). Too much methane and we might smell something that reminds us of a petroleum product. Sulfur dioxide is used in some dried fruits and some wines and is not usually considered a pleasant odor, although I would be hard-pressed to describe it. Ammonia is the smell associated with smelling salts – it causes the nose to constrict and our eyes to water – we want no more of that smell! But when these sixteen gases are mixed in something like the percentages listed below, the mixture smells of nothing at all. We might smell a wood fire or a bakery. We might smell a diesel engine or an incident of flatulence, but all of those smells are extraordinary and do not represent what we, as adults, breathe in and out somewhere between 12 and 20 times a minute. It smells and tastes of “all is good, all is normal.” But we also know when something is wrong or unusual or poisonous or dangerous, and it is not always because we have been in that situation before. It is most probably because we know what normal and good is and what it isn’t. Our minds know that something is up.
It’s all about signal-to-noise, with noise being good, old-fashioned, regular air and signal being anything that alerts us to something odd.
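That signal-to-noise framing is not just a metaphor; engineers quantify it as the ratio of signal power to noise power, usually on a decibel scale. A minimal sketch (the function name here is mine, for illustration):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10.0 * math.log10(signal_power / noise_power)

# A smell carrying ten times the "power" of the background air is a
# +10 dB signal; one no stronger than the background sits at 0 dB,
# and our noses tune it out as ordinary air.
```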
Gaseous composition of dry air.

Constituent               % by volume
Nitrogen (N2)             78.08
Oxygen (O2)               20.95
Argon (Ar)                0.93
Carbon dioxide (CO2)      0.036
Neon (Ne)                 0.0018
Helium (He)               0.0005
Methane (CH4)             0.00017
Krypton (Kr)              0.00011
Hydrogen (H2)             0.00005
Nitrous oxide (N2O)       0.00003
Xenon (Xe)                0.000009
Ozone* (O3)               trace to 0.0008
Carbon monoxide (CO)      trace to 0.000025
Sulfur dioxide (SO2)      trace to 0.00001
Nitrogen dioxide (NO2)    trace to 0.000002
Ammonia (NH3)             trace to 0.0000003

* Low concentrations in troposphere; ozone maximum in the 30- to 40-km regime of the equatorial region.
Mackenzie, F.T. and J.A. Mackenzie (1995) Our Changing Planet. Prentice-Hall, Upper Saddle River, NJ, pp. 288-307. (After Warneck, 1988; Anderson, 1989; Wayne, 1991.)
Soft, rough, cold, hot, sharp, dull, furry, hairy, smooth, spiky, watery, oily, breezy, windy, rainy, snowy, painful (dull pain, sharp pain, terrible pain, pain all over), tingly, silky, satiny, creamy…. All words that start with a finger or a tongue or a foot or a calf or a back or a belly contacting something – well, except for the feelings of pain, which may be acute or chronic, external or internal, slight or severe, or so many degrees and types in between. But most of them start with a touch, and we find a word in our made-up universe of words to describe how that object is interacting with us. If it is oily and we touch it with our tongue, that initial impression may be preceded or followed by taste and smell – does it taste hot or is it actually hot, does it taste like a salad dressing or does it taste milky? We find a way to add our senses together and tell a story about that which we have sensed.
Medical scientists have a more analytical approach to appreciating our touch. From this article (http://www.ncbi.nlm.nih.gov/books/NBK390/), a world of terms comes to help us understand the world we touch. Many of the terms describe the extent of sensation – its absence or decrease or excess – even the sensation of touch when no contact has been made. Some sensations are felt within the body, or displace the sensation from where it occurs to another location. It is, in some people, as mysterious a sense as any other, with as many subtleties and nuances as any sense.
But most of us use it daily to make our way. We feel the press of our feet as they are pulled to earth, with two foot-shaped bits of this enormous home pushing back at us. We feel doorknobs and car doors and glass doors and bathroom seats and water washing and winds breezing or sliding or pushing by. We feel forks and knives and spoons and dishes as we ladle food into our mouths for it to be touched and tasted and smelled on its way to creating energy.
In the absence of sight, we rely on our remaining senses more. We learn to read with our fingers, we pay greater attention to what we hear, we may attend to our sense of smell more vividly, although studies have determined that there are no heightened senses in these realms, just a more vivid attention to them.
Perhaps most importantly, our bodies tell us stories about the world around us – its shape and size, its presence and absence, the state of the weather, in ways our other senses do not.
But all of the senses together tell us stories, or contort those tales, in ways that inform our lives and make them beautiful to live.
Mind and Brain
The brain contains numerous areas responsible for the functions that permit life. Its most entertaining function, though, is as the seat of the mind. The mind has been a matter of debate since Grecian philosophers and probably before, but the Greek philosophers were lucky enough to have many of their thoughts, wise and not, on-point and ludicrous, passed down through the ages in written form. It is highly probable that there were other adept human thinkers before the Greeks, in locations other than the eastern Mediterranean (let’s define the eastern Mediterranean as all the way from the eastern shore of Italy to the western edge of Egypt, plus at least some hundreds of miles inland from all points in between). It may be that their records were destroyed by some egomaniacal ruler – or perhaps several. It may be that their writings were destroyed in the Alexandrian fire. It may be that their traditions were still vibrantly rooted in the oral tradition, that their sense of mind was far closer to what contemporary cognitive scientists believe than what a few Greeks believed, but we know not.
The mind is where we perform sense synthesis. We take our interactions with what we know of the physical universe through sight, sound, taste, smell, touch, and make a world within the lumpy lopsided ovoid of our skull. In spite of the sensual challenges I’ve posed above, many of us, at least as circumscribed by our various cultures, agree that chairs are usually chair-shaped (although sometimes quite oddly), trees and rocks are very much themselves, albeit in a staggering array of shapes, sizes and colors, the sky is sometimes very black, sometimes very grey, and sometimes quite blue, and often enlivened with the glorious hues of sunrise, sunset and countless nuances all day long.
There is a notion of the sensing homunculus, depicted as follows:
In this diagram, the somatosensory and motor cortices in the right cerebral hemisphere are flayed open to show (1) where various senses and actions are experienced or directed and (2) how various of those sensory and motive skills dominate our experience of our worlds, internal and external. For instance, the very large hands in each diagram indicate that these are enormously important sensing and manipulating units, while the hip is relegated to inferior sensing and motor importance. The parts and regions of our bodies do not sense all things in equal measure, so yet again we find our minds presenting a distorted version of that which surrounds us. We know what we know, but we know it because we sense it and then make something of that which we sense.
With our other four senses off, we can traverse a space and feel gravity attract our mass, with the enormity of the earth, through the soles of our feet, up through our calves, thighs, pelvis, abdomen, thorax, back, neck, head, and through our dangling hands, arms, shoulders. We can encounter what we call a “wall,” and know whether it is coated in gloss or semi-gloss coating, or in metal, wood, cloth, ceramic, plastic, or at least have an idea from our touch because that sense integrates with our previous experience, which gives color and texture and dimension to objects we experience with touch and names the objects.
But the integration is beyond touch. The sight is beyond seeing. The smell is beyond chemicals. The taste is beyond what touches our tongues. The sound is both the direct sound and all the reflected sounds, but is beyond even these to our experience of these sounds.
Then there is what happens automatically, what happens with time and experience, and what happens that is beyond the automatic and beyond experience. There are thoughts – from the mundane to the ecstatic, from routines to dreams, from those grounded in what we perceive with our senses to those that we fabricate to explain what cannot be adequately understood. Feelings lull and hum and percolate and spike and surge and they are feelings about other beings – plants and animals and microbes – and things that are personal, but often communal or cultural as well.
And, from what I can tell, neuroscientists and psychologists still don’t have a sufficient understanding of how the tissue we call nerves – the synaptic clefts, the electrochemical potentials between those cells, and all of the connections among those fibrous tendrils and gaps in all of the various structures of the brain, that bulbous growth at the top of our spines – how all of that stuff and electricity integrates and becomes “who we are” and “what we know” and “what we remember” and “what we think and feel” and “how we become more of what we are every day that passes.” We, the sum of all human knowledge at this juncture in our continuous questioning of ourselves and everything that surrounds us, run out of road – reach an invisible, yet pliable and perhaps yielding, wall in what we can know about all of “it.”
We probe, though. Some of the interesting work being done is happening at MIT. Drs. Rebecca Saxe and Nancy Kanwisher are among the principal workers in cognitive science.
Rebecca Saxe: How we read each other’s minds
Nancy Kanwisher: A neural portrait of the human mind
Beyond these systems, and beyond a description of what physically constitutes a brain and how the general “plumbing” seems to be wired, yet flexible, there are videos such as the following that provide an animated map of where stuff happens (to the extent it is currently understood) and where those experiences are stored (see last parenthetical). Virtually everything else is more ephemeral than neutrinos.
If I were exploring facile notions of reality, I could now suggest that we all live in a reality dictated by a giant computer-mind, as posited in “The Matrix” movies or “I Have No Mouth, and I Must Scream,” the terrifying Harlan Ellison story. But I am not.
Somehow, most of our minds, past or present, agree that our reality is a shared one. There are objects in our world, some of them near (the glasses on my nose, the trees and birds and asphalt and buildings outside) and some far (those objects that Hubble, Chandra, and numerous others share with us, but that are beyond our immediate senses except through the intervention of powerful optics and false color imaging; the same could be said of sounds (mechanical energy) gathered from afar). It may be that people from various cultures would describe commonly perceived objects differently – perhaps the parable of the blind men and the elephant applies here. If we are not speculating about varying perceptions of common objects, it is fairly predictable that we humans will come up with different explanations for different phenomena – for instance, why does the sun “rise” in the east and “set” in the west?
But this is the thing – whatever it is that surrounds us, whatever we see, hear, smell, taste and feel, our sense of our surroundings is something that occurs in each of our brains, individually. It sounds simple, but it is not. It sounds obvious, but it is so complex that we do not understand it in any completely satisfactory way. It is my belief that human beings get in the deepest trouble available to them when they lean too far out over their skis, so to speak, or when they get their cart before their horse or count chickens before they are hatched – we have a lot of cautionary analogies in our languages for this way of behaving, but we do it a lot anyway.
In 1961, the concept of the unreliable narrator was coined by Wayne C. Booth in “The Rhetoric of Fiction” (a convenient summary can be found here). While this concept applies to characters in fiction going back to Plautus (254 – 184 B.C.E.), the unreliable narrator may be applied to much of what we human beings say and write – in public or private. Vide infra.
There are several kinds of stories.
There are stories people tell each other that are, effectively, catalogs of factual events, subtitled with necessarily subjective commentary on how the storytellers felt during the events. These are simple conversations.
There are stories that attempt to explain events, but abridge the events so that details that are critical to understanding the facts are omitted for brevity or because the storyteller doesn’t understand their import, either to their general audience or to individuals within the audience. These are personal, communal and general histories.
There are stories that attempt to explain events, but fail due to the storyteller’s misapprehension of crucial phenomena within the event, or the event as a whole. These can be told out of good will (the teller truly wants to help their auditors understand the event) or arrogance (the teller understands that they don’t understand, but want to pretend that they do to make themselves seem more powerful in others’ eyes than they currently are perceived). These can be simple stories about complicated events, but can also be histories, autobiographies, biographies and religious stories. It applies to out-dated scientific theories as well (I’m looking at you, Ptolemy and Aristotle).
There are stories that are fictions, communicated to the auditor or reader as such, and that are intended to reveal nuances of life as we have known it or assume it may be some day. Sometimes, they are also just meant to entertain. These are what we commonly think of as “stories.”
There are stories that explain phenomena in words, in pictures, and in mathematical formulae (the mathematics of these stories is the key to understanding them), and that attempt to define how life and objects in the universe we perceive seem to behave. Some of these stories, once rendered to anyone who will read them or listen, become false – become fictions – when our understanding improves. That is part of the intent of science: something intended as a fact becomes only partially true or even completely false when the data are queried further. Some stories, although constantly queried for improvements, stand up to brutal scrutiny and become as true as anything we can know. There are three types of stories in this category: (1) outmoded hypotheses, theories, and laws; (2) current hypotheses; and (3) current theories, over-arching theories, and laws (How Science Works).
Then, of course, there are various kinds of lies. Our world is full of lies, and humans aren’t the only ones telling them. Is an insect that looks like a twig or a leaf telling its likely predators that it is not a delicious insect? Is an octopus that adapts its color to match a rock telling predators it is just a rock? Do hognose snakes and opossums “play” dead? Does Koko tell the truth when she tells her human companion that a kitten ripped a sink out of the wall? We have taken lying to an entirely different level – but all of these examples provide a basis for understanding why lying occurs among living things. Of interest is recent research into “tactical deception” among primates and why that may play an important, although often annoying, part in how we have evolved.
The whole point of this initial post is that (1) humans receive information from their senses, (2) human brains interpret that sensory information in very complicated ways that we do not completely understand, (3) human brains add information that is non-sensory, e.g. a dream, a feeling, an intuition (correct or incorrect), a belief, a thought, a concept, an hypothesis or two, that may or may not have any basis in reality, except for that individual or family or community or culture (but is usually individual) and (4) because of these factors, we should be more humble about our non-sensory, individualized senses of the universe in which we live, but we are often (very often) not! We privately believe or publicly state personal beliefs or communal/cultural belief stuff without an iota of sensory evidence to support these beliefs – and we believe others should believe as we do, in spite of the utter tenuousness of our belief.
But that’s what we do.
This collection of words, stories concocted of ones and zeros, will discuss this problem and try to come to some rational beliefs about belief. In spite of the length of this introduction, every aspect of what I have introduced above is far more complicated than I have suggested (e.g. electromagnetic radiation, structures of the eye and ear, structure of the brain, etc.). In general, I believe that the complicated stories are more likely to resemble objective truth than simple stories. For instance, it is more likely that it has taken the universe 12 to 14 billion years (note range, which implies hypotheses that are being investigated) to reach its current state than it is that it took 6,000 years. Why? Because the complexity of the process that arrives at that range is based in fastidious data collection and analysis by numerous research groups around the world and is open to change and refinement. It is the best current assessment of available data using a variety of different modeling approaches. It may be wrong – and that’s okay – but it also may be directionally correct and improving all the time.
If anyone made it this far, congratulations! That is all you get, except for all of the stuff compiled above – a congratulatory statement from yours truly. I hope you enjoy other entries to come.
A final note: Just because I believe what I write and believe it is rational does not mean I expect you to do so. I hope you do because I believe it is reasonably well argued, but it would be odd to write about my beliefs and expect that you fall in line. I do not.