A Look at Our Nearest Dwarf Planet: Ceres


It’s a good time to be a dwarf planet. The most famous dwarf planet, Pluto, will be closely photographed and studied in two weeks by the New Horizons spacecraft. But right now NASA is getting up close and personal with another dwarf planet much, much closer to home with its Dawn spacecraft. That dwarf planet is in fact a really large asteroid in the asteroid belt between Mars and Jupiter, called Ceres.

Ceres is so large that it alone comprises approximately a third of the entire mass of the asteroid belt. It is remarkably spheroidal for an asteroid, because its enormous mass gives it enough gravity to pull itself into a rounded shape. It’s the only asteroid in the solar system large enough to be rounded like that, which is why it is technically classified as a dwarf planet.

Ceres has already offered tantalizing mysteries to us through the Dawn orbiter. First, the spacecraft spotted very bright, shiny patches of a kind rarely seen on other celestial bodies. At a great distance the spots appeared as one giant spot, but upon closer approach they were resolved into several smaller spots. At first, scientists thought they were massive ice volcanoes, of the kind seen on wild worlds like Triton and Enceladus. But they are most likely giant fields of ice or salt.

Another intriguing mystery is a recent image of a giant mountain rising from the flat surface of Ceres. As tall as Mont Blanc, the highest peak in the French Alps, the as-yet-unnamed space mountain would be like having a peak of the Alps plopped down in the flatness of Kansas. Scientists do not yet know what could have caused this odd geological formation. One of the oddest things about the enormous mountain is that it isn’t shaped like the mountains we have on Earth. It’s an almost perfect triangle–making it in essence a pyramid the size of a mountain. Truly alien!

Book Review: Rise of the Robots


Two of the most significant trends in recent decades, stagnating employment figures and exponentially advancing breakthroughs in robotics and automation, are explored in futurist author Martin Ford’s second book, Rise of the Robots. Ford’s first book, The Lights in the Tunnel, was solid, but he makes a big jump in voice, confidence, and, especially, research in this one. His first book mostly appealed to readers interested in futurist thought, but Rise of the Robots will appeal to even the most conservative, mainstream, Panglossian economists, technologists, and general readers.

One of the salient things about our species as it matures is how much sharper our self-consciousness gets. We can identify trends, epochs, and zeitgeists more readily, even as we’re living through them. In the past, history was less conscious, and broad historical labels were applied to periods long after the events took place. Our age, which certainly will be remembered for its immense technological breakthroughs, is an age paralleled only by the first Industrial Revolution. Few can dispute the seismic shifts taking place as technology infiltrates ever more areas of our reality, potentially dwarfing the role that the breakthroughs of the Industrial Revolution played in human life.

So far, the consensus among economists and politicians is that our current age will result in a situation similar to what happened after the Industrial Revolution. The machine age made certain types of labor obsolete, but opened up an enormous range of new jobs for millions of workers. The wealth and expansion of the middle class in the 20th century bears witness to this. All we can do, the consensus goes, is follow technology into robotics and automation and wait for new career opportunities to be created, as has always happened before.

What makes Martin Ford different is that he refuses to so easily equate the invention of something like the internal combustion engine with something like Quill, an artificial intelligence engine that generates flawless news stories every thirty seconds, using a powerful algorithm to write articles indistinguishable from, and often superior to, pieces written by human beings. The key point is that while the first Industrial Revolution mechanized physical labor, what we’re living through now, the Age of Automation perhaps it will be called, mechanizes intellectual labor. Ford makes it uncomfortably clear that even the most seemingly secure white-collar jobs will be squarely in the sights of advanced robotics and automation technologies in a mere decade or so.

The book offers plenty of striking statistics, like how during the seven-year span from 1995 to 2002, 22 million factory jobs vanished from the globe, while over that same period manufacturing output increased by 30 percent. Rise of the Robots is filled with vivid examples from recent history showing outdated conservative thinking crashing up against the new reality. In 2011, a small town in North Carolina won a competition to have Apple build an enormous, billion-dollar data center there. The town, Maiden, offered the most attractive tax breaks and won the honor of having a data center spanning many acres constructed within its borders. The expectation was that a fairly serious pool of decent jobs would be on offer, but only fifty full-time positions were created. Most of the real work was done by algorithms, of course.

The first 80 or so pages of Rise of the Robots focus almost exclusively on surveying the keystones of modern economic thought. In his first book, Ford stated at the outset that he was not an economist, but he makes no such claim in this book, offering reams of thinking drawn from a wide range of economic sources. He’s pored over every major study and found negative trends in all of them.

The internet and related technologies have been around long enough to have created any new economic categories they are capable of creating. Ford, like fellow philosopher of the digital economy Jaron Lanier, sees no reason to delay judging the internet’s effect on the economy. That effect has been to dramatically accelerate the worst type of winner-take-all capitalism, creating a tiny class of elite billionaires, a small class of well-off coders, programmers, and designers, and not much else. There’s no equivalent to a Ford factory worker in the new internet economy. Kodak employed well over a hundred thousand workers at its peak; Instagram employs a few dozen. Sure, the internet opens up lots of opportunities for freelance writers to get paid a few cents for every hundred words they write, but these are not solid middle-class careers. The rise of the temporary, gig economy, with a few outlier stories of 23-year-olds who become billionaires, perfectly mirrors the kind of economic paradigm the internet generates.

Ford is generous with his analysis of the consensus viewpoint, even wondering whether human capabilities might develop enough to make more people suitable for the available jobs in the forthcoming, transformed economy. But there is no evidence that human cognitive capacities can increase broadly enough to supply a thriving new middle class with employment in the kinds of specialized jobs that will exist in the automated future.

Eager to be taken seriously as part of the mainstream of political and economic thought, Ford is critical of the wilder futurists on the scene, like Ray Kurzweil and others associated with the so-called Singularity. Ford thinks that such outlandish, quasi-cultish movements, promising immortality through science within a mere few decades, make people tune out the real developments that will impact job prospects right around the corner.

Ford discusses a company called Momentum Machines, which has a robot ready to go that can outperform a fast-food worker in every way. This is not science fiction; it is the present. McDonald’s, the industry leader, has not pulled the trigger on automation in a full-scale way yet because it is cheaper to have human employees making close to the minimum wage. But as the robots become cheaper to produce, and as calls for a comfortable living wage grow among the millions of fast-food workers worldwide, the result is clear.

So what can be done? This is not a problem that can be solved by the same old political and economic strategies. It will require a radically new set of ideas, and people won’t want to hear them, even though the shift is inevitable. Ford patiently and methodically argues that the idea of an income needs to be slowly detached from the idea of a job, since there already aren’t enough jobs to go around, and there will only be fewer as automation progresses. People still need an income to participate in the market economy, and a healthy market economy needs a strong, wide range of participants with decent purchasing power. The wealthiest man in the world can’t support an entire industry by himself–no billionaire will buy tens of thousands of iPads.

Participating in the market economy is a vital right for all citizens of a capitalist society, and should not hinge on whether one is lucky enough to secure one of the rapidly dwindling number of available jobs. Lots of people having decent incomes is a permanent need, but having a job is a temporary circumstance soon to be forever altered by world-historical progress.

Read Rise of the Robots to get a solid feeling for where we are all headed, and join the undoubtedly slow, uphill process of arguing for the inevitable detachment of income from jobs.

Life in the Very Early Universe


Here is one of the most fascinating concepts in cosmology I’ve come across. In this New York Times piece from December 2014, astrophysicist Avi Loeb discusses his pioneering work on the earliest epochs of the Universe: the first ten million or so years after the Big Bang.

This was long before stars and galaxies formed. The earliest stars and galaxies did not appear until hundreds of millions of years after the Big Bang, so for a huge stretch of time the only real heat source was the leftover radiation from the Big Bang itself.

This leftover heat is known as the cosmic microwave background radiation (CMB). The common assumption is that life could only have evolved after stars and galaxies had formed and spent billions of years warming planets.

But Loeb had a different idea. The Universe has been cooling as it has expanded over the 13.8 billion years since the Big Bang. Maybe the earliest life forms didn’t need heat from a star, but received plenty of heat from the CMB. What would life have been like if it drew its heat energy not from a star but from the background radiation left over from the Big Bang itself? Such creatures could theoretically have existed, but would they be at all recognizable to star-centric life forms like us?
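As a rough back-of-the-envelope check (standard textbook scalings, not figures from Loeb’s paper), the CMB temperature simply scales with redshift, and the age of the early, matter-dominated Universe follows an approximate power law. Plugging those in shows when the entire sky was at roughly room temperature:

```python
# Rough estimate: when was the CMB at room temperature?
# Assumptions (standard cosmology scalings, not taken from the article):
#   - CMB temperature scales as T(z) = T0 * (1 + z), with T0 = 2.725 K today
#   - in the matter-dominated era, age scales roughly as t(z) ~ 17.3 Gyr / (1 + z)**1.5

T0_K = 2.725            # CMB temperature today, in kelvin
T_ROOM_K = 300.0        # roughly room temperature, friendly to liquid water
AGE_COEFF_GYR = 17.3    # approximate coefficient for a flat universe with ~30% matter

z = T_ROOM_K / T0_K - 1                   # redshift at which the CMB was ~300 K
age_gyr = AGE_COEFF_GYR / (1 + z) ** 1.5  # approximate age of the Universe then

print(f"Redshift: z ~ {z:.0f}")                                     # ~109
print(f"Age of the Universe: ~{age_gyr * 1000:.0f} million years")  # ~15 million years
```

That lands squarely in the window Loeb describes, when any primitive rocky world would have been kept warm by the sky itself.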

The idea hinges on the existence of extremely primitive, rocky planets in the first ten million or so years of the Universe. Loeb guesses that matter would have collected in dense enough pockets to form surfaces for life to exist upon and be warmed up by the CMB. There of course would have been no day or night on these archaic planets, as light and heat were a constant of the entire fabric of the cosmos, rather than emanating from fixed nuclear fusion reactors.

Stranded in our Neighborhood

Shining brightly in this Hubble image is our closest stellar neighbour: Proxima Centauri. Proxima Centauri lies in the constellation of Centaurus (The Centaur), just over four light-years from Earth. Although it looks bright through the eye of Hubble, as you might expect from the nearest star to the Solar System, Proxima Centauri is not visible to the naked eye. Its average luminosity is very low, and it is quite small compared to other stars, at only about an eighth of the mass of the Sun.

However, on occasion, its brightness increases. Proxima is what is known as a “flare star”, meaning that convection processes within the star’s body make it prone to random and dramatic changes in brightness. The convection processes not only trigger brilliant bursts of starlight but, combined with other factors, mean that Proxima Centauri is in for a very long life. Astronomers predict that this star will remain middle-aged — or a “main sequence” star in astronomical terms — for another four trillion years, some 300 times the age of the current Universe.

These observations were taken using Hubble’s Wide Field and Planetary Camera 2 (WFPC2). Proxima Centauri is actually part of a triple star system — its two companions, Alpha Centauri A and B, lie out of frame. Although by cosmic standards it is a close neighbour, Proxima Centauri remains a point-like object even using Hubble’s eagle-eyed vision, hinting at the vast scale of the Universe around us.

As much as the vastness of intergalactic space invites us to imagine reaching other galaxies, it is doubtful that human beings will even reach another star system before our species goes extinct. The best we can probably hope for is fully and comprehensively studying, exploring, and potentially colonizing the nooks and crannies of our own Solar System. Barring another spacefaring civilization wandering into our neighborhood, it’s likely that our species will begin, live, and die within our own Sun’s backyard: the eight planets it formed and the scores of Kuiper Belt objects at its furthest reaches.

The nearest galaxy to our own is Andromeda, which is 2.5 million light-years away. The closest star to Earth after the Sun, Proxima Centauri, is 4.3 light-years away, and one light-year is about 5.9 trillion miles. That’s over 25 trillion miles, and that is as close as it gets.

We actually can’t even see Proxima Centauri with the naked eye from Earth, despite its cozy proximity, as it is a very dim, cold (by star standards!) red dwarf. As such, it is unlikely to have been able to support a planet’s evolution over billions of years that would be needed for any kind of intelligent life to take hold. And of the next thirty nearest stars to Earth, twenty are red dwarfs.

It does not seem even remotely within the realm of possibility that human beings will be able to get even part of the way to the nearest star system, which itself almost certainly has no more interesting features than our own Solar System. New Horizons, which left Earth faster than any spacecraft before it, took nine years to travel the three billion miles to Pluto, and it only managed that because it was made as small and light as possible. Anything carrying a human being, let alone a group of them, wouldn’t approach New Horizons speeds.
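To put that in perspective, here is a quick calculation using the round numbers quoted above (approximations from this post, not precise mission figures):

```python
# Back-of-the-envelope: how long would a New Horizons-class probe take to reach
# Proxima Centauri? Inputs are the round numbers from the text, not mission data.

MILES_PER_LIGHT_YEAR = 5.9e12      # ~5.9 trillion miles
DISTANCE_LY = 4.3                  # Proxima Centauri, in light-years
PLUTO_MILES = 3e9                  # ~3 billion miles to Pluto
PLUTO_YEARS = 9                    # New Horizons' travel time

distance_miles = DISTANCE_LY * MILES_PER_LIGHT_YEAR       # ~2.5e13 miles
speed_miles_per_year = PLUTO_MILES / PLUTO_YEARS          # average cruise speed
travel_years = distance_miles / speed_miles_per_year

print(f"Distance to Proxima Centauri: ~{distance_miles:.1e} miles")
print(f"Travel time at New Horizons' pace: ~{travel_years:,.0f} years")   # ~76,000 years
```

Roughly seventy-six thousand years, many times longer than all of recorded human history.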

That is, by traditional means. It’s well known that wormholes, which bend space and time to create a shortcut between vast distances, are mathematically possible and represent the best theoretical means of getting from one star system to the next. But wormholes don’t appear to exist in nature–they would have to be created, and we currently have no idea how to create them.

Even if they could be created, we have no idea how they would respond to having enormous human spaceships thrust into them. They are likely to be highly unstable entities that would collapse should any foreign matter be introduced into them. Wormholes are very much science fiction, and will almost certainly remain so for however many centuries our species exists before reaching its inevitable end in any of the handful of ways, like climate change, nuclear apocalypse, or AI takeover, currently looming.

Again, in all likelihood the human race will never reach even our closest stellar neighbor. But there’s so much out there, and it still deserves to be appreciated and understood to contextualize our place in the grand scheme of the universe, even if we are extremely unlikely to explore it in any meaningful sense before our species goes kaput.

What we know and how far we get should not be compared to the total knowledge and vast exploratory territory in the galaxy or the universe. We should be incredibly honored and proud that, for trillions and trillions of miles in any direction, the only ones seeking answers to questions or new places to explore are human beings.

Acquired Characteristics aren’t Heritable…Right??


Charles Darwin, of course, did not “invent” the concept of evolution, but collected a mountain of evidence to show how it works through natural selection. The idea that species had evolved from less complex life forms over unfathomable eons, what scientists call “deep time,” had been in the air for quite some time.

One prominent theory of evolution was put forth by the French biologist Jean-Baptiste Lamarck, who lived from 1744 to 1829. Lamarckian evolution was all but left in the dustbin of history after Darwinian evolution was vindicated, but it is a very interesting counterpoint.

The basic idea of Lamarckian evolution is that characteristics acquired by organisms during their lifetimes, like increased muscle mass through consistent periods of intense exercise, can be passed on to offspring. In Darwinian evolution, less fit organisms simply die off, while in Lamarckian evolution, organisms can strive to become fitter through their efforts.

It sounds like a classic bit of 19th century pseudoscience, though notably more optimistic and less virulent than the gold standard of 19th century pseudoscience, phrenology. There is something empowering and appealing about the idea that what we achieve in life in terms of improving our minds and bodies can be transmitted to our children, rather than being at the whim of natural selection.

Shockingly, recent research has provided evidence that some Lamarckian ideas may still be alive and kicking in cutting-edge biology, with one major difference. Lamarckian evolution called for positive acquired traits to be passed on as well as negative ones, but it appears that only negative ones actually can be. Researchers from the University of Cambridge have found that genetic deficiencies resulting from trauma, poor lifestyle, or stress from the environment are in fact heritable.

It has long been known that our genetic code is consistently impacted by our actual daily lives, in terms of lifestyle, environment, and severe trauma. But it was always thought that this rewiring only affected the organism it happened to, while that organism’s genetic material was passed on to offspring in essentially pristine condition.

The truth is that about five percent of our genetic code carries traces of past events, meaning that negative experiences like trauma, poor diet, or poor lifestyle choices may be passed on to subsequent generations.

This is especially disconcerting given how sedentary we have become in recent decades, as technology enables us to do ever more without moving. Earlier studies have shown that sedentary lifestyles harm us not only in superficial ways like lack of muscle tone or waist size, but have genetic consequences as well. Now when you sit down to binge-watch the latest Netflix original series, you’re not only potentially shortening your own life, but your children’s and grandchildren’s as well.

Planets are Overrated


Whenever people talk about exploring space, they tend to focus on putting human explorers (or at least rovers or landers) on other planets. Usually, these are planets outside our Solar System, called exoplanets. These are the far-flung, mysterious places where our imaginations and reality seem totally capable of coinciding. Huge civilizations with technological wonders and natural wonders beyond our comprehension seem possible on these exoplanets, controlled by alien creatures that may have some humanoid characteristics, or may be truly alien life forms with no similarities to us whatsoever.

Perhaps more than any other celestial object, Mars has dominated the public’s imagination, probably because it is a nearby planet with surface features similar to our home planet’s. We’ve explored it fairly extensively with image mapping and rovers, and know a lot about its chemical composition. It seems like there is little left for us to learn from it, and few surprises remaining. But it retains its unparalleled hold on the popular imagination as a potential terraforming colony or outpost for energy harvesting.

Part of the supremacy of planets in our space questing comes from the assumption that they’re so much bigger than moons. But this is not always the case. The planet Mercury is dwarfed by both Jupiter’s moon Ganymede and Saturn’s moon Titan, and is only slightly larger than Jupiter’s Callisto. Even Mars is not dramatically bigger than Ganymede.
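For concreteness, here is a quick side-by-side using approximate mean diameters (rounded reference values supplied here for illustration):

```python
# Approximate mean diameters in kilometers (rounded reference values, for illustration)
diameters_km = {
    "Mars (planet)": 6779,
    "Ganymede (moon of Jupiter)": 5268,
    "Titan (moon of Saturn)": 5150,
    "Mercury (planet)": 4879,
    "Callisto (moon of Jupiter)": 4821,
}

# Sort largest to smallest to see how the big moons stack up against the small planets
for name, d in sorted(diameters_km.items(), key=lambda item: item[1], reverse=True):
    print(f"{name:28s} {d:>5d} km")
```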

If our own Solar System is any indication, moons are, as a group, more geologically active than planets (or dwarf planets). Most of the geological activity, including the volcanic variety, occurs not on planets but on various moons throughout the Solar System. Jupiter’s moon Io, the most volcanically active body we know of, blasts plumes of sulfur hundreds of miles into space, while Saturn’s moon Enceladus vents jets of ice directly into Saturn’s glorious ring system. These eruptions are more or less constant, unlike Earth’s volcanoes, which are usually dormant (and Mars’s, which are long dead). Beyond geological activity, the densest atmosphere of any world with a solid surface, after Venus, doesn’t belong to a planet either–it belongs to a moon, Titan.

There’s a school of thought that a planet like Earth, capable of sustaining multitudes of complex life, might be extremely rare. As in, completely singular. According to this “rare Earth hypothesis,” too many factors have to break just right for a planet to sustain life. One major issue is the scarcity of Sun-like, class G stars: our galaxy contains approximately 100 billion stars, yet perhaps 70 percent of them are red dwarfs.

The problem with red dwarfs is that they are small and dim compared to stars like our Sun, so any planet warm enough for life has to orbit very close in, completing each orbit in a matter of days. Such close-in exoplanets would be tidally locked to their parent star, meaning they would rotate on their axis in sync with their orbit, so the same side would always face the star: perpetual day on one half of the planet and perpetual night on the other. Stars that are too big present problems for life-sustaining planets as well, because the more massive a star, the shorter its stable lifetime. Our Sun has a stable period of about ten billion years during which it shines with consistent luminosity and temperature, and planets (and their moons) need billions of years of stable heat and light to form and eventually evolve life.
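To see why those orbits are so tight, here is a sketch using Kepler’s third law and a generic red dwarf; the mass and luminosity are assumed, Proxima-like round values, not measurements of any particular star:

```python
import math

# Rough sketch: orbital period of a planet in the habitable zone of a generic red dwarf.
# Assumed illustrative values (not measurements of any particular star):
#   - stellar mass: 0.12 solar masses
#   - stellar luminosity: 0.0017 solar luminosities
# The Earth-equivalent distance scales as sqrt(L), and Kepler's third law gives
# the period: P^2 = a^3 / M  (P in years, a in AU, M in solar masses).

M_star = 0.12    # stellar mass, in solar masses (assumed)
L_star = 0.0017  # stellar luminosity, in solar luminosities (assumed)

a_hz = math.sqrt(L_star)                    # distance (AU) receiving Earth-like warmth
period_years = math.sqrt(a_hz**3 / M_star)  # Kepler's third law

print(f"Habitable-zone distance: ~{a_hz:.3f} AU")            # ~0.04 AU
print(f"Orbital period: ~{period_years * 365.25:.0f} days")  # ~9 days
# An orbit of just over a week, close enough in that tidal locking is expected.
```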

Exoplanet detection has made giant leaps in recent years, but observation methods are still largely indirect. Direct images of exoplanets are exceedingly rare and show little more than a faint dot of light, and detailed pictures are probably many years away. Astronomers detect exoplanets primarily through radial velocity, which reveals an exoplanet’s mass, and transit photometry, which reveals its radius. A massive planet slightly alters the position and velocity of its parent star, and that wobble can be detected. A transiting planet also blocks out a fraction of its parent star’s visual brightness, or luminosity, and the amount of dimming indicates the exoplanet’s radius.
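To make the two signals concrete, here is a toy calculation for a Jupiter-like planet around a Sun-like star, using standard rounded values supplied here for illustration:

```python
import math

# Toy illustration of the two main detection signals, using rounded Jupiter/Sun values.

# --- Transit photometry: fractional dimming = (planet radius / star radius)^2 ---
R_PLANET_KM = 69_911   # Jupiter's mean radius (approx.)
R_STAR_KM = 696_000    # Sun's radius (approx.)
transit_depth = (R_PLANET_KM / R_STAR_KM) ** 2
print(f"Transit depth: ~{transit_depth * 100:.2f}% dip in brightness")   # ~1%

# --- Radial velocity: the star's reflex motion around the common center of mass ---
M_PLANET_KG = 1.9e27        # Jupiter's mass (approx.)
M_STAR_KG = 2.0e30          # Sun's mass (approx.)
A_METERS = 7.78e11          # Jupiter's orbital radius, ~5.2 AU
PERIOD_S = 11.86 * 3.156e7  # Jupiter's orbital period, ~11.86 years, in seconds

v_planet = 2 * math.pi * A_METERS / PERIOD_S   # planet's orbital speed (m/s)
v_star = v_planet * M_PLANET_KG / M_STAR_KG    # star's wobble speed (m/s)
print(f"Star's wobble: ~{v_star:.1f} m/s")     # ~12 m/s, about the speed of a fast bicycle
```

A one percent dip in starlight and a bicycle-speed stellar wobble are tiny signals, which is part of why these detections remain indirect.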

About 4,000 exoplanets have been discovered so far, primarily using radial velocity and transit photometry on nearby stars. No exomoons have been discovered yet, but we know that giant planets like Saturn can have dozens of moons, so there is no telling how many there are in the Milky Way. Sure, there are a lot of planets outside our Solar System, but there are a lot more moons, and judging by our own Solar System, the moons have a lot more going on.

Solar System Missions: Europa


With New Horizons capping off a long line of Solar System probe missions, from Mariner to Pioneer to Voyager, it’s time to begin advanced science missions targeting specific celestial bodies in our Solar System.

The Curiosity rover has been an enormous success on Mars, and the Huygens lander incredibly managed to land on Saturn’s amazing moon Titan (the header image of this site) back in 2005. But Huygens was only designed to survive the descent through Titan’s peerlessly dense, active atmosphere and landing on its unpredictable terrain. It managed to transmit data from Titan’s surface for about 90 minutes, which is why we know so much about it.

As romantic as it may seem to land an exploratory rover on the surface of these moons, a dedicated, specially designed orbiting probe with cutting-edge modern scientific instruments can tell us just as much as, if not more than, a rover. It also does away with the immense difficulties of landing a man-made craft on a distant moon.

As amazing a place as Titan is, Saturn is really far away. Jupiter, meanwhile, is a reasonable 5.2 astronomical units (AU) out, while Saturn nearly doubles that at 9.5 AU. Earth, of course, sits at 1 AU, the basic unit of measurement we use for the Solar System: an AU is the distance from the Sun to the Earth, about 93 million miles.
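For a rough sense of scale, using the round numbers above:

```python
# Quick unit check on the distances above (rounded values from the text).
AU_MILES = 93e6   # one astronomical unit, ~93 million miles

distances_au = {"Earth": 1.0, "Jupiter": 5.2, "Saturn": 9.5}

for body, au in distances_au.items():
    print(f"{body:8s} {au:4.1f} AU = ~{au * AU_MILES / 1e6:,.0f} million miles")
# Saturn sits nearly twice as far out as Jupiter, which is a big part of why a
# Jupiter-system mission is so much easier to fly.
```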

Luckily, Jupiter has some amazing moons of its own, well worth exploring. Europa, the smallest of Jupiter’s four large Galilean moons but still the sixth largest moon in the Solar System, may not be quite as exotic as Titan–it’s doubtful that any other body in the Solar System has a feature to rival Titan’s massive liquid methane surface seas. But evidence collected by the Galileo Jupiter orbiter, launched in 1989, strongly suggests that Europa, which has an icy surface like most objects that far from the Sun, hides a salty liquid ocean beneath the ice, sitting atop a rocky seafloor. That rocky seafloor may be punctured by hydrothermal vents, circulating heat and compounds.

The evolution of these missions is clear: first come exploratory spacecraft designed to orbit planets and flag objects of special interest; decades later come dedicated follow-ups like the Europa Clipper, which would launch in 2022 if it is selected. The way it works at NASA is that several mission concepts are developed and then ranked, and whatever ends up at the top of the list gets the limited funding available.

This orbiter will carry a very advanced scientific payload: over 180 pounds of instruments. Among them are a magnetometer to measure the depth and salinity of Europa’s ocean, an infrared spectrometer to map the materials that make up Europa’s surface, an imaging system to provide high-resolution maps of that surface, a heat detector, and a mass spectrometer and dust analyzer that will measure chemicals in Europa’s very thin atmosphere and material ejected from the surface into space, including chemicals that might indicate the presence of hydrothermal vents.

While the mission is not yet finalized, it is a good bet. There is a lot of interest in these exotic moons of the outer Solar System, and since the concept for a robotic submarine to explore Titan’s methane seas was not selected for funding, this is the best shot humanity has to get up close to a vibrant, active celestial body any time soon.

While it’s not likely that the orbiter will detect evidence of massive alien creatures, it may well detect signs of microbial life, which would be a momentous discovery. Since Europa has had some four and a half billion years to do its thing, with the highly active conditions under its icy surface, it is hard to imagine it being totally devoid of life.

Book Review: Who Owns the Future?


In his second book, Who Owns the Future?, pioneering virtual reality researcher turned philosopher of the digital economy Jaron Lanier attempts a tricky maneuver: urging us in a more purely capitalist direction while also encouraging us to be far more humanistic. It may strike leftists as too acquiescent to the exponential stranglehold that capital has over human potential, and it may strike rightists as excessively concerned with spreading financial security to a wide base. But that’s the strength of the book–it is revolutionary, but modestly so, in a way that might actually apply to the rapidly approaching digitized future.

Capitalism, digitization, and human dignity need not be thought of as incompatible. Who Owns the Future? is a unique blend of clearheaded realism about digitization’s exponential narrowing effect on our economy, with a decidedly hopeful and far from dystopian tone.

In 2015, we spend countless hours contributing to the hive mind and the general pool of content through Facebook posts, tweets, upvoting/generating reddit content, uploading YouTube videos, Yelping, reviewing Uber drivers, running fan sites, message boards, movie review blogs, and a thousand other ways. All of this is done for free, because that is how it started out–people simply jumping on the internet and doing things.

As the context that this free content was delivered into grew more sophisticated, however, tremendous profits started being derived from it. Lanier asks why the people supplying the information that a small cadre of firms he dubs Siren Servers combs through for huge value haven’t shared in the massive upswing in value taking place online.

The familiar saying underlying Web culture is that information and content “want to be free”–Lanier suggests that, while this may be true, the human beings from whom that information and content originate should not want to work for free. This bracingly sane idea is Lanier’s concept of “humanistic information economics.” Information wants to be free, but people want to be paid.

The base of digital value is as vast as the population, yet the actual money is filtered through the Siren Servers, leaving the real creators of value out in the cold. Lanier asks us to reconceive of what kind of behavior merits monetary compensation. Why should only the firms that have figured out how to sell the value offered in staggering volumes by the masses have all the money, while unemployment and underemployment grow, the labor force shrinks, and hopelessness reigns?

We need to reconceive of what value is, as the traditional economy is so obviously devoid of it, while the new economy has such an obscene overabundance of it, the overwhelming majority of which is divorced from monetization.

We have enabled advertisers to specialize their outreach to us in ways that Don Draper could only dream about. Lanier’s book is filled with galling examples of how Siren Servers like Facebook and advertising technology are being leveraged to tailor consumer experiences directly to you, like “differential pricing.” This is the practice of using algorithms, fed by information obtained about you through your Facebook activity, to judge how much you are able and willing to spend on an item. Someone else buying the exact same item online, but whose digital footprint indicates they are less inclined and able to pay more, will be charged less.
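A toy sketch of the idea is below; the profile fields, scoring weights, and prices are entirely invented for illustration and are not drawn from Lanier’s book or any real ad-tech system:

```python
# Entirely hypothetical sketch of "differential pricing": two shoppers see different
# prices for the same item, based on a crude willingness-to-pay score inferred from
# their online footprint. Fields, weights, and numbers are invented for illustration.

BASE_PRICE = 100.00

def willingness_score(profile: dict) -> float:
    """Crude 0-1 score from a (hypothetical) digital footprint."""
    score = 0.0
    score += 0.4 if profile.get("recent_luxury_purchases", 0) > 0 else 0.0
    score += 0.3 if profile.get("zip_code_income_tier") == "high" else 0.0
    score += 0.3 * min(profile.get("brand_page_likes", 0), 10) / 10
    return min(score, 1.0)

def quoted_price(profile: dict) -> float:
    """Same item, different price: up to a 25% markup for 'eager' buyers."""
    return round(BASE_PRICE * (1 + 0.25 * willingness_score(profile)), 2)

shopper_a = {"recent_luxury_purchases": 3, "zip_code_income_tier": "high", "brand_page_likes": 8}
shopper_b = {"recent_luxury_purchases": 0, "zip_code_income_tier": "low", "brand_page_likes": 1}

print(quoted_price(shopper_a))  # 123.5  -- judged able and willing to pay more
print(quoted_price(shopper_b))  # 100.75 -- judged less so
```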

The current economic outlook is that the dignity resulting from having money should go only to those cunning enough to be successful predators, leeching off the digital information offered by the public. The masses are merely low-level actors providing ever-increasing opportunities to be exploited.

The humanism in Lanier’s thesis is that people who share information that could come only from them, even if it is unsolicited, are contributing to the overall pool of value, and should be compensated. Money should not only accrue to those who find ways to exploit, but to those who create of their own volition on their own time.

Lanier’s concrete suggestion for how humanistic information economics would actually work hinges on the idea of two-way linking, deriving from pioneering but insufficiently influential technologist Ted Nelson. A direct path would always be present between an originator of even the most trifling bit of content and a firm that utilizes it for a potentially monetizable practice.

This would also require a reconception of how money accrues to people. The way we spend money, dribs and drabs throughout the day and the week, with occasional big purchases every few months or so, would also be how money comes in to us. For each bit of content or information we offer online that is used to make ad tech algorithms more robust, we would receive a micropayment.
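Here is a minimal sketch of how such a system might keep score, assuming Nelson-style two-way links between each piece of content and every use of it. The class, the per-use rate, and the names are hypothetical; Lanier does not specify an implementation:

```python
# Minimal, hypothetical sketch of two-way linking with micropayments: every use of a
# piece of content keeps a link back to its originator, and each use credits that
# originator a tiny amount. Names and the per-use rate are invented for illustration.

from collections import defaultdict

RATE_PER_USE = 0.002  # dollars credited per commercial use (hypothetical rate)

class ContentLedger:
    def __init__(self):
        self.origin = {}                    # content_id -> originator (the two-way link)
        self.balances = defaultdict(float)  # originator -> accrued micropayments
        self.uses = defaultdict(list)       # content_id -> firms that have used it

    def register(self, content_id, originator):
        self.origin[content_id] = originator

    def record_use(self, content_id, used_by):
        """A firm uses the content; the link back credits the originator."""
        self.uses[content_id].append(used_by)
        self.balances[self.origin[content_id]] += RATE_PER_USE

ledger = ContentLedger()
ledger.register("photo-123", originator="alice")
for _ in range(5000):  # say the photo helps tune an ad-targeting model 5,000 times
    ledger.record_use("photo-123", used_by="AdTechCo")

print(f"alice has accrued ${ledger.balances['alice']:.2f}")  # $10.00, in dribs and drabs
```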

Value should not be viewed as it is traditionally, in terms of great big spurts, but rather in the steady, accretive ways that take place every day. In his novel The Man Without Qualities, Robert Musil conceives of a human being, and by extension a modern city filled with people, as a machine with countless parts constantly in motion generating an incredible amount of energy: “A man going about his business all day long expends far more muscular energy than an athlete who lifts a huge weight once a day. This has been proved physiologically, and so the social sum total of everybody’s little everyday efforts, especially when added together, doubtless releases far more energy into the world than do rare heroic feats. This total even makes the single heroic feat look positively miniscule, like a grain of sand on a mountaintop with a megalomaniacal sense of its own importance.” This is how energy, value, and force are really generated, so it’s about time we use our amazing new technology to monetarily validate that fact, rather than praising the Siren Servers that, in the nascent days of digital economics, figured out how to siphon value.

The work of offering valuable information that tech spy companies make billions from should be treated as the source of wealth that it is. We need to evolve past our culture that lauds self-aggrandizing, heroic, ambitious billionaires, since the value and power they are capable of pales in comparison to the little bits of energy generated by each and every person throughout the day.

More of our activity should be monetized, not less, which is both an admission that commerce is the soul of our society, and an appeal to offer chances for being paid to far more people than now. This is the unique blend of capitalism with leftist populism in Lanier’s thinking.

The core of this process is in rethinking the meaning of ‘rights.’ As of now, civil rights, our rights simply as humans qua humans, are what we mostly mean when we think of rights. This was more appropriate in earlier phases of socioeconomic development, before the current phase of neoliberal capitalism, in which people have limited value simply for being people and matter more as vectors through which money can be generated.

Because of this, commercial rights need to become as ubiquitous and vocally defended as civil rights, indeed perhaps more so. Commerce is the soul of capitalism, and having dignity in a capitalistic system requires comprehensively detailed commercial rights for each and every citizen. Each aspect of a person’s life is fodder for generating revenue for some ad tech company, so a person should be protected and compensated for every instance of this happening. Lanier excels at giving examples of this: “If you are tracked while you walk around town, and that helps a government become aware that pedestrian safety could be improved with better signage, you’d get a micropayment for having contributed valuable data.”

Who Owns the Future? does seem longer than it needs to be, at 367 pages, the last eighty or so of which devolve into a hodgepodge of related thought fragments. This latter part of the book is very much Lanier throwing ideas out into the world for us to benefit from, yes, but it also seems like he is trying to sharpen his own grasp of his thesis. Many of these later sections read like notes, but in fairness he is first and foremost a computer scientist, and for one of those, he is a great writer.

In his second book, he shows all the signs of maturing as a thinker that you would want. His first book, You Are Not A Gadget, while great, was more focused on individual cases of how specific internet tendencies were narrowing the realm of freedom, expressivity, and creative potential. This book, while still filled with specific, concrete examples to build its case, exhibits more comfort with big picture thinking, which is a welcome sign.

New Pathways in Solving the Riddle of Consciousness


Nothing is at once so near and familiar to us, yet so completely alien, as consciousness. It’s such an enormous topic that it’s not even clear if it makes more sense to ask what it is, or what it isn’t.

As society becomes more and more enlightened, and science shines the light of reason into ever more previously darkened corners of ignorance, phenomena become reduced to their physical bases.

But the one major area that has proven resistant to this reductionist euphoria has been consciousness. There is a whole movement in contemporary philosophy devoted to the idea that the roots of consciousness are, and always will be, a total mystery. Appropriately enough, these people are called New Mysterians.

Reductionists, on the other hand, are convinced that consciousness is a phenomenon occurring in nature, and that like any other natural phenomenon, it can be explained through a combination of logic and natural elements and properties. For reductionists, consciousness is just the physical workings of neurons; it consists of nothing over and above the physical processes underlying it.

In a classic essay called “What Is It Like to Be a Bat?”, philosopher Thomas Nagel states that an organism has consciousness, or sentience, if there is something that it is like to be that organism. This is the subjective character of experience.

Sentience is the ability to experience sensations: to be the kind of thing that sensations happen to, and are for. A subject is for itself, because it is that to which objects are presented, the viewer watching the movie of life’s objects. It is present to itself. An object, by contrast, is there for the subject; in itself, it is not there for itself.

A bat is a mammal, but one that is very alien to us, and very abundant in the world. Bats are nocturnal and rely far less on vision than we do, using echolocation to get around. They emit high-pitched shrieks, which bounce off the environment and come back to their ears, guiding them around their terrain. They locate things by following the echoes of their own shrieks.
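The geometry behind it is simple, even if the experience is alien. Here is a quick sketch of the round-trip timing (the echo delay is an invented example; the speed of sound is the standard value for air):

```python
# Echolocation ranging: distance = (speed of sound * round-trip delay) / 2.
# The 20-millisecond delay is an invented example; the speed of sound is the
# standard value in air at about 20 degrees C.

SPEED_OF_SOUND_M_S = 343.0   # meters per second
echo_delay_s = 0.020         # time between the bat's call and the returning echo

distance_m = SPEED_OF_SOUND_M_S * echo_delay_s / 2
print(f"Object is ~{distance_m:.1f} meters away")   # ~3.4 meters
```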

This is a completely alien way of using senses to us! No doubt these experiences have a subjective character to them, but we can have absolutely no idea what this subjective character is.

The qualities of a bat’s experience are completely alien to humans. Scientists have dissected bat brains, but have no idea what the inner character of bat experience is like. This shows us that knowledge of the brain yields no knowledge of consciousness. Nothing in the bat’s brain gives us a clue as to what its experience of sensations is like.

But maybe biology alone is not the answer. Recent developments at the nexus of neuroscience, computer processing, and quantum mechanics suggest a possible way forward. MIT theoretical physicist Max Tegmark has laid the foundations for conceiving of consciousness precisely, as a kind of information processor.

Information always appears in consciousness in unified form. This error-free processing suggests that the system of consciousness has built-in error-correction codes, allowing as much as half the data to be reconstructed from the rest, so that the finished product of consciousness is always unified.

Tegmark notes that this automatic error-correcting capability exists in things called “Hopfield neural nets.” But Tegmark calculates that a Hopfield net about the size of the human brain can only store 37 bits of information. The missing ingredient allowing a chunk of wetware the size of a human brain to store the immense amount of data we can is what will explain consciousness. There has to be some kind of quantifiable factor allowing brains, with their vast but finite neural capacities, to store so much information.
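For readers curious what a Hopfield net actually is, here is a minimal, standard textbook implementation (not Tegmark’s model): it stores a few binary patterns in a weight matrix and then recovers one of them from a corrupted cue, which is exactly the kind of error-correcting behavior described above.

```python
import numpy as np

# Minimal textbook Hopfield network: store binary (+1/-1) patterns via Hebbian
# learning, then recover a stored pattern from a corrupted cue. Illustrates the
# error-correcting behavior discussed above; this is not Tegmark's model.

rng = np.random.default_rng(0)
N = 100                                        # number of "neurons"
patterns = rng.choice([-1, 1], size=(3, N))    # three random patterns to memorize

# Hebbian weights: sum of outer products, with no self-connections
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(cue, steps=10):
    """Repeatedly update the state until it settles (or for `steps` iterations)."""
    state = cue.copy()
    for _ in range(steps):
        new_state = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Corrupt 25% of one stored pattern, then let the network clean it up
target = patterns[0]
noisy = target.copy()
flip = rng.choice(N, size=N // 4, replace=False)
noisy[flip] *= -1

recovered = recall(noisy)
print("Bits wrong before recall:", int(np.sum(noisy != target)))      # 25
print("Bits wrong after recall: ", int(np.sum(recovered != target)))  # usually 0
```

With only three patterns stored across a hundred units, the cleanup is essentially perfect; Tegmark’s point, as described above, is that this kind of capacity scales far too slowly to account for the amount of information a brain-sized system actually handles.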