This Is Your Brain on Architecture – CityLab

Sarah Williams Goldhagen was the architecture critic for The New Republic for many years, a role she combined with teaching at Harvard University’s Graduate School of Design and elsewhere. She is an expert on the work of Louis Kahn, one of the 20th century’s greatest architects, known for the weighty, mystical Modernism of buildings like the Salk Institute in La Jolla, California, and the Bangladeshi parliament in Dhaka.

Several years ago, Goldhagen became interested in new research on how our brains register the environments around us. Dipping into writing from several fields—psychology, anthropology, linguistics, and neuroscience—she learned that a new paradigm for how we live and think in the world was starting to emerge, called “embodied cognition.”

“This paradigm,” she writes in her magisterial new book, Welcome to Your World: How the Built Environment Shapes Our Lives, “holds that much of what and how people think is a function of our living in the kinds of bodies we do.” Not just conscious thoughts, but non-conscious impressions, feedback from our senses, physical movement, and even split-second mental simulations of that movement shape how we respond to a place, Goldhagen argues. And in turn, the place nudges us to think or behave in certain ways.

The research led Goldhagen to science-based answers for previously metaphysical questions, such as: why do some places charm us and others leave us cold? Do we think and act differently depending on the building or room we’re in? (Spoiler: yes, we do.)

Architects intuited some of these principles long ago. As Kahn once noted of the monumental Baths of Caracalla in Rome, a person can bathe under an eight-foot ceiling, “but there’s something about a 150-foot ceiling that makes a man a different kind of man.” As the peer-reviewed studies mount, however, this new science of architecture and the built environment is destined to have a profound effect on the teaching and practice of design over the next generation.

CityLab talked with Goldhagen about the book and why so much architecture and urban design falls short of human needs.

Your book is about how we experience buildings and places through “embodied cognition.” How did you first learn about it?

I fell in love with architecture the way most people fall in love with architecture, which is that I went to places that just astonished me and moved me. And so from very early on I sort of wondered: why does it do that? The arts have this effect on you, but architecture is so much more profound, I find, than any of the other arts.

At the time, there really was no intellectual paradigm for thinking about these questions. And then about 15 years ago, my husband handed me a book by someone who had written a previous book he had really liked. The title of the book was Metaphors We Live By. It’s co-authored by George Lakoff, who’s a cognitive linguist, and Mark Johnson, who’s a philosopher. The basic argument is that much of how our thought is structured emerges from the fact of our embodiment. And many of the ways those thoughts are structured are metaphorical.

There was an immediate light bulb: “Oh, people live in bodies, bodies live in spaces.” I started reading more and more about it and realized [that] what Lakoff and Johnson had figured out was in the process of being confirmed through new studies in cognition that had been enabled by new technologies. We’ve had in the last 20 years a kind of ocean of new information about how the brain actually works. Most of that was confirming the precepts of embodied cognition, and also going beyond it in certain ways, showing how multisensory our apprehension of the environment is.

Another thing is differentiated, non-repetitive surfaces. [The psychologist and author] Colin Ellard did a study of how people respond: He basically put sensors on people and had them walk by a boring, generic building. Then he had them walk past something much more variegated with more ways to [engage] visually and therefore motorically. He found that people’s stress levels, measured by cortisol, went up dramatically when they were walking past the boring building.

The reason I emphasize non-conscious [cognition] is because most people are very bad at knowing why we’re feeling or thinking the things we are. You could be walking past that boring building and you ascribe your stress to a bad conversation you had with someone the other day. But cognition is embodied, and you’re standing next to this soul-desiccating place, and that’s what’s going on.

The book is peppered with the findings of scientific research on how the environment shapes us and our lives. The brains of London cab drivers actually change after they memorize the city’s geography. The design of a school can account for up to 25 percent of a child’s rate of learning. Why haven’t these findings upended architectural education?

I had architectural education very much in mind when I was writing the book. I taught in architecture schools for 15 years, good ones. The most obvious part of the answer is that architectural training is really, except for the technical and engineering part of it, based in the Beaux-Arts design tradition. Nobody’s really looking at the sciences.

Number two, the information which I draw in the book to construct this paradigm of how people experience the built environment comes from a lot of different disciplines. Cognitive neuroscience, environmental psychology, evolutionary psychology, neuroanthropology, ecological psychology. In most cases, the studies that I was looking at and ended up finding most useful were not necessarily about the built environment. It was up to me to look at a study on how people respond to water surfaces versus mirrors, and then figure out what that meant for the design of the built environment.

Another reason is that in the academy, the effect of poststructuralism and identity politics has been to hammer into people’s heads the notion of cultural relativism: “You can’t possibly say things about how people experience the world because it’s all culturally constructed, socially constructed; it differs by gender, by locale.” And so the other dimension was that talking about individual experience, even if it’s related to social experience, but from an embodied-cognition point of view, meant that you were apolitical. Because you were talking about something very subjective and individual. So it was kind of forbidden territory.

The embodied-cognition approach is universalizing, although you make it clear that any design guidelines arising from it leave room for different social and cultural responses. Is it easier, or harder, to take this approach now than it would have been 10 or 15 years ago?

I don’t think it’s coincidental that I’m not in the academy and I wrote this book. I don’t want to sound like I’m attacking architectural education because there are plenty of people out there doing great things. This book basically started with an essay on Alvar Aalto and embodied cognition and metaphors, in a book edited by Stanford Anderson. I presented this when I was still teaching at Harvard, and people went nuts. They just went crazy. “Wait a minute, you’re making all these universalist claims!”

My response to that was, and remains, “Sure, there are a lot of things that are socially constructed. All you have to do is read my earlier work; it’s not like I disagree with those ideas. The fact is that humans live in bodies, and brains work in certain ways.”

There’s this dichotomy between those who [think] about architecture in social and political terms, and those who [think] about subjective experience, and never the twain shall meet. One of the things the book does is basically dissolve that opposition. The critical link is the work of this [psychologist] Roger Barker, who had researchers assigned to kids. [The researchers] followed them around and took notes. Breakfast, school, chess club, ballet. The conclusion was they could tell more about the kids by looking at where they were than by looking at who they were. Their individual psychology mattered a lot less in terms of their experience and behavior than the environments they were in.

So there isn’t this opposition between looking at it as a social construct versus experiential construct. It’s all the same thing. It’s a continuum.

One thing I kept thinking while reading the book was how little agency we really have. Have you gotten pushback on that? I can imagine some people saying, “No way is the environment shaping my thoughts to this degree.”

If people thought that, they didn’t say it to me. I think people are more ready to accept it than they were 10 or 15 years ago. The mind-body connection has become so apparent. We know now, for example, that how we hold our body affects our mood. If you’re depressed and your shoulders are hunched forward, you’ll actually help yourself if you straighten up.

The second thing is behavioral economics, which I think has been really key, and has been adopted into policy. People don’t make decisions logically. They make decisions based on association and fallacious heuristics. I think that has paved the way for people to recognize, “I don’t have as much agency as I thought I did.” The paradox is, with a book like this, I’m hoping to enhance people’s agency with their awareness of it.

You argue that “enriched environments” should be a human right, included in the UN’s Human Development Index. What has to happen next for human-centered design to become not a luxury, but the norm?

Well, a lot. One of the reasons the book is targeted to a general audience is that basically, we need a real paradigm shift in how we think about the built environment. It’s kind of analogous to the paradigm shift that happened in the 1960s and the way people thought about nature.

When I was a really young kid, nature was nature. It was forests, trees, lakes, rivers. Then people began to use the word “environment.” It was a political and social construct, and emphasized the interrelatedness of all these different components within nature. That was a response to pesticides, air pollution, and so on. Now, kids get education in the environment from the time they’re in first grade. They start learning about climate change, visit waste treatment plants. That’s the kind of paradigm shift that needs to happen about the built environment. Then it suddenly becomes of general public health importance.

What concretely needs to happen: One, architectural education. Two, real-estate development. Three, building codes, zoning codes, all these things need to be reviewed according to these kinds of standards. Four, architects need to not be so skittish in thinking about human experience and learn more about it. It’s a much larger problem than just, “Architects should do better.” It’s not a professional disciplinary problem, it’s a larger social problem. We also need more research.

I was at a book event where Richard Roberts [a former New York City housing commissioner] said, “I’m going to recommend to every public official I know that they read this book.” I’ve had a lot of architects tell me that they gave the book to clients.

That seems smart.

Yeah, no joke. We need general education about the built environment that starts very early on. So there are a lot of things that need to change. But they can.

Your revolution was dumb and it filled us with refugees: A Canadian take on the American Revolutionary War | National Post

To be clear: Canada loves you, United States. You buy our oil, you made Drake a superstar and you haven’t invaded us for 205 years. As Poland and Ukraine keep reminding us, we really couldn’t ask for a better superpower neighbour.

However, just because it all worked out doesn’t mean that starting a brutal war over a tax dispute wasn’t a bit of an overreaction. As the National Post’s own Conrad Black wrote in a 2013 history of the United States, the Founding Fathers “do not deserve the hallelujah chorus ululated to them incessantly for 235 years.”

Canada had to fend off an invasion during the Revolutionary War, after all, so consider us qualified to deliver this Independence Day buzzkill.

The colonists weren’t fighting a “tyrannical” king so much as they were fighting one of the world’s most democratic nations

The Declaration of Independence places sole responsibility on Britain’s George III for establishing what Americans called “an absolute tyranny over these states.” But George III wasn’t an autocrat. While his power was much greater than the current Queen’s, he had an elected House of Commons and a prime minister to check him. Parliamentarians were free to heckle British war plans, and members of the British press (the freest in the world at the time) openly sided with the colonists. British democracy was far from universal, of course, with voting barred to women, Catholics and the lower classes — and with representation ridiculously concentrated in rural areas. But it was not a far cry from the soon-to-be-independent United States, whose first presidential election would only see about six per cent of the population eligible to vote.

The war did involve an autocratic tyrant though … on the colonists’ side

Speaking of autocrats, the American rebels counted one of the world’s most notorious as their best friend in Europe. Louis XVI, the absolute monarch of France, wholeheartedly backed the colonists’ cause as a way to embarrass the English. France smuggled weapons and advisers to the rebels, dispatched thousands of troops to the colonies and ordered its navy to travel the world and harass British efforts to supply their North American armies. Historians generally agree that, without French support, the British would likely have crushed the American Revolution. Meanwhile, the incredible cost of the American proxy war helped to lead an unstable France ever closer to financial ruin, revolution and, ultimately, the execution of Louis. So in effect, the United States owes its existence to an impulsive dictator who ran his country into the ground so hard that he got himself beheaded.

American colonists had sparked a world war … and then refused to help pay for it

The American Revolution was largely sparked by colonial opposition to new taxes. But Great Britain’s bid to get some American revenue makes a bit more sense when one considers that the colonies had just bungled the Brits into a wildly expensive world war. In 1754, a 22-year-old Virginia militia officer named George Washington took a group of men into what is now Pennsylvania to work out a territorial dispute with some nearby French-Canadians. Instead, the inexperienced Washington ambushed a French-Canadian patrol, accidentally executed the patrol’s commander and ended up sparking the Seven Years War. The resultant worldwide conflict — which included Great Britain’s conquest of Quebec — drove London to the edge of bankruptcy.

Canada’s plan to recognize native land and respect Catholics was deemed “intolerable” by colonists

In 1774, the British government introduced the Quebec Act, which allowed French-Canadians in British-conquered Quebec to freely practise Catholicism. Crucially, the act also extended the borders of Quebec down to what is now Ohio and kept in place a large band of “Indian” territory on the western edge of the American Colonies. It was a remarkably liberal document for the time, but anti-Catholic colonists balked at it for promoting “Popery” and for banning their hoped-for expansion into indigenous land. The Declaration of Independence, in fact, directly accused King George of kowtowing to “merciless Indian Savages.” The Quebec Act was soon cited by colonists as the worst of the so-called “Intolerable Acts,” a series of punitive measures that ultimately turned the dispute with Great Britain into a shooting war.

Revolutionary America had a pretty serious terrorism problem

In the recent book Scars of Independence, historian Holger Hoock dismisses modern depictions of the American Revolution as rooms full of men in powdered wigs discussing liberty. It was actually a “profoundly violent civil war,” he writes. One largely forgotten aspect of the war was how much the Patriot cause was driven by terroristic mobs prepared to torture judges, customs officials, newspaper editors or anyone else seen to be supporting British rule. Pro-government officials had their homes burned, their horses poisoned and many were snatched out of their beds in the middle of the night, stripped naked and subjected to mock drownings or tarring and feathering. Accounts of these outrages help explain why the conflict escalated so quickly. When hotheaded Brits backed George III’s call to swiftly put down colonial rebels, it wasn’t because they were incensed at a lack of tea tax revenue — it was because they feared that their American lands had fallen to mob rule.

It’s a little odd when a “struggle for liberty” fills Canada with refugees

Between 60,000 and 80,000 Loyalists fled to Canada following American Independence and lost everything when their property was seized by the new United States. Revolutions commonly prompt an exodus of refugees. Just in the past century, the Russian Revolution, Cuban Revolution and the Zanzibar Revolution, among others, all spawned vast refugee streams, some of which ended in Canada. But unlike the communist and vengeance-minded architects of those revolutions, the Americans were ostensibly fighting for a free, pluralistic democracy where “all men are created equal.” In hindsight, it’s pretty bad optics that vast columns of families felt the need to seek actual freedom and equality elsewhere. “With malice toward none, with charity for all” would have to wait for another civil war.

Of all the countries to obtain independence from Britain, only the U.S. and Ireland chose to do it violently

Roughly 60 independent countries around the world were once counted as British colonies or mandates. Of those, only the United States and the Republic of Ireland gained their independence as a direct result of political violence. Compare that to Spain, which violently resisted the departure of almost every one of its overseas colonies. Great Britain wasn’t afraid to get its hands dirty in colonial affairs, but London could be convinced to tolerate a colony’s peaceful transition to independence — particularly when said colony was filled with white English-speakers. Which is to say that if Americans truly wanted freedom, there were lots more options on the table than simply taking a shot at the first redcoat.

What Did Independence Day Mean to Southerners About to Secede? | History | Smithsonian

In the cooling evening air, Charleston, South Carolina’s notable citizens filed into Hibernian Hall on Meeting Street for the traditional banquet to close their Fourth of July festivities. The year was 1860, and the host, as always, was the ’76 Association, a society formed by elite Charlestonians in 1810 to pay homage to the Declaration of Independence.

The guest of honor was one of the city’s most beloved figures, William Porcher Miles, Charleston’s representative in the U.S. Congress in Washington. A former professor of mathematics at the College of Charleston, Miles had won his city’s heart with his heroic efforts as a volunteer nurse to combat an epidemic of yellow fever on the coast of Virginia. He was not a planter, and not even a slaveholder, but he believed in the Constitution and in the slave master’s rights sealed by that compact—and he had come to believe that America was best split into two.

Miles wasn’t happy when, amid the clinking of glasses, a poem approved by the ’76 Association was read out loud in the hall:

The day, when dissevered from Union we be,
In darkness will break, o’er the land and the sea;
The Genius of Liberty, mantled with gloom,
Will despairingly weep o’er America’s doom…

It was just a poem, mere words, sounded with a muted note of elegy. But there was no such thing as “mere words” in the blistering heat of this Charleston summer, with war about to erupt. Words, in 1860, were weapons. And these particular words struck a blow at an equation that secessionists like Miles had labored to forge between their cause and the broader American cause of freedom. This verse presented a quite different idea—the notion, heretical to the secessionist, that the sacred principle of liberty was bound up with Union, with the bonds linking together all of the states, and all of the people of the nation, from Maine to Texas.

So it went for Charleston in this year, beset with a complicated, even excruciating welter of emotions on the question of secession. As determined as so many in Charleston were to defend their way of life, based on slavery, under sharp challenge from the North, still there was room for nostalgic feeling for the Union and for the ideals set forth in the Declaration.

Independence Day in Charleston had begun as customary, with a blast of cannon fire from the Citadel Green at three o’clock in the morning. Roused from their slumber, Charlestonians made ready for a day of parades by militia units in colorful uniform. In the 102-degree heat, the men of the German Artillery, sweltering in their brass-mounted helmets, could only be pitied.

Surely, the town’s secessionists thought, it would be a fine occasion to trumpet their ripening movement. They would celebrate Independence indeed—the coming liberation of the South from the clutches of the nefarious Union. As odd, even bizarre, as this might seem today, Charleston’s secessionists sincerely felt they were acting in a hallowed American tradition. They saw themselves as rebels against tyranny, just like their forefathers who had defeated the British to win America’s freedom some 80 years before. In this instance, the oppressor was the Yankee Abolitionist in league with the devious Washington politician, together plotting to snatch from the South the constitutional right of an American, any American, to hold property in slaves.

By the summer of 1860, these self-styled revolutionaries seemed to be winning their improbable campaign. Back in the spring, at the Democratic National Convention, held in Charleston that year, Charlestonians packed the galleries and cheered wildly when radical Southern Democrats walked out of Institute Hall in protest over the refusal of Northern Democrats to agree to a party plank giving the slaveholder an unimpeded right to operate in western territories like Kansas and Nebraska. The rebel delegates proceeded to establish their own separate “Seceding Convention,” as The Charleston Mercury called this rump group. In its comment hailing the uprising, The Mercury, a daily bugle call for secession, declared that, “The events of yesterday will probably be the most important which have taken place since the Revolution of 1776. The last party, pretending to be a National party, has broken up; and the antagonism of the two sections of the Union has nothing to arrest its fierce collisions.” A Northern reporter strolling the moonlit streets wrote of the occasion that “there was a Fourth of July feeling in Charleston last night—a jubilee …. In all her history, Charleston had never enjoyed herself so hugely.”

In this electric atmosphere, public expressions in favor of the Union could scarcely, and maybe not safely, be heard. An abolitionist in Charleston risked being tarred and feathered. Horace Greeley’s New York Tribune, America’s largest paper by circulation and a standard-bearer for abolition, was banned in the city.

It was all the more remarkable, then, that the poem confessing to despair over the Union’s impending collapse was read for all to hear at the banquet at Hibernian Hall on July 4. Rep. Miles could hardly let a handwringing cry for Union stand unchallenged. He held his tongue at the banquet, but five nights later, at a political meeting of town folk held at the Charleston Theatre, up the street from Hibernian Hall, he gave his constituents a tongue lashing. “I am sick at heart of the endless talk and bluster of the South. If we are in earnest, let us act,” he declared. “The question is with you. It is for you to decide—you, the descendants of the men of ’76.”

His words, and many more like them, would win the summer of 1860 for his camp. Charleston’s passion was for rebellion—and the banquet poem turned out to be a last spasm of sentiment for the Union. Repulsed by such feelings, the Charleston merchant Robert Newman Gourdin, a close friend of Miles, organized rich Charlestonians into a Society of Earnest Men for the purpose of promoting and financing the secession cause. When an Atlanta newspaper mocked Charleston’s insurgents as all talk, no action, a member of the group responded in The Mercury that the Earnest Men would “spot the traitors to the South, who may require some hemp ere long.”

True to their identification of their undertaking with the American Revolution, the secessionists also formed a new crop of militia units known as Minute Men, after the bands that gathered renown in colonial Massachusetts for taking on the British redcoats. Recruits swore an oath, adapted from the last line of Jefferson’s Declaration of Independence, to “solemnly pledge, OUR LIVES, OUR FORTUNES, and our sacred HONOR, to sustain Southern Constitutional equality in the Union, or failing that, to establish our independence out of it.”

In November, with the election to the presidency of Abraham Lincoln, the candidate of the antislavery Republican Party, Charleston went all in for secession. Federal officeholders in the city, including the federal district court judge, resigned their positions, spurring The Mercury to proclaim that “the tea has been thrown overboard—the revolution of 1860 has been initiated.”

Charleston’s “patriotic” uprising ended in ruin—ruin for the dream of secession; ruin for the owner of human chattel, with the Constitution amended to abolish slavery; ruin for the city itself, large parts of which were destroyed by federal shells during the Civil War. The triumph, won by blood, was for the idea expressed ever so faintly by the men of ‘76 at Charleston’s July Fourth celebration of 1860, and made definitive by the war—the idea that liberty, and American-ness, too, were inextricably and forever tied to union.

Take a Look at the Patents Behind Sliced Bread | Smart News | Smithsonian

Some products are so ubiquitous that it can feel as if they were never invented at all.

Take sliced bread. Around 130 years ago, the idea of buying a pre-sliced loaf would have been met with confusion, writes Jesse Rhodes for Smithsonian Magazine. “In 1890, about 90 percent of bread was baked at home, but by 1930, factories usurped the home baker,” Rhodes writes. But the two breads weren’t the same thing–”factory breads were also incredibly soft,” she writes, making them difficult to slice properly at home with a bread knife.

Since breadmaking had moved to factories, why not bread slicing as well? On this day in 1928, in Chillicothe, Missouri, the Chillicothe Baking Company became, in the words of its plaque, “The Home of Sliced Bread.” It was the place where the bread-slicing machine was first installed, wrote J. J. Thompson for Tulsa World in 1989. Thompson was speaking with Richard O. Rohwedder, son of the machine’s inventor, Otto F. Rohwedder, a jeweler who had started work on the bread-slicing project years before.

The Rohwedder family all went down to the factory to see the bread-slicing machine on its first day, Richard Rohwedder said. They brought the slicer to the factory, “and I fed the first loaf of bread into the slicer,” he recalled.

The patent for the bread slicing machine explains how it worked: the machine moved the bread into the slicer and then a series of “endless cutting bands” sliced the loaf before moving it along to where it could easily be packaged by a specially designed bread wrapping machine–another patent of Rohwedder’s.

The bread-wrapping machine was just one of a number of patents for which Rohwedder was responsible: these included a cardboard bread holder that shrank as the loaf did; a retail display rack for bread; and structural improvements like an improved conveyor belt for getting bread in and out of the slicer.

Among the other bread-related products Otto F. Rohwedder invented to support the bread-slicing machine was a bread holder that shrank as the loaf did. It responded to concerns that pre-sliced bread would inevitably go stale before the consumer wanted to eat it. (U.S. Pat. No. 1,816,399)

Rohwedder’s original invention of the slicing machine dated back to 1917, writes author Aaron Bobrow-Strain, but he had worked to refine and re-refine the idea in the intervening time. “Many bakers actively opposed factory slicing,” he writes, and the inventor was almost ready to throw in the towel.

The owner of the Chillicothe Baking Company, the man who first took a chance on the machine, was Frank Bench, a friend of Rohwedder’s. Bench’s bakery was near bankruptcy, so he gambled on the idea even though most bakers thought pre-slicing would make the bread stale.

“The results astounded all observers,” Bobrow-Strain writes. Bench’s bread sales soon skyrocketed by 2000 percent, and mechanical slicing quickly spread around the country. “By 1929, an industry report suggested that there was practically no town of more than twenty-five thousand people without a supply of sliced bread,” he writes.

“I remember the phone ringing day and night, all the time, with bakers ordering slicers,” Richard Rohwedder said.

Rohwedder’s booming business was hit by the Great Depression, and he was forced to sell his patent rights to a larger company, which kept him on as staff. But still—he had the satisfaction of knowing he was the man who invented sliced bread.

The Curious History of the 911 Emergency System

Of all the things William Shatner has done in his career—captain of the starship Enterprise, veteran police officer, totally-game muse for Ben Folds, fairly good tweeter for being 86 years old—the one thing that’s stuck with me the most is his years on Rescue 911, the early reality show that was America’s Most Wanted (mostly) without the crime and Unsolved Mysteries with the solutions laid out clear as day.

Part of this is my age—I grew up during a period when I would regularly see a show like this on CBS, and Star Trek meant Captain Jean-Luc Picard of the United Federation of Planets (’cause he won’t speak English anyway). But the thing that was great about this show is that it seemed to celebrate heroism rather than focus on villains, a rarity for this era of reality television. Sometimes the stories are relatively minor emergencies; sometimes, they’re a matter of life and death. But it wouldn’t have existed had it not been for the hard work that went into the 911 system less than two decades before.

And getting there wasn’t easy.


The thing about the United States is that it’s big, laid out confusingly, and full of municipalities big and small. And getting them all on the same page with something as simple as an emergency code is really friggin’ hard.

But clearly there was some need for such a system, because people were calling random places to get emergency service. A 1921 article from Public Service magazine highlights the complexities of the situation, noting that New York City’s Bellevue Hospital was fielding as many as 2,500 emergency calls per day.

And every city and municipality is different. A great way to think about this is with a modern-day example: The neighborhood-based online service EveryBlock is only in a few cities, while the much larger NextDoor is pretty much everywhere. The reason for this is a basic functional difference: Because EveryBlock relies on public data acquired from municipalities, it theoretically has to make deals with every single city in the country, which limits its scale.

Now just imagine the problems that patchwork of municipalities created in the days before 911. One example I spotted from a 1946 Washington Post article describing a woman whose apartment was on fire highlights the rigamarole of pre-911 emergency calls: “[S]he tried vainly to reach an operator by dialing the emergency number ‘311.’ In her excitement she forgot the fire control center’s number, Union 1122.”

In 1958, a New York City woman named Rosamund Reinhardt experienced the severity of the problem firsthand when she attempted to call in a fire at a nearby apartment. She dialed the easiest number she knew: 0, for operator.

That did not work very well; living in the busiest city in the country, she had to compete with everyone else dialing the operator at the time, meaning she kept getting dropped lines. In a New York Times letter to the editor, she noted that the awkwardness of the system had put her in danger.

“The reason it seemed so long to me was that I discovered the time I had wasted in telephoning was just enough to prevent my getting out of the apartment, as the hall was impassable due to smoke,” she recalled.

Clearly, she survived. But the situation raised an incredibly obvious question for her: “Would it not be possible to cut out the operator stage by dialing directly some prearranged emergency number?”

It was not an unusual concept. The United Kingdom, which had a three-digit emergency number in London as early as 1937—999, to be exact—had a much easier time making the whole thing work. The system, run by the post office, largely did what it was supposed to.

So why was this not the case for the United States? Simply, the ball hadn’t rolled quickly enough in that general direction.

The fire industry had long been debating the issues of how the telephone system would work during emergencies—in part because fire alarms relied on the system. A 1920 article in the Quarterly of the National Fire Protection Association, for example, highlighted a debate over whether to replace telegraph alarms with telephone alarms. And in the 1950s, some cities (notably Miami) had installed single-use physical telephones on city streets so that people could talk directly to the fire department if an emergency hit. (This setup, of course, led to lots of false alarms, along with a lot of controversy.)

But by the mid-1950s, officials from the International Association of Fire Chiefs (not the National Association of Fire Chiefs, as is sometimes cited, as that is not an organization) were starting to make the case for a single, national telephone number for the public to call in case of an emergency. It was the ball that rolled down the hill. Eventually.


To tell the story of the start of the 911 system—a phone call placed by Alabama House Speaker Rankin Fite on February 16, 1968, a little more than a month after AT&T officially designated the efficient-on-rotary-phones number—is to tell but half the story of the creation of the 911 system.

Because if a system for emergency treatment is to exist, it needs to work immediately. And in many parts of the country, it did not.

Few legal mandates existed, the first being the Warren-911-Emergency Assistance Act, a bill signed into law in California in 1973. It came about in no small part because legislator Charles Hugh Warren convinced then-governor Ronald Reagan that it was a good idea to create the law and pay for it with a small surcharge on phone bills—a tax, essentially, but one that Warren made a compelling argument for. It was a significant legislative action, perhaps the feather in the cap of Warren’s entire career, and it set the stage for many similar laws and regulations nationally.

But before that, municipalities were basically on their own in implementing emergency systems. And this led to less than ideal solutions.

To offer a frame of reference you’re familiar with: You know how the vehicle from Ghostbusters is a white hearse? That’s because vintage ambulances were hearses. And in some rural towns, the hearses from the funeral homes pulled double duty—and the funeral home employees didn’t have much in the way of instruction beyond “drive fast.” The EMT, as a concept, was still a few years off.

That doesn’t leave any room, of course, for much in the way of medical treatment before people got to the hospital—during a critical period. And of course, these rural areas were often not served by AT&T, so their local companies weren’t in a position to switch over to 911—as it was not legally mandated by any single national law. (Even big cities, for reasons that can be best described as bureaucratic in nature, initially held their noses at the thought of 911.)

Fortunately, there was a new charitable foundation on the block that was in a position to help cover the gaps.

Robert Wood Johnson II, whose father had started Johnson & Johnson, died just as the modern emergency apparatus was coming to life. The younger Johnson had turned the company into a corporate force (thanks in no small part to Band-Aids), so when the Robert Wood Johnson Foundation started in 1972, the medical world suddenly had a billion-dollar foundation ready to help.

The foundation’s earliest efforts, launched with a $15 million grant, focused on improving emergency services in poorly served rural areas. A side effect of the improvements was that these areas tended to pick up 911, or at the very least, a similarly centralized number. In a case study for Duke’s Center for Strategic Philanthropy & Civil Society, author Scott Kohler noted that the program’s effects were most obvious thanks to the uptake of 911:

Progress in the emergency response capabilities of the forty-four grant recipients was considerable. One of the most visible developments associated with the program was the expansion of the 911 emergency system. In 1973, only 11 percent of people in the areas supported by the Johnson Foundation program had access to 911, or some equivalent emergency phone number. By the program’s end, in 1977, 95 percent of them did. These outcomes were not mirrored in the nation as a whole. In 1979, only 25 percent of the U.S. population was covered by 911 or its like. Even today, the 911 system is available to only 85 percent of the population. But progress in the Foundation’s forty-four grant areas did serve as a model of the emergency phone number’s effectiveness. In this way, Foundation dollars were the spur that encouraged subsequent federal support.

The Johnson Foundation has remained a major advocate for 911’s growth, and isn’t afraid to speak about the results of the organization’s early charitable work.

“Back in the early 1970s, when the Foundation was young, we realized that the nation’s emergency response system wasn’t much of a system at all,” the foundation stated on its website. “Ambulance crews could not communicate with other ambulance crews. Nor could they share critical patient information with their destination hospitals. Only 12 paramedic crews existed in the entire country. Nothing like the 911 system existed at all.”

Now, the 911 system very much exists—as does the proper infrastructure to back it. (Congress designated 911 the official emergency number in 1999, three years after Rescue 911’s run had ended. Sorry, Shatner.)

The problem is that it has to keep pace with the way we use our phones.


I’d like to make an argument here. Perhaps William Shatner could help me out with this. He seems like a good guy, and his appearances and narration on Rescue 911 probably did more to help the cause of the national 911 system than a lot of other things tried.

Rescue 911 has not aired a new episode in more than 20 years. It has not aired in syndication in the United States in more than a decade. You cannot buy episodes of this show on DVD, either.

It makes sense: These are the kinds of stories that go out of date. The kid in that dog-stuck-in-wall video is probably 40. But since the show’s heyday, a lot of technology has become very common—specifically, smartphones.

Smartphones are infamously problematic for emergency dispatchers. They come with GPS capabilities that are hard to tap into—at least in the Uber-friendly way we’re used to. People, for obvious reasons, want to send text messages. And even landlines are more likely to rely on voice over internet protocol (VoIP) than on actual analog lines.

But 911 systems are often terribly out of date, and require expensive upgrades that can negatively affect the quality of service being offered to local communities.

“While the existing 9-1-1 system has been a success story for more than 30 years, it has been stretched to its limit as technology advances,” explains the National Emergency Number Association, which has been focused on this problem since at least 2000.

The problems that made rolling out the 911 number so difficult in the first place have not gone away. In fact, they’ve even gotten worse in some cases. Earlier this year, there were two spates of outages that affected areas covering millions of people.

The public won’t become aware of these problems just by dialing the number that’s been drilled into their brains over the last 40 years. Really, we need to inform them through the most universal medium we still have—television—of the complications the system currently faces.

Because that might energize the public to demand fixes and ongoing progress for what is truly a vital resource. It needs to keep up with the times. Maybe Rescue 911 could help.

Emergencies don’t go away with better technology.

When real patriots got Tetanus | National Museum of American History

“The coming stupendous holocaust, caused by the sky-rocket, the giant fire cracker and the toy pistol, that leaves an annual trail of disaster following the glorious 4th of July celebrations, is a subject for our serious consideration.”—Editorial, Pediatrics, Vol. 22, No. 6, June 1910

So let us consider it: on the third of July, 1776, the day before the adoption of the Declaration of Independence by the Continental Congress, John Adams wrote to his wife, Abigail, that the nation’s independence should be commemorated “with Pomp and Parade, with . . . Guns, Bells, Bonfires and Illuminations.” Americans gleefully followed Adams’s recommendation with an annual bounty of Independence Day fireworks and gunshots, but by the beginning of the 20th century our love of colorful explosions and loud bangs brought with it a scourge: a deadly infection that became known as “patriotic tetanus.”

Tetanus is caused by the Clostridium tetani bacterium, whose spores are ever-present in soil. Many of us have found ourselves in need of a tetanus shot after stepping on a rusty nail, which can introduce the disease. The bacterium pumps out toxins that cause intense muscle spasms, particularly in the jaw, from which the common name “lockjaw” derives. Left untreated, tetanus gruesomely kills 90% of its victims, producing spasms that can become strong enough to break bones. Fireworks could cause a tetanus infection when they exploded on the spore-laden ground, sending showers of dirty shrapnel deep into the skin of bystanders.

By the late 19th century, fireworks became intimately linked with political campaigns and patriotic holidays. An industry had developed to supply political candidates and patriotic revelers with everything from hats and bunting to Roman candles and shotgun-launched exploding shells. As a relatively inexpensive entertainment, fireworks could enthrall large crowds with lots of bang for the buck. However, now that children and adults had easy access to miniature explosives, tetanus cases increased. These Independence Day infections were so common that they became known as “patriotic tetanus,” “Fourth of July tetanus,” or “patriotic lockjaw.”

Along with the firecrackers, Roman candles, and miniature cannons we know today, July Fourth revelers also enjoyed firing blanks—pistol cartridges loaded with gunpowder but no projectile. Blank cartridges caused a majority of the cases of patriotic tetanus—so many, in fact, that investigators wondered if tetanus spores were somehow deposited within the cartridges at the time of manufacture. A thorough study concluded that this was not the case, and that, as with other fireworks, tetanus-laden dirt from the surrounding environment was simply thrust into the wound at the time of the injury.

The July Fourth infection rate became so grave that the American Medical Association (AMA) began tracking patriotic tetanus in an annual report. In 1903, 406 fatal cases of patriotic tetanus were reported. The AMA stressed that all penetrating wounds from fireworks and blank cartridges were potential cases of tetanus. From 1903 through 1909, patriotic tetanus was responsible for almost two thirds of the 1,531 July Fourth explosives-related deaths.

There was, however, hope on the horizon for those of us who love fireworks but hate tetanus. By 1900, tetanus victims could be treated with an injection of antitoxin (what we now understand as antibodies) against the tetanus toxins. As access to the antitoxin increased, the number of deaths from tetanus declined . . . just in time for World War I, where soldiers would encounter explosions of filthy shrapnel on a much larger scale.

Tetanus antitoxin was heavily used during World War I, drastically reducing the disease’s mortality compared to previous military engagements. By World War II, we had moved beyond treatment to a preventative vaccine, a tetanus toxoid that produced a protective immunity against future exposure. Because it was administered widely to soldiers in World War II, a disease that once ravaged armies only affected a dozen U.S. service members. Explosions, both in war and in patriotic celebration, were now safer—at least regarding tetanus.

In the U.S. today, tetanus is prevented by routine vaccination of infants, children, and adults. Additional boosters are administered after suffering a dirty, deep wound, depending on the patient’s dosage history. In 2009, only 19 cases (two of them fatal) were reported to the Centers for Disease Control and Prevention. Thanks to effective vaccines, your Fourth of July fun need not include patriotic lockjaw.

Three Very Modern Uses For A Nineteenth-Century Text Generator | Smart News | Smithsonian

Some of the algorithms that underlie commonplace technology today have their roots in the nineteenth century—like the Markov chain.

The brainchild of Andrey Markov—who was himself born on this day in 1856—Markov chains are a way of calculating probability. As an example, consider how your iPhone can predict what you’re going to type next. The phone knows what you just typed and makes an educated guess about what you want to say next based on the probability of certain words appearing next to each other.

Although the algorithm that powers cell phone predictive text relies on some of the ideas behind Markov chains, it’s more complex than what’s being discussed here. That’s partly because the user, not the algorithm, picks the next step in the chain.

A “true” Markov chain would calculate what you are going to type next based on the last thing you typed, without any human input (kind of like when you play the “middle-button game,” hitting the next suggested prediction mindlessly until the computer generates a “sentence” of sorts).
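That word-by-word walk is simple enough to sketch in a few lines of Python. The following is a minimal illustration of a “true” chain of the kind described above—the corpus and function names are invented for this example, not the code behind any of the tools mentioned here:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    """Walk the chain: each next word depends only on the current word."""
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: this word was never seen mid-text
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Because each step looks only at the current word, a tiny corpus mostly regurgitates its input; larger corpora give the chain more choices and more varied (and stranger) output.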

“Markov chains are everywhere in the sciences today,” writes Brian Hayes for American Scientist. They “help identify genes in DNA and power algorithms for voice recognition and web search,” he writes. For instance, Google’s PageRank algorithm relies on a really complex system of Markov chains, according to Hayes.
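To make the PageRank connection concrete, here is a toy power-iteration sketch: the “random surfer” hopping between pages is a Markov chain, and the chain’s long-run distribution is the ranking. The three-page link structure and damping factor below are illustrative assumptions, a drastic simplification of the real system:

```python
# Toy three-page web; links[i] lists the pages that page i links to.
# The pages and link structure are invented for illustration.
links = {0: [1, 2], 1: [2], 2: [0]}
n = len(links)
damping = 0.85  # probability the surfer follows a link vs. jumping randomly

rank = [1.0 / n] * n
for _ in range(100):
    # Each iteration is one step of the Markov chain applied to the
    # current rank vector; repeating it converges to the stationary ranks.
    new = [(1 - damping) / n] * n
    for src, outs in links.items():
        for dst in outs:
            # Each outgoing link passes an equal share of src's rank.
            new[dst] += damping * rank[src] / len(outs)
    rank = new

print(rank)  # page 2, with two inbound links, ranks highest
```

The same update rule scales to billions of pages; the engineering complexity Hayes alludes to lies in computing it at that size, not in the rule itself.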

But Markov chains aren’t just essential to the internet: they’re on the internet for entertainment purposes as well. Although it’s uncertain how Markov himself would have felt about these uses of his algorithm, take the Markov chain for a spin and see what you come up with.

Write a poem

Be like any other writer you like with Markomposition, a Markov generator. Input any text—the sample texts provided by creator Marie Chatfield include non-copyrighted works such as the Declaration of Independence and Grimm’s Fairy Tales, but you can use whatever you want. Chatfield suggests that lots of text produces better poems, as does text with word repetition.

Compose some fanfiction

Markov chains can help write prose, as well as poetry. Jamie Brew, writer for parody site Clickhole, has created a predictive text generator that works on Markov-like principles to write fanfiction and other things. Like cell-phone predictive text, it’s not proper Markov text as the user is the one selecting the words, writes Carli Velocci for Gizmodo.

“[It’s] like a choose your own adventure book that’s running on autopilot,” Brew told Velocci. Take a look at his classic “Batman Loves Him a Criminal” and do it yourself using the source code (or, for that matter, using your phone’s predictive text interface.)

Make a Twitter bot

Make a Twitter bot—there are thousands out there, including this one from Public Radio International’s Science Friday—using Markov text. According to the SciFri team, it takes less than an hour, and all you need are a few choice Twitter accounts that you want to remix.

This Map Reimagines the Roman Empire With Subways

A new map of the ancient Roman empire plots its major roads in a way that makes sense to modern city dwellers: as a subway system.

Set in A.D. 125, in the midst of Hadrian’s reign, the map shows the complexity of one of the longest-lasting societies in the world. Drawing on public information from Stanford’s ORBIS model, the Pelagios Project, and the Antonine Itinerary, Sasha Trubetskoy, a geographer studying at the University of Chicago, made a map that shows actual roads as well as a few whose names he invented.

But even the imagined names have real purposes. The Via Sucinaria, he says, stands for the real amber trade route from the Baltic region to Italy, which didn’t follow any single road. This sort of collapsing is helpful for those just getting their feet wet in the complexity of Roman systems.

Of course, the Mediterranean is also a massive piece of the infrastructure puzzle when it comes to Roman trade. “Sailing was much cheaper and faster,” Trubetskoy says; “a combination of horse and sailboat would get you from Rome to Byzantium in about 25 days, Rome to Carthage in 4-5 days.”

If you’re a fan of history and infrastructure, you can buy Trubetskoy’s map through PayPal.

What the Six-Day War Tells Us About the Cold War | History | Smithsonian

In the 70 years since the United Nations General Assembly approved a plan to divide British Palestine in two—a Jewish state and an Arab one—the region of modern-day Israel has been repeatedly beset by violence. Israel has fought one battle after another, clinging to survival in the decades after its people were systematically murdered during the Holocaust. But the story of self-determination and Arab-Israeli conflicts spills out far beyond the borders of the Middle East. Israel wasn’t just the site of regional disputes—it was a Cold War satellite, wrapped up in the interests of the Soviets and the Americans.

The U.S.S.R. started exerting regional influence in a meaningful way in 1955, when it began supplying Egypt with military equipment. The next year, Britain and the U.S. withdrew financing for Egypt’s Aswan High Dam project over the country’s ties with the U.S.S.R. That move triggered the Suez Crisis of 1956, in which Egypt, with the support of the USSR, nationalized the Suez Canal, which had previously been controlled by French and British interests. The two Western countries feared Egyptian President Nasser might deny their shipments of oil in the future. That summer, Egypt also closed the Straits of Tiran (located between the Sinai and Arabian peninsulas) and the Gulf of Aqaba to Israeli shipping, effectively creating a maritime blockade. Supported by Britain and France, Israel retaliated in October by invading Egypt’s Sinai Peninsula. The combined diplomacy of the U.N. and the Eisenhower administration in the United States brought the conflict to a conclusion, with Israel agreeing to return the territory it had captured and Egypt ending the blockade. To lessen the chance of future hostilities, the UN deployed an Emergency Force (UNEF) in the region.

The Soviet Union continued its close relationship with Egypt after the Suez Crisis, working to establish itself as a power in the region. “This gave it strategic advantages such as the ability to choke off oil supplies to the West and threaten NATO’s ‘soft underbelly’ in Southern Europe,” say Isabella Ginor and Gideon Remez, both associate fellows of the Truman Institute at the Hebrew University of Jerusalem and authors of Foxbats Over Dimona and The Soviet-Israeli War, 1967-1973.

The U.S.S.R. wasn’t the only Cold War power with an eye on the Arab-Israeli situation. The Kennedy administration also hoped to shore up Arab support by developing a strong relationship with Egypt. In the early 1960s, Kennedy committed the U.S. to providing $170 million worth of surplus wheat to Egypt. That policy was eventually overturned, and the Soviet Union exploited the reversal to grow closer to Nasser.

But Kennedy wasn’t just inserting himself into Arab affairs—he was also working to earn the trust of Israel. In August 1962, Kennedy overturned the previous decade of U.S. policy toward Israel (which stated the U.S. and European powers would support it, but not instigate an arms race). He became the first president to sell a major weapon system to Israel; the Hawk anti-aircraft missile was to be the first in a long line of military supplies Israel received from the U.S. (next up was the A-4 Skyhawk aircraft and M48A3 tanks, approved for sale by the Johnson administration).

While a humanitarian concern may have played a role in Kennedy’s decision, the larger world context was also critical: the U.S. needed a regional ally for the Arab-Israeli conflict, which was transforming into another Cold War stage where allies might mean access to oil.

Just ten years after the conclusion of the Suez Crisis, violence was again becoming a regular element of the region. In the 18 months before the Six-Day War, Palestinian guerillas launched 120 cross-border attacks on Israel from Syria and Jordan. They planted landmines, bombed water pumps, engaged in highway skirmishes, and killed 11 Israelis. Then in November 1966, a landmine killed three Israeli paratroopers near the border town of Arad. Israel responded with a strike on Samu, Jordan, since they believed Jordan had provided assistance to the Palestinian fighters. The attack resulted in the destruction of more than 100 houses, a school, a post office, a library and a medical clinic. Fourteen Jordanians died.

Quick work by American diplomats resulted in a U.N. resolution condemning Israel’s attack, rather than a more immediate escalation of hostilities, but the U.S. intervention did nothing to solve the ongoing problem of Palestinian attacks against Israel.

Which brings us to May 1967, when the U.S.S.R. provided faulty intelligence to Nasser that Israel was assembling troops on Syria’s border. That report spurred the Egyptian president to send soldiers into Sinai and demand the withdrawal of UNEF forces. Egypt then closed the Straits of Tiran to Israel once more, which the Eisenhower administration had promised to consider as an act of war at the end of the Suez Crisis.

The U.S.S.R. was concerned with more than just Sinai; it was also gathering intelligence, sending Soviet aircraft from Egypt to fly over the Israeli nuclear reactor site at Dimona, according to research by Ginor and Remez.

“If Israel achieved a nuclear counter-deterrent, it would prevent the U.S.S.R. from using its nuclear clout to back up its Arab clients, and thus might destroy the Soviets’ regional influence,” Ginor and Remez said by email. “There was also a deep-seated fear in Moscow of being surrounded by a ring of Western-allied, nuclear-armed pacts.”

For Roland Popp, a senior researcher at the Center for Security Studies, the Soviet Union may have had real reason to think Israel would eventually be a threat, even if the Sinai report they provided Egypt was wrong. And for Egypt, responding may have been a calculated policy rather than a hotheaded reaction, considering that the U.N. had told them the intelligence was faulty.

“I think in retrospect, Nasser wanted an international crisis,” Popp says. “It didn’t matter if the Israelis mobilized troops or not. What mattered was that history had showed the Israelis were hellbent on punishing Syria. The Arabs weren’t capable of militarily containing Israel anymore. Israeli fighter planes could penetrate deep into Syrian and Egyptian airspace without being challenged.”

But Popp also adds that it’s still almost impossible to reconstruct the true motives and beliefs of the protagonists, because there’s little material available from the incident.

Whatever the leaders of Egypt and the Soviet Union may have been thinking, their actions caused acute terror in Israel. Many worried about an impending attack, by an air force armed with chemical gas or by ground troops. “Rabbis were consecrating parks as cemeteries, and thousands of graves were dug,” writes David Remnick in The New Yorker.

Meanwhile, the U.S. remained convinced that Nasser had no real intention of attacking. When President Johnson ordered a CIA estimate of Egypt’s military capabilities, the agency found only 50,000 troops in the Sinai Peninsula, compared with Israel’s 280,000 ground forces. “Our judgment is that no military attack on Israel is imminent, and, moreover, if Israel is attacked, our judgment is that the Israelis would lick them,” Johnson said. He cautioned Israel against instigating a war in the region, adding ominously, “Israel will not be alone unless it decides to do it alone.”

For Israelis, it was a moment of crisis. Wait for the enemy to attack and potentially destroy their nation, having not yet made it to its 20th year? Or take the offensive and strike first, risking the ire of the U.S.?

Ultimately, the latter option was chosen. Early on the morning of June 5, 1967, the Israel Air Force launched a surprise attack and destroyed Nasser’s grounded air force, then turned its sights to the troops amassed on the borders of Syria and Jordan. Within six days, the entire fight was over, with Israel dramatically overpowering its neighbors. In the process Egypt lost 15,000 men and Israel around 800. Israel also gained Sinai and Gaza from Egypt, the West Bank and East Jerusalem from Jordan and the Golan Heights from Syria. The little nation had quadrupled its territory in a week.

The immediate aftermath of the war was celebrated in Israel and the U.S., but “the Johnson administration knew the Israeli victory had negative aspects,” Popp says. It meant a more polarized Middle East, and that polarization meant a window of opportunity for the Soviet Union. “There was a good chance [after the war] to find some kind of deal. But you have to understand, the Israelis just won a huge military victory. Nothing is more hurtful to strategic foresight than a huge victory. They didn’t feel any need whatsoever to compromise.”

Most of the territory Israel had won has stayed occupied, and the conflict between Israel and the Palestinian territories today seems as intractable as ever. At this point the U.S. has given more than $120 billion to Israel since the Six-Day War, reports Nathan Thrall, and Israel receives more military assistance from the U.S. than from the rest of the world combined. Today about 600,000 Israelis—10 percent of the nation’s Jewish citizens—live in settlements beyond the country’s 1967 borders. And for Palestinians and Israelis alike, those settlements have meant terrorism, counterattacks, checkpoints and ongoing hostility.

“What greater paradox of history,” Remnick writes of the Six-Day War’s legacy. “A war that must be won, a victory that results in consuming misery and instability.”

200+ Free Art Books Are Now Available to Download from the Guggenheim – Creators

A veritable art history degree’s worth of books digitized by the Solomon R. Guggenheim Museum are now available for free.

There’s Wassily Kandinsky’s 1946 treatise, On the Spiritual in Art; books about movements from the Italian metamorphosis and Russian Constructivism; thousands of years of Aztec and Chinese art; and catalogs of work by the many greats to pass through the Guggenheim’s Frank Lloyd Wright-designed halls. Formerly locked in paper prisons (a.k.a., hard-copy books), analysis of work by Pablo Picasso, Roy Lichtenstein, Dan Flavin, Robert Rauschenberg, Gustav Klimt, Mark Rothko, and more is now free to roam the web as PDFs and ePubs.

The initiative to publish certain entries from The Guggenheim’s vast library began with 65 catalogs published in 2012, and has now grown to 205 titles. This joins 43 titles available in the Los Angeles County Museum of Art’s Online Reading Room, 281 from Getty Publications’ Virtual Library, and the Metropolitan Museum of Art’s MetPublications, with a whopping 1,611 books you can download for free. That’s in addition to the 375,000+ high resolution images of the artworks themselves the Met dumped into the public domain earlier this year.