The Radical Reference Librarians Who Use Info to Challenge Authority – Atlas Obscura

An adaptation of Banksy’s “Flower Bomber,” this image depicts a librarian in protest, throwing Margaret Atwood’s The Handmaid’s Tale.

FROM AUGUST 29 THROUGH SEPTEMBER 2, 2004, a series of protests erupted in New York in response to the 2004 Republican National Convention and the nomination of George W. Bush for the impending election. Nearly 1,800 protesters were arrested during the convention, and later filed a civil rights suit, citing violation of their constitutional rights.

During the protests, a steady team provided support to anyone who needed information amid the confusion: a modest group of socially conscious librarians from around the United States, armed with folders of facts ranging from legal rights in dealing with police to the locations of open bathrooms.

“We wanted to operate as if we were bringing a reference desk to the streets,” explains Lia Friedman, Director of Learning Services at University of California San Diego, who was at one of the protest marches in 2004. At the time, fewer people had smartphones, making this service both new and important. When someone asked a question that wasn’t included in their traveling reference desk folders, other librarians waiting at their home computers were poised to research and deliver information by phone.

The group of librarians soon formed the first-ever chapter of the Radical Reference Collective, a non-hierarchical volunteer collective that believes in supporting social justice, independent journalists, and activist causes. Since that first action at the 2004 Republican National Convention, the group, originally based in New York, has spread across the United States as a collection of individual local chapters. New collectives formed via library listservs, the Rad Ref website, and word of mouth.

Because the organization is non-hierarchical, there is no consensus on exactly what the “radical” in Radical Reference really means, nor what kind of work it might imply. Members come from different backgrounds: one New York City Rad Ref member is personally involved with a group that provides street-medic training for giving medical aid during protests; others are from academia or are involved in work with prisoners or science archives. The Rad Ref website points out that the word “radical” does not denote a political affiliation of any kind; rather, it is used to challenge “mainstream meaning which largely marginalizes the term and along with it certain groups.” But members do form their own opinions on the matter.

“In my opinion, using the word ‘radical’ means advocating for change, whether that is political or societal,” says Friedman. Audrey Lorberfeld, Digital Technical Specialist at The New York Academy of Medicine and a longtime member of the New York Rad Ref group, says that this inherent politicization of information is apparent to librarians regardless of their specialty: in the introductory information-science courses for those pursuing a Master of Library Science in the U.S., librarians-to-be become well-versed in the American Library Association code of ethics, which includes intellectual and informational freedom. According to both Lorberfeld and Friedman, who were interviewed separately for this article, many librarians consider the idea that information is neutral to be a myth.

Now, amid a divisive political climate in the U.S., the original New York group is continuing to provide open-access information for all, be it a specific historical fact, a civil-rights-infringement statistic, or the complex laws regarding immigration. Often this support is lent to social justice organizations, independent journalists, and, as their website states, “anyone who questions authority.”

Some of the Rad Ref groups use social media to communicate and promote events, talks, and workshops with the public and activist organizations, while others meet face-to-face. The topics vary wildly; some events may concern more local issues, like planning support for a library worker strike, while others could involve “creating or using a resource guide on a relevant issue, i.e. Black Lives Matter, Critical Librarianship, Fact Checking basics,” says Friedman.

Rad Ref has sizzled in the background of protests, local workshops and activist groups across the U.S. since its inception, although participation varies, and each local group is unique. Friedman spent years providing information through the Rad Ref website, which formerly acted as a virtual reference question desk. More recently she has participated on a smaller scale within the San Diego group, which sometimes has only a few members; both Friedman and Lorberfeld noted that since librarians tend to be involved in multiple projects and membership is voluntary, the numbers of a group can fluctuate. Occasionally, the groups have gone on hiatus, as the New York group’s online reference presence did in 2013, when its members didn’t have time to devote to the struggles of running a volunteer-based organization.

That hiatus hasn’t lasted, though: New York City’s Rad Ref was reinvigorated after President Donald Trump took his oath of office in early 2017. And these librarians are ready to radicalize their role as information champions. During their first meeting post-hiatus, the room was overflowing with activists and librarians who deeply cared about organizing to preserve information. “It was an amazing feeling,” says Lorberfeld.

Right now, the New York group’s goal is to build a community of knowledgeable experts on the art of finding and delivering information, which can become a resource for librarians and activist organizations alike. “We’re trying to look inward and educate ourselves, from anything from immigration rights to grassroots organizing best practices,” says Lorberfeld. “One member is making a guide about Unions in New York City; what they are, how to join them,” and another is creating a map of available free meeting places for organizations throughout the city.

Friedman and other Rad Ref volunteers, for example, once helped a writer researching a nonfiction book on resistance and struggle within women’s prisons, who needed statistics and facts that were not easy to find, such as the number of prisoners in a specific county in a specific year. By pooling their skills in data research and law, Rad Ref librarians were able to provide, through the website, specific numbers she used throughout her book. While the reference-service aspect of the website has been inactive since the hiatus, the site is still available as an archive, and librarians will still answer the occasional question in the Rad Ref inbox.

It might seem like information is open to everyone today: the internet is everywhere and most people know how to type a sentence into Google. But as Friedman and Lorberfeld explain, finding specific, meaningful details often takes more than that. Sometimes the only thing keeping data on, say, a government website like the Environmental Protection Agency’s from being removed or tampered with by politicians may be the librarians working behind the scenes to preserve that information—something that actually happened earlier this year.

“We used to teach that a .gov site was trusted. And that’s a little bit more challenging to do now,” Friedman says. A Google search often isn’t curated or necessarily fact-checked, and doesn’t always provide multiple and balanced sources. Sometimes key information is behind an academic journal’s steep paywall, or buried in government documents under specialized lingo. “We really wanted to support independent journalists and activists, and really wanted to give people access to information, which is a pretty librarian thing to do,” says Friedman.

It really is a pretty librarian thing to do. Despite librarians’ public image of glasses-clad women hushing pesky kids, library workers around the world, from South Africa to Sweden, have formed organizations similar to Rad Ref, according to Alfred Kagan in his book Progressive Library Organizations: A Worldwide History. Since the 1960s, many library workers in the United States have committed to “social responsibility,” the democratic provision of information to the public and the defense of free speech. Kagan writes that progressive groups have used their independence from the American Library Association (ALA), the major national librarian group in the U.S., “to take radical stances.”

One subunit of the ALA, the Social Responsibilities Round Table, has, according to its website, worked since 1969 to democratize the ALA with human and economic rights in mind. Similar groups exist around the United States, such as the Progressive Librarians Guild, which aims toward an international agenda. Rad Ref participates in this tradition, though it differs in that it is composed of local librarian groups that work within their individual volunteer base’s skills and goals, without central governance.

“We all sort of have a really core sentimental belief of information access as a human right, and I feel like that really governs what we do within the group,” Lorberfeld says. By using their diverse backgrounds and talents, Rad Ref is now readying itself to help activist groups by promoting information that aids in furthering social equality.

http://www.atlasobscura.com/articles/radical-reference-collective

Algorithms Have Already Gone Rogue | WIRED

For more than two decades, Tim O’Reilly has been the conscience of the tech industry. Originally a publisher of technical manuals, he was among the first to perceive both the societal and commercial value of the internet—and as he transformed his business, he drew upon his education in the classics to apply a moral yardstick to what was happening in tech. He has been a champion of open-source, open-government, and, well, just about everything else that begins with “open.”

His new book WTF: What’s the Future and Why It’s Up to Us seizes on this singular moment in history, in which just about everything makes us say “WTF?”, invoking a word that isn’t “future.” Ever the optimist, O’Reilly celebrates technology’s ability to create magic—but he doesn’t shy away from its dangerous consequences. I got to know Tim when writing a profile of him in 2005, and have never been bored by a conversation. This one touches on the effects of Uber’s behavior and misbehavior, why capitalism is like a rogue AI, and whether Jeff Bezos might be worth voting for in the next election.

Steven Levy: Your book appears at a time when many people who once had good feelings towards technology are now questioning it. Would you defend it?

Tim O’Reilly: I like the title WTF because it can be an expression of amazement and delight or an expression of amazement and dismay. Tech is bringing us both. It has enhanced productivity and made us all richer. I don’t think I would like to roll back the clock.

Not that rolling it back is an option.

No, but it’s important for us to realize that technology is not just about efficiency. It’s about taking these new capabilities that we have and doing more with them. When you do that, you actually increase employment. As people came off the farm, we didn’t end up with a vast leisure class while two percent of people were feeding slop to animals. We ended up creating new kinds of employment, and we used that productivity actually to enhance the quality and the quantity of food. Why should it be different in this era of cognitive enhancement? Uber and Lyft are telling us that things we used to think of as being in the purely digital realm, in the realm of media, whatever, are coming to the real world. So that’s the first wake-up call for society. Secondly, we’re seeing a new kind of interaction between people and algorithmic systems. Third, they represent a new kind of marketplace based on platforms [in this case, they exist because of the platform of smartphones—and then they can become platforms of their own, as new services, like food delivery, are added in addition to transit]. This marketplace works because people are being augmented with new cognitive superpowers. For example, because of GPS and mapping apps, Uber and Lyft drivers don’t need a lot of training.

Agreed. But when the curtain rolls back we see that those superpowers have consequences: Those algorithms have bias built in.

That’s absolutely right. But I’m optimistic because we’re having a conversation about biased algorithms. We had plenty of bias before but we couldn’t see it. We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits. All that was invisible. It wasn’t until we really started seeing the tech-infused algorithms that people started being critical.

In WTF you talk about a specific out-of-control algorithm: the capitalist impulse to maximize profits regardless of societal consequences. The way you describe it reminds me of Nick Bostrom’s scenario of an AI machine devoted to making paper clips—because that’s its sole mission, it winds up eating up all the materials in the world and even killing those who would turn it off. Corporations whose sole justification is shareholder value seem to be working on a similarly destructive algorithm.

Yes, financial markets are the first rogue AI.

How do you roll back that particular AI?

I try to show [earlier cases of] how humans tried to manage their algorithms, by talking about [how Google improved] search quality. Google had some pretty bad patches where the spammers really had the upper hand, and they addressed it.

And that can be done to fix capitalism’s rogue AI?

Somebody planted the idea that shareholder value was the right algorithm, the right thing to be optimizing for. But this wasn’t the way companies acted before. We can plant a different idea. That’s what this political process is about.

Speaking of politics, it seems like another runaway algorithm has led us to a government controlled by people who don’t represent majority views.

I look at it through the long arc of history. You look at the long slow decline of the Roman Empire and see so many analogies—the turning away from literacy and science, the outsourcing of core functions of government to mercenaries effectively. We could go through some real shit before we turn around. We might not turn around at all. But I take hope from something that Tim Urban in Wait But Why calls “the human colossus.” He has this fabulous description of how Elon Musk moves this human colossus in a new direction—to show that it’s possible to go into space, to show that it’s possible to build a brain-machine interface—and then everybody else will follow along. The human colossus I’m most heartened by is the post-World War II period. We learned a lesson from the incredible convulsions after World War I where there was vast dislocation, as we punished the losers of the war. So after World War II they rebuilt Europe, and they invested in the returning veterans with the GI Bill.
As we learn from tech, though, algorithms need continual improvement. You don’t just set them in motion and leave them forever. The strategies put in place after World War II that worked for this period of 30 years have stopped working so well, so we came up with something else [which happened to create income inequality]. There’s a sense that things going wrong will lead to new strategies. And now that Trump has broken the Overton Window—

What’s that?

It’s this idea [named for the late think tank leader Joseph Overton] that there’s a certain set of things that are considered acceptable in public policy debate, and you just can’t go outside that window. And Trump has just done everything unthinkable. Because all bets are off, we are not necessarily going back to the old, tired solutions. I think it’s possible that we’ll shrug off this madness, and we will come back to saying we really have to invest in people, we really have to build a better economy for everyone. In China, they’re already doing that. China has recognized that its vast population is a possible powder keg and it has to take care of its people. That’s something we have not done. We’ve just been pushing down the ordinary people. China is also being more aggressive than any other country in rising to the challenge of climate change. So there’s two possibilities—we’re going to wake up and start acting the same way, or China will lead the world.

Reading your book I think I know who you’d like for our next president: Jeff Bezos. The book is full of Bezos love.

Well. Jeff and Elon [Musk] are probably the two entrepreneurs I admire most.

You can think of the book as an apology to Jeff. As a publisher, I originally bought the usual story, that Amazon would go the way of Wal-Mart—the more dominant it got, the more it would extract value for itself, squeezing down its suppliers. Jeff is a ruthless competitor, no question, but while Amazon has done a chunk of that, it has spent so much time trying to do more. I’m not sure that Jeff would make a great president, but he might.

You’d vote for him, wouldn’t you?

It would depend who he was running against but, yeah, I probably would.

You also praise Uber in your book. Do you think it’s possible to distinguish between the value of its service and the ethics of the company?

Uber is a good metaphor for what’s right and wrong in tech. Here we have this amazing new technology, which is transforming an industry and putting more people to work than worked in that industry before, creating great consumer surplus, and yet it has ridden roughshod over cities, and exploited drivers. It’s interesting that Lyft, which has been both more cooperative in general and better to drivers, is gaining share. That indicates there’s a competitive advantage in doing it right, and you can only go so far being an ass.

Let’s finish by talking about AI. You seem a firm believer that it will be a boon.

AI itself will certainly not take away jobs. I recently saw a wonderful slide from Joanna Bryson, a professor from the University of Bath. It referred to human prehistory and the text said, “12 thousand years of AI,” because everything in technology is artificial intelligence. What we now call AI is just the next stage of us weaving our intelligence together into a greater whole. If you think about the internet as weaving all of us together, transmitting ideas, in some sense an AI might be the equivalent of a multi-cellular being and we’re its microbiome, as opposed to the idea that an AI will be like the Golem or the Frankenstein. If that’s the case, the systems we are building today, like Google and Facebook and financial markets, are really more important than the fake ethics of worrying about some far future AI. We tend to be afraid of new technology and we tend to demonize it, but to me, you have to use it as an opportunity for introspection. Our fears ultimately should be of ourselves and other people.

https://www.wired.com/story/tim-oreilly-algorithms-have-already-gone-rogue/

What’s the Matter With Applebee’s? – Eater

The experience of sliding into an overstuffed leather booth, hemmed in by walls decked in dubious Americana, the metal signs and pilfered taxidermy alluding to a time and place steeped in myth and wholly alien to the strip mall outside, while perusing a menu of oversauced fried hunks of protein and cheap carbs, all under the tawny haze of a poorly cloned Tiffany lamp, wasn’t quite universal. But it was common enough that the market, the great American arbiter of truth and beauty, blessed the suburbs from coast to coast — where so many of us were spawned and haltingly shepherded toward nominal adulthood — with thousands upon thousands of places in which to have that experience: The casual dining chain bloomed, almost like an onion you might say.

And now it’s dying, sort of. Because they’re terrible places, or because of millennials, or because of looming class warfare, or probably all of the above. Whatever the reasons, it should probably be less surprising that a monoculture as vast and mediocre as the suburban sitdown restaurant has contracted a terminal illness now slowly spreading from specimen to specimen, from Applebee’s to Ruby Tuesday’s to BW3 or whatever the fuck they’re calling Buffalo Wild Wings these days. More fascinating than the grinding demise of this corporate culinary hegemon, maybe, is the knowing, mournful soundtrack that we can’t help but provide with the collective gnashing of our teeth: Why do we still care so much about these places that we’ve since decided offer us such hollow fulfillment? —Matt Buchanan

[There are links to several articles about the decline of the suburban restaurant chains. All the articles are depressing given the state of the middle-class America the chains represent.]

https://www.eater.com/2017/10/3/16360878/decline-applebees-olive-garden-tgi-fridays

Today’s roads can’t handle today’s climate conditions | Smart Cities Dive

Dive Brief:

  • Engineers are using outdated temperature data — from 1964 to 1995 — to pick the right temperature-sensitive asphalt blends for use on roads today, Ars Technica reported, citing a new study in Nature Climate Change, which found the mismatch could raise road maintenance costs significantly.
  • In a study of nearly 800 asphalt roads built in the U.S. over the last 20 years, researchers found that 35% used an asphalt product ill-suited to current climate conditions. In one-quarter of those cases, the roads weren’t built to handle the high temperatures they currently experience.
  • The researchers note that using an asphalt product even one grade short of what would be necessary could cut a few years off the road’s life, requiring repaving sooner than anticipated.

Dive Insight:

One in five miles of highway pavement was in poor condition in 2014, with urban roads (32%) worse off than rural ones (14%), according to the American Society of Civil Engineers. Poor roads don’t only cost those charged with their maintenance and repairs, however. That level of disrepair cost vehicle operators $112 billion in additional repairs and operating costs that year.

State and local governments paved roads in asphalt when prices for the material were low. Now they are rethinking that strategy as construction costs trend upward. To help manage those costs, at least 27 states have converted some of their asphalt roads to gravel, with most of that work occurring in the last five years, the ASCE reported.

Omaha, NE, is one municipality doing just that. The New York Times reported earlier this year that the city decided to convert some asphalt roads, including those in higher-end neighborhoods, to gravel after it struggled to fund repairs, such as filling potholes.

Montpelier, VT, has also made headlines for its de-paving activities. The city saved $120,000 by replacing asphalt on some run-down roads with dirt and gravel reinforced with a geotextile for stability rather than repaving, Wired reported.

A substantial number of de-paved roads are in rural areas that don’t see heavy traffic, according to a 2016 study from the National Cooperative Highway Research Program. Still, regardless of their location, drivers aren’t necessarily in favor of gravel roads, which generally lead to more wear on their vehicles as compared to asphalt or concrete surfaces.

However, it’s likely more state and local governments will look to de-paving as a way to manage road maintenance costs as questions continue around potential new long-term funding sources for infrastructure.

http://www.constructiondive.com/news/todays-roads-cant-handle-todays-climate-conditions/506390/

National Aquarium | Cephalopods: Arms or Tentacles?

Cephalopods are a class of marine mollusks including the octopus, cuttlefish and squid. The name cephalopod means “head-foot” because they have limbs attached to their head, and these mollusks are well-known for their arms and tentacles. And while all cephalopods have arms, not all cephalopods have tentacles.

Tentacles are long, flexible organs found on invertebrate animals. They are important for feeding, sensing and grasping. Tentacles are longer than arms, are retractable and have a flattened tip that is covered in suckers.

Arms are similar to tentacles, but still distinctly different. Arms are covered with suckers that help with grasping food items. In addition, arms are useful for attaching to surfaces while resting.

The names may seem interchangeable, but when it comes to cephalopods, there’s a difference between arms and tentacles. An easy way to spot the difference is that arms have suckers along their entire length, while tentacles only have suckers at the tip.

This means that octopuses have eight arms and no tentacles, while other cephalopods—such as cuttlefish and squids—have eight arms and two tentacles.

https://aqua.org/blog/2017/October/Cephalopods-Arms-or-Tentacles

The Coming Software Apocalypse – The Atlantic

There were six hours during the night of April 10, 2014, when the entire population of Washington State had no 911 service. People who called for help got a busy signal. One Seattle woman dialed 911 at least 37 times while a stranger was trying to break into her house. When he finally crawled into her living room through a window, she picked up a kitchen knife. The man fled.

The 911 outage, at the time the largest ever reported, was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.

Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.
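
The mechanics of that failure are easy to mimic. The sketch below is purely illustrative (Intrado’s actual code is proprietary; the cap, names, and structure here are invented), but it shows the shape of the bug: a running counter doubles as the source of unique call identifiers, a hard-coded ceiling silently rejects anything beyond it, and nothing raises an alarm when that happens.

    # Illustrative sketch only -- not Intrado's implementation; the threshold and names are invented.
    def dispatch(call, call_id):
        """Stand-in for handing a call to a 911 dispatch center."""
        print(f"routing call {call_id}: {call}")

    ROUTER_ID_CAP = 40_000_000   # hypothetical hard-coded limit "in the millions"

    class CallRouter:
        def __init__(self):
            self.calls_routed = 0   # running counter, also used as the unique call ID

        def route(self, call):
            if self.calls_routed >= ROUTER_ID_CAP:
                # No unique ID can be issued, so the call is silently rejected --
                # and because no one anticipated this branch, no alarm fires here.
                return None
            self.calls_routed += 1
            dispatch(call, self.calls_routed)
            return self.calls_routed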

Not long ago, emergency calls were handled locally. Outages were small and easily diagnosed and fixed. The rise of cellphones and the promise of new capabilities—what if you could text 911? or send videos to the dispatcher?—drove the development of a more complex system that relied on the internet. For the first time, there could be such a thing as a national 911 outage. There have now been four in as many years.

It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.

“When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.

Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical—a program that is a thousand times more complex than another takes up the same actual space—it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”

Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”

This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”

The attempts now underway to change how we make software all seem to start with the same premise: Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.

Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code. When you press your foot down on your car’s accelerator, for instance, you’re no longer controlling anything directly; there’s no mechanical link from the pedal to the throttle. Instead, you’re issuing a command to a piece of software that decides how much air to give the engine. The car is a computer you can sit inside of. The steering wheel and pedals might as well be keyboard keys.

Like everything else, the car has been computerized to enable new features. When a program is in charge of the throttle and brakes, it can slow you down when you’re too close to another car, or precisely control the fuel injection to help you save on gas. When it controls the steering, it can keep you in your lane as you start to drift, or guide you into a parking space. You couldn’t build these features without code. If you tried, a car might weigh 40,000 pounds, an immovable mass of clockwork.

Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.

The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning. As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.

What made programming so difficult was that it required you to think like a computer. The strangeness of it was in some sense more vivid in the early days of computing, when code took the form of literal ones and zeros. Anyone looking over a programmer’s shoulder as they pored over line after line like “100001010011” and “000010011110” would have seen just how alienated the programmer was from the actual problems they were trying to solve; it would have been impossible to tell whether they were trying to calculate artillery trajectories or simulate a game of tic-tac-toe. The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.

“The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work. “Software engineers like to provide all kinds of tools and stuff for coding errors,” she says, referring to IDEs. “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”

In September 2007, Jean Bookout was driving on the highway with her best friend in a Toyota Camry when the accelerator seemed to get stuck. When she took her foot off the pedal, the car didn’t slow down. She tried the brakes but they seemed to have lost their power. As she swerved toward an off-ramp going 50 miles per hour, she pulled the emergency brake. The car left a skid mark 150 feet long before running into an embankment by the side of the road. The passenger was killed. Bookout woke up in a hospital a month later.

The incident was one of many in a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible. The National Highway Traffic Safety Administration enlisted software experts from NASA to perform an intensive review of Toyota’s code. After nearly 10 months, the NASA team hadn’t found evidence that software was the cause—but said they couldn’t prove it wasn’t.

It was during litigation of the Bookout accident that someone finally found a convincing connection. Michael Barr, an expert witness for the plaintiff, had a team of software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what’s already there; eventually the code becomes impossible to follow, let alone to test exhaustively for flaws.

Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it. “You have software watching the software,” Barr testified. “If the software malfunctions and the same program or same app that is crashed is supposed to save the day, it can’t save the day because it is not working.”
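
The “single bit flip” is less exotic than it sounds. The toy example below is not Toyota’s code (the variable, scaling, and fault are invented for illustration); it only shows how one flipped bit in a stored value can turn a modest throttle command into a runaway one, and why a watchdog that lives in the same failed task offers no protection.

    # Toy illustration of single-bit memory corruption -- not Toyota's code.
    def flip_bit(value, bit):
        """Simulate a hardware fault flipping one bit of a stored integer."""
        return value ^ (1 << bit)

    throttle_pct = 20                      # commanded throttle, 0-100 (binary 00010100)
    corrupted = flip_bit(throttle_pct, 7)  # bit 7 flips: 20 becomes 148
    print(corrupted)                       # 148 -- far beyond any valid command

    # A "watchdog" implemented inside the same task as the corrupted logic cannot
    # save the day: if that task has crashed or hung, the check never runs at all.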


Barr’s testimony made the case for the plaintiff, resulting in $3 million in damages for Bookout and her friend’s family. According to The New York Times, it was the first of many similar cases against Toyota to bring to trial problems with the electronic throttle-control system, and the first time Toyota was found responsible by a jury for an accident involving unintended acceleration. The parties decided to settle the case before punitive damages could be awarded. In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.

There will be more bad days for software. It’s important that we get better at making it, because if we don’t, and as software becomes more sophisticated and connected—as it takes control of more critical functions—those days could get worse.

The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little. There is a small but growing chorus that worries the status quo is unsustainable. “Even very good programmers are struggling to make sense of the systems that they are working with,” says Chris Granger, a software developer who worked as a lead at Microsoft on Visual Studio, an IDE that costs $1,199 a year and is used by nearly a third of all professional programmers. He told me that while he was at Microsoft, he arranged an end-to-end study of Visual Studio, the only one that had ever been done. For a month and a half, he watched behind a one-way mirror as people wrote code. “How do they use tools? How do they think?” he said. “How do they sit at the computer, do they touch the mouse, do they not touch the mouse? All these things that we have dogma around that we haven’t actually tested empirically.”

The findings surprised him. “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.

John Resig had been noticing the same thing among his students. Resig is a celebrated programmer of JavaScript—software he wrote powers over half of all websites—and a tech lead at the online-education site Khan Academy. In early 2012, he had been struggling with the site’s computer-science curriculum. Why was it so hard to learn to program? The essential problem seemed to be that code was so abstract. Writing software was not like making a bridge out of popsicle sticks, where you could see the sticks and touch the glue. To “make” a program, you typed words. When you wanted to change the behavior of the program, be it a game, or a website, or a simulation of physics, what you actually changed was text. So the students who did well—in fact the only ones who survived at all—were those who could step through that text one instruction at a time in their head, thinking the way a computer would, trying to keep track of every intermediate calculation. Resig, like Granger, started to wonder if it had to be that way. Computers had doubled in power every 18 months for the last 40 years. Why hadn’t programming changed?


The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.

Bret Victor does not like to write code. “It sounds weird,” he says. “When I want to make a thing, especially when I want to create something in software, there’s this initial layer of disgust that I have to push through, where I’m not manipulating the thing that I want to make, I’m writing a bunch of text into a text editor.”

“There’s a pretty strong conviction that that’s the wrong way of doing things.”

Victor has the mien of David Foster Wallace, with a lightning intelligence that lingers beneath a patina of aw-shucks shyness. He is 40 years old, with traces of gray and a thin, undeliberate beard. His voice is gentle, mournful almost, but he wants to share what’s in his head, and when he gets on a roll he’ll seem to skip syllables, as though outrunning his own vocal machinery.

Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering, and then went on, after grad school at the University of California, Berkeley, to work at a company that develops music synthesizers. It was a problem perfectly matched to his dual personality: He could spend as much time thinking about the way a performer makes music with a keyboard—the way it becomes an extension of their hands—as he could thinking about the mathematics of digital signal processing.

By the time he gave the talk that made his name, the one that Resig and Granger saw in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.

“Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.” That code now takes the form of letters on a screen in a language like C or Java (derivatives of Fortran and ALGOL), instead of a stack of cards with holes in it, doesn’t make it any less dead, any less indirect.

There is an analogy to word processing. It used to be that all you could see in a program for writing documents was the text itself, and to change the layout or font or margins, you had to write special “control codes,” or commands that would tell the computer that, for instance, “this part of the text should be in italics.” The trouble was that you couldn’t see the effect of those codes until you printed the document. It was hard to predict what you were going to get. You had to imagine how the codes were going to be interpreted by the computer—that is, you had to play computer in your head.

Then WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.” When you marked a passage as being in italics, the letters tilted right there on the screen. If you wanted to change the margin, you could drag a ruler at the top of the screen—and see the effect of that change. The document thereby came to feel like something real, something you could poke and prod at. Just by looking you could tell if you’d done something wrong. Control of a sophisticated system—the document’s layout and formatting engine—was made accessible to anyone who could click around on a page.

Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling. And it was the proper job of programmers to ensure that someday they wouldn’t have to.

There was precedent enough to suggest that this wasn’t a crazy idea. Photoshop, for instance, puts powerful image-processing algorithms in the hands of people who might not even know what an algorithm is. It’s a complicated piece of software, but complicated in the way a good synth is complicated, with knobs and buttons and sliders that the user learns to play like an instrument. Squarespace, a company that is perhaps best known for advertising aggressively on podcasts, makes a tool that lets users build websites by pointing and clicking, instead of by writing code in HTML and CSS. It is powerful enough to do work that once would have been done by a professional web designer.

But those were just a handful of examples. The overwhelming reality was that when someone wanted to do something interesting with a computer, they had to write code. Victor, who is something of an idealist, saw this not so much as an opportunity but as a moral failing of programmers at large. His talk was a call to arms.

At the heart of it was a series of demos that tried to show just how primitive the available tools were for various problems—circuit design, computer animation, debugging algorithms—and what better ones might look like. His demos were virtuosic. The one that captured everyone’s imagination was, ironically enough, the one that on its face was the most trivial. It showed a split screen with a game that looked like Mario on one side and the code that controlled it on the other. As Victor changed the code, things in the game world changed: He decreased one number, the strength of gravity, and the Mario character floated; he increased another, the player’s speed, and Mario raced across the screen.

Suppose you wanted to design a level where Mario, jumping and bouncing off of a turtle, would just make it into a small passageway. Game programmers were used to solving this kind of problem in two stages: First, you stared at your code—the code controlling how high Mario jumped, how fast he ran, how bouncy the turtle’s back was—and made some changes to it in your text editor, using your imagination to predict what effect they’d have. Then, you’d replay the game to see what actually happened.

Victor wanted something more immediate. “If you have a process in time,” he said, referring to Mario’s path through the level, “and you want to see changes immediately, you have to map time to space.” He hit a button that showed not just where Mario was right now, but where he would be at every moment in the future: a curve of shadow Marios stretching off into the far distance. What’s more, this projected path was reactive: When Victor changed the game’s parameters, now controlled by a quick drag of the mouse, the path’s shape changed. It was like having a god’s-eye view of the game. The whole problem had been reduced to playing with different parameters, as if adjusting levels on a stereo receiver, until you got Mario to thread the needle. With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
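
A rough sense of what “map time to space” means can be had in a few lines of simulation. The sketch below is not Victor’s demo code; it just projects a character’s future positions under a given gravity and speed, so that changing either parameter yields an entirely new visible path rather than a single frame.

    # Minimal sketch of projecting a character's future path -- not Victor's demo.
    def project_path(gravity, run_speed, jump_velocity, steps=60, dt=1 / 30):
        """Return the (x, y) positions the character will occupy over the next frames."""
        x, y, vy = 0.0, 0.0, jump_velocity
        path = []
        for _ in range(steps):
            x += run_speed * dt
            vy -= gravity * dt
            y = max(0.0, y + vy * dt)      # the ground stops the fall
            path.append((x, y))
        return path

    # Lower gravity and the whole projected arc floats; raise run_speed and it
    # stretches. The future path is redrawn the moment a parameter changes.
    floaty = project_path(gravity=3.0, run_speed=5.0, jump_velocity=6.0)
    normal = project_path(gravity=9.8, run_speed=5.0, jump_velocity=6.0)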


When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”

When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns … [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.

Chris Granger, who had worked at Microsoft on Visual Studio, was likewise inspired. Within days of seeing a video of Victor’s talk, in January of 2012, he built a prototype of a new programming environment. Its key capability was that it would give you instant feedback on your program’s behavior. You’d see what your system was doing right next to the code that controlled it. It was like taking off a blindfold. Granger called the project “Light Table.”

In April of 2012, he sought funding for Light Table on Kickstarter. In programming circles, it was a sensation. Within a month, the project raised more than $200,000. The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.

But seeing the impact that his talk ended up having, Bret Victor was disillusioned. “A lot of those things seemed like misinterpretations of what I was saying,” he said later. He knew something was wrong when people began to invite him to conferences to talk about programming tools. “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.

In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface. Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.


Of course, to do that, you’d have to get programmers themselves on board. In a recent essay, Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.” Exciting work of this sort, in particular a class of tools for “model-based design,” was already underway, he wrote, and had been for years, but most programmers knew nothing about it.

“If you really look hard at all the industrial goods that you’ve got out there, that you’re using, that companies are using, the only non-industrial stuff that you have inside this is the code.” Eric Bantégnie is the founder of Esterel Technologies (now owned by ANSYS), a French company that makes tools for building safety-critical software. Like Victor, Bantégnie doesn’t think engineers should develop large systems by typing millions of lines of code into an IDE. “Nobody would build a car by hand,” he says. “Code is still, in many places, handicraft. When you’re crafting manually 10,000 lines of code, that’s okay. But you have systems that have 30 million lines of code, like an Airbus, or 100 million lines of code, like your Tesla or high-end cars—that’s becoming very, very complicated.”

Bantégnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules. If you were making the control system for an elevator, for instance, one rule might be that when the door is open, and someone presses the button for the lobby, you should close the door and start moving the car. In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
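
As a rough analogy (this is not SCADE’s notation; the state and event names below are invented), the elevator rules can be written down as data rather than as hand-written control flow, which is the kind of description a model-based tool can check for gaps and then turn into code:

    # Hypothetical sketch of the elevator model as a state-transition table --
    # an illustration of describing rules rather than writing control flow by hand.
    TRANSITIONS = {
        ("door_open", "lobby_button"): "door_closing",   # request: close the door...
        ("door_closing", "door_closed"): "moving",       # ...then start the car
        ("moving", "arrived_at_lobby"): "door_open",     # stop and reopen at the lobby
    }

    def step(state, event):
        """Apply one event; (state, event) pairs not in the table leave the state unchanged."""
        return TRANSITIONS.get((state, event), state)

    state = "door_open"
    for event in ["lobby_button", "door_closed", "arrived_at_lobby"]:
        state = step(state, event)
    print(state)   # door_open -- the table, not hand-written branching, drove the path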

It’s not quite Photoshop. The beauty of Photoshop, of course, is that the picture you’re manipulating on the screen is the final product. In model-based design, by contrast, the picture on your screen is more like a blueprint. Still, making software this way is qualitatively different than traditional programming. In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.


“Typically the main problem with software coding—and I’m a coder myself,” Bantégnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”

On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself. Too much is lost going from one to the other. The idea behind model-based design is to close the gap. The very same model is used both by system designers to express what they want and by the computer to automatically generate code.

Of course, for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to. “We have benefited from fortunately 20 years of initial background work,” Bantégnie says.

Esterel Technologies, which was acquired by ANSYS in 2012, grew out of research begun in the 1980s by the French nuclear and aerospace industries, who worried that as safety-critical code ballooned in complexity, it was getting harder and harder to keep it free of bugs. “I started in 1988,” says Emmanuel Ledinot, the Head of Scientific Studies for Dassault Aviation, a French manufacturer of fighter jets and business aircraft. “At the time, I was working on military avionics systems. And the people in charge of integrating the systems, and debugging them, had noticed that the number of bugs was increasing.” The 80s had seen a surge in the number of onboard computers on planes. Instead of a single flight computer, there were now dozens, each responsible for highly specialized tasks related to control, navigation, and communications. Coordinating these systems to fly the plane as data poured in from sensors and as pilots entered commands required a symphony of perfectly timed reactions. “The handling of these hundreds of and even thousands of possible events in the right order, at the right time,” Ledinot says, “was diagnosed as the main cause of the bug inflation.”

Ledinot decided that writing such convoluted code by hand was no longer sustainable. It was too hard to understand what it was doing, and almost impossible to verify that it would work correctly. He went looking for something new. “You must understand that to change tools is extremely expensive in a process like this,” he said in a talk. “You don’t take this type of decision unless your back is against the wall.”

He began collaborating with Gerard Berry, a computer scientist at INRIA, the French computing-research center, on a tool called Esterel—a portmanteau of the French for “real-time.” The idea behind Esterel was that while traditional programming languages might be good for describing simple procedures that happened in a predetermined order—like a recipe—if you tried to use them in systems where lots of events could happen at nearly any time, in nearly any order—like in the cockpit of a plane—you inevitably got a mess. And a mess in control software was dangerous. In a paper, Berry went as far as to predict that “low-level programming techniques will not remain acceptable for large safety-critical programs, since they make behavior understanding and analysis almost impracticable.”

 

Esterel was designed to make the computer handle this complexity for you. That was the promise of the model-based approach: Instead of writing normal programming code, you created a model of the system’s behavior—in this case, a model focused on how individual events should be handled, how to prioritize events, which events depended on which others, and so on. The model becomes the detailed blueprint that the computer would use to do the actual programming.
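As a rough illustration of what such a model might contain, here is a hypothetical sketch in Python rather than Esterel: the events, their priorities, and their handlers are written as data, and a small "generator" turns that data into the reaction code a programmer would otherwise write by hand. The event and handler names are invented.

```python
# A hypothetical event model in miniature: which events exist, how they are
# prioritized, and which reaction each one triggers. None of these names come
# from a real avionics system; they only illustrate the shape of the idea.

EVENT_MODEL = [
    {"event": "stall_warning",  "priority": 0, "handler": "limit_pitch"},
    {"event": "pilot_input",    "priority": 1, "handler": "apply_stick_command"},
    {"event": "autopilot_tick", "priority": 2, "handler": "hold_altitude"},
]

def generate_reactor(model):
    """'Code generation' in miniature: build a reaction function from the model."""
    ordered = sorted(model, key=lambda entry: entry["priority"])

    def react(pending_events, handlers):
        # Whatever arrived this cycle gets handled in priority order,
        # regardless of the order in which the events actually occurred.
        for entry in ordered:
            if entry["event"] in pending_events:
                handlers[entry["handler"]]()

    return react

react = generate_reactor(EVENT_MODEL)
react({"pilot_input", "stall_warning"},
      {"limit_pitch": lambda: print("limit pitch"),
       "apply_stick_command": lambda: print("apply stick command"),
       "hold_altitude": lambda: print("hold altitude")})
# prints "limit pitch" before "apply stick command": priority, not arrival order
```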

Ledinot and Berry worked for nearly 10 years to get Esterel to the point where it could be used in production. “It was in 2002 that we had the first operational software-modeling environment with automatic code generation,” Ledinot told me, “and the first embedded module in Rafale, the combat aircraft.” Today, the ANSYS SCADE product family (for “safety-critical application development environment”) is used to generate code by companies in the aerospace and defense industries, in nuclear power plants, transit systems, heavy industry, and medical devices. “My initial dream was to have SCADE-generated code in every plane in the world,” Bantégnie, the founder of Esterel Technologies, says, “and we’re not very far off from that objective.” Nearly all safety-critical code on the Airbus A380, including the system controlling the plane’s flight surfaces, was generated with ANSYS SCADE products.

Part of the draw for customers, especially in aviation, is that while it is possible to build highly reliable software by hand, it can be a Herculean effort. Ravi Shivappa, the VP of group software engineering at Meggitt PLC, an ANSYS customer which builds components for airplanes, like pneumatic fire detectors for engines, explains that traditional projects begin with a massive requirements document in English, which specifies everything the software should do. (A requirement might be something like, “When the pressure in this section rises above a threshold, open the safety valve, unless the manual-override switch is turned on.”) The problem with describing the requirements this way is that when you implement them in code, you have to painstakingly check that each one is satisfied. And when the customer changes the requirements, the code has to be changed, too, and tested extensively to make sure that nothing else was broken in the process.
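To see what implementing one such requirement by hand involves, here is a minimal Python sketch; the threshold value and the function name are invented for illustration. The prose becomes a few lines of code, and every requirement like it needs its own tests, re-run whenever anything changes.

```python
# Hand-written implementation of one prose requirement:
# "When the pressure in this section rises above a threshold, open the
#  safety valve, unless the manual-override switch is turned on."
# The threshold value and names are hypothetical.

PRESSURE_THRESHOLD = 300.0  # invented value, for illustration only

def safety_valve_should_open(pressure: float, manual_override: bool) -> bool:
    return pressure > PRESSURE_THRESHOLD and not manual_override

# Each requirement then needs its own checks, maintained by hand:
assert safety_valve_should_open(350.0, manual_override=False)
assert not safety_valve_should_open(350.0, manual_override=True)
assert not safety_valve_should_open(250.0, manual_override=False)
```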

The cost is compounded by exacting regulatory standards. The FAA is fanatical about software safety. The agency mandates that every requirement for a piece of safety-critical software be traceable to the lines of code that implement it, and vice versa. So every time a line of code changes, it must be retraced to the corresponding requirement in the design document, and you must be able to demonstrate that the code actually satisfies the requirement. The idea is that if something goes wrong, you’re able to figure out why; the practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
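A toy version of that traceability bookkeeping, sketched in Python with invented requirement IDs and function names, hints at why it is so labor-intensive: every requirement must map to code, every piece of code must map back to a requirement, and the whole mapping has to be re-checked after every change.

```python
# A toy requirements-traceability check. The requirement IDs and code-unit
# names are invented; real projects track this at the level of individual
# lines of code, across thousands of requirements.

REQUIREMENTS = {"REQ-001", "REQ-002", "REQ-003"}

# code unit -> requirements it claims to implement
TRACE = {
    "safety_valve_should_open": {"REQ-001"},
    "close_valve_on_shutdown":  {"REQ-002"},
}

def untraced_requirements():
    """Requirements that no code claims to implement."""
    covered = set().union(*TRACE.values()) if TRACE else set()
    return REQUIREMENTS - covered

def untraced_code():
    """Code units that do not point back at any known requirement."""
    return {unit for unit, reqs in TRACE.items() if not reqs & REQUIREMENTS}

print(untraced_requirements())  # {'REQ-003'}: a gap an auditor would flag
print(untraced_code())          # set(): every unit traces back to something
```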

As Bantégnie explains, the beauty of having a computer turn your requirements into code, rather than a human, is that you can be sure—in fact you can mathematically prove—that the generated code actually satisfies those requirements. Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”

Still, most software, even in the safety-obsessed world of aviation, is made the old-fashioned way, with engineers writing their requirements in prose and programmers coding them up in a programming language like C. As Bret Victor made clear in his essay, model-based design is relatively unusual. “A lot of people in the FAA think code generation is magic, and hence call for greater scrutiny,” Shivappa told me.

Most programmers feel the same way. They like code. At least they understand it. Tools that write your code for you and verify its correctness using the mathematics of “finite-state machines” and “recurrent systems” sound esoteric and hard to use, if not just too good to be true.

It is a pattern that has played itself out before. Whenever programming has taken a step away from the writing of literal ones and zeros, the loudest objections have come from programmers. Margaret Hamilton, a celebrated software engineer on the Apollo missions—in fact the coiner of the phrase “software engineering”—told me that during her first year at the Draper lab at MIT, in 1964, she remembers a meeting where one faction was fighting the other about transitioning away from “some very low machine language,” as close to ones and zeros as you could get, to “assembly language.” “The people at the lowest level were fighting to keep it. And the arguments were so similar: ‘Well how do we know assembly language is going to do it right?’”

“Guys on one side, their faces got red, and they started screaming,” she said. She said she was “amazed how emotional they got.”

Emmanuel Ledinot, of Dassault Aviation, pointed out that when assembly language was itself phased out in favor of the programming languages still popular today, like C, it was the assembly programmers who were skeptical this time. No wonder, he said, that “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”

The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”

Which sounds almost like a joke, but for proponents of the model-based approach, it’s an important point: We already know how to make complex software reliable, but in so many places, we’re choosing not to. Why?

In 2011, Chris Newcombe had been working at Amazon for almost seven years, and had risen to be a principal engineer. He had worked on some of the company’s most critical systems, including the retail-product catalog and the infrastructure that managed every Kindle device in the world. He was a leader on the highly prized Amazon Web Services team, which maintains cloud servers for some of the web’s biggest properties, like Netflix, Pinterest, and Reddit. Before Amazon, he’d helped build the backbone of Steam, the world’s largest online-gaming service. He is one of those engineers whose work quietly keeps the internet running. The products he’d worked on were considered massive successes. But all he could think about was that buried deep in the designs of those systems were disasters waiting to happen.

“Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”

Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.

This is why he was so intrigued when, in the appendix of a paper he’d been reading, he came across a strange mixture of math and code—or what looked like code—that described an algorithm in something called “TLA+.” The surprising part was that this description was said to be mathematically precise: An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.

TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy (say, if you were programming an ATM, a constraint might be that you can never withdraw the same money twice from your checking account). TLA+ then exhaustively checks that your logic does, in fact, satisfy those constraints. If not, it will show you exactly how they could be violated.
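TLA+ has its own notation and its own model checker (TLC), but the core idea can be sketched in a few lines of Python: enumerate every state a small model can reach and check that a constraint holds in all of them. The ATM-like model below is invented for illustration and is far cruder than a real TLA+ specification.

```python
# A crude Python analogue of exhaustive model checking: explore every state
# the model can reach and verify an invariant in each one. The "ATM" model
# is invented for illustration; a real spec would be written in TLA+ and
# checked with TLC.

from collections import deque

INITIAL_BALANCE = 3  # keep the state space tiny

def next_states(balance):
    """Every state reachable in one step: withdraw any amount up to the balance."""
    return [balance - amount for amount in range(1, balance + 1)]

def invariant(balance):
    """The constraint: the account can never go negative."""
    return balance >= 0

def check():
    seen, queue = set(), deque([INITIAL_BALANCE])
    while queue:
        state = queue.popleft()
        if state in seen:
            continue
        seen.add(state)
        if not invariant(state):
            return f"violation in state {state}"
        queue.extend(next_states(state))
    return f"invariant holds in all {len(seen)} reachable states"

print(check())
# If next_states allowed over-withdrawal (say, two withdrawals racing against
# the same stale balance), the loop would report the exact state that breaks
# the invariant, much as TLC shows a concrete violating trace.
```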

The language was invented by Leslie Lamport, a Turing Award–winning computer scientist. With a big white beard and scruffy white hair, and kind eyes behind large glasses, Lamport looks like he might be one of the friendlier professors at the American Hogwarts. Now at Microsoft Research, he is known as one of the pioneers of the theory of “distributed systems,” which describes any computer system made of multiple parts that communicate with each other. Lamport’s work laid the foundation for many of the systems that power the modern web.

For Lamport, a major reason today’s software is so full of bugs is that programmers jump straight into writing code. “Architects draw detailed plans before a brick is laid or a nail is hammered,” he wrote in an article. “But few programmers write even a rough sketch of what their programs will do before they start coding.” Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,” he says. Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.

Newcombe and his colleagues at Amazon would go on to use TLA+ to find subtle, critical bugs in major systems, including bugs in the core algorithms behind S3, regarded as perhaps the most reliable storage engine in the world. It is now used widely at the company. In the tiny universe of people who had ever used TLA+, their success was not so unusual. An intern at Microsoft used TLA+ to catch a bug that could have caused every Xbox in the world to crash after four hours of use. Engineers at the European Space Agency used it to rewrite, with 10 times less code, the operating system of a probe that was the first to ever land softly on a comet. Intel uses it regularly to verify its chips.

But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols. For Lamport, this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”

Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell: The stakes keep rising, but programmers aren’t stepping up—they haven’t developed the chops required to handle increasingly complex problems. “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”

Newcombe isn’t so sure that it’s the programmer who is to blame. “I’ve heard from Leslie that he thinks programmers are afraid of math. I’ve found that programmers aren’t aware—or don’t believe—that math can help them handle complexity. Complexity is the biggest challenge for programmers.” The real problem in getting people to use TLA+, he said, was convincing them it wouldn’t be a waste of their time. Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.

Most programmers who took computer science in college have briefly encountered formal methods. Usually they’re demonstrated on something trivial, like a program that counts up from zero; the student’s job is to mathematically prove that the program does, in fact, count up from zero.
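In Python terms, the classroom exercise looks something like the sketch below (the rendering is hypothetical): a loop so simple that its correctness argument is a short induction on the loop invariant written in the comments, which is exactly why the exercise leaves the impression that formal methods are toys.

```python
# The kind of trivial program used to introduce formal methods: count up from
# zero to n. The "proof" is an induction on the invariant noted in the comments.

def count_up(n: int) -> int:
    i = 0
    # Invariant: 0 <= i <= n. True initially (i == 0, assuming n >= 0);
    # each iteration adds 1 only while i < n, so the invariant is preserved.
    while i < n:
        i += 1
    # On exit the loop condition is false (i >= n) and the invariant still
    # holds (i <= n), so i == n: the program really does count up to n.
    return i

assert count_up(0) == 0
assert count_up(5) == 5
```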

“I needed to change people’s perceptions on what formal methods were,” Newcombe told me. Even Lamport himself didn’t seem to fully grasp this point: Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.

For one thing, he said that when he was introducing colleagues at Amazon to TLA+ he would avoid telling them what it stood for, because he was afraid the name made it seem unnecessarily forbidding: “Temporal Logic of Actions” has exactly the kind of highfalutin ring to it that plays well in academia, but puts off most practicing programmers. He tried also not to use the terms “formal,” “verification,” or “proof,” which reminded programmers of tedious classroom exercises. Instead, he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.

He has since left Amazon for Oracle, where he’s been able to convince his new colleagues to give TLA+ a try. For him, using these tools is now a matter of responsibility. “We need to get better at this,” he said.

“I’m self-taught, been coding since I was nine, so my instincts were to start coding. That was my only—that was my way of thinking: You’d sketch something, try something, you’d organically evolve it.” In his view, this is what many programmers today still do. “They google, and they look on Stack Overflow” (a popular website where programmers answer each other’s technical questions) “and they get snippets of code to solve their tactical concern in this little function, and they glue it together, and iterate.”

“And that’s completely fine until you run smack into a real problem.”

In the summer of 2015, a pair of American security researchers, Charlie Miller and Chris Valasek, convinced that car manufacturers weren’t taking software flaws seriously enough, demonstrated that a 2014 Jeep Cherokee could be remotely controlled by hackers. They took advantage of the fact that the car’s entertainment system, which has a cellular connection (so that, for instance, you can start your car with your iPhone), was connected to more central systems, like the one that controls the windshield wipers, steering, acceleration, and brakes (so that, for instance, you can see guidelines on the rearview screen that respond as you turn the wheel). As proof of their attack, which they developed on nights and weekends, they hacked into Miller’s car while a journalist was driving it on the highway, and made it go haywire; the journalist, who knew what was coming, panicked when they cut the engines, forcing him to a slow crawl on a stretch of road with no shoulder to escape to.

Although they didn’t actually create one, they showed that it was possible to write a clever piece of software, a “vehicle worm,” that would use the onboard computer of a hacked Jeep Cherokee to scan for and hack others; had they wanted to, they could have had simultaneous access to a nationwide fleet of vulnerable cars and SUVs. (There were at least five Fiat Chrysler models affected, including the Jeep Cherokee.) One day they could have told them all to, say, suddenly veer left or cut the engines at high speed.

“We need to think about software differently,” Valasek told me. Car companies have long assembled their final product from parts made by hundreds of different suppliers. But where those parts were once purely mechanical, they now, as often as not, come with millions of lines of code. And while some of this code—for adaptive cruise control, for auto braking and lane assist—has indeed made cars safer (“The safety features on my Jeep have already saved me countless times,” says Miller), it has also created a level of complexity that is entirely new. And it has made possible a new kind of failure.

“There are lots of bugs in cars,” Gerard Berry, the French researcher behind Esterel, said in a talk. “It’s not like avionics—in avionics it’s taken very seriously. And it’s admitted that software is different from mechanics.” The automotive industry is perhaps among those that haven’t yet realized they are actually in the software business.

“We don’t in the automaker industry have a regulator for software safety that knows what it’s doing,” says Michael Barr, the software expert who testified in the Toyota case. NHTSA, he says, “has only limited software expertise. They’ve come at this from a mechanical history.” The same regulatory pressures that have made model-based design and code generation attractive to the aviation industry have been slower to come to car manufacturing. Emmanuel Ledinot, of Dassault Aviation, speculates that there might be economic reasons for the difference, too. Automakers simply can’t afford to increase the price of a component by even a few cents, since it is multiplied so many millionfold; the computers embedded in cars therefore have to be slimmed down to the bare minimum, with little room to run code that hasn’t been hand-tuned to be as lean as possible. “Introducing model-based software development was, I think, for the last decade, too costly for them.”

One suspects the incentives are changing. “I think the autonomous car might push them,” Ledinot told me—“ISO 26262 and the autonomous car might slowly push them to adopt this kind of approach on critical parts.” (ISO 26262 is a safety standard for cars published in 2011.) Barr said much the same thing: In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.

“Computing is fundamentally invisible,” Gerard Berry said in his talk. “When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.”

“So that’s a big problem.”

https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/

How Do Consumers Really Feel About Facial Recognition? – eMarketer


Last month Apple unveiled the iPhone X—the company’s most expensive device to date, which comes with a host of new features, including facial recognition technology.

According to Apple, the device uses light projection and an infrared camera to create a 3-D map of a user’s face. While new to Apple, facial identification technology has been used by other manufacturers, such as Apple rival Samsung, for some time now.

There’s no denying that smartphones with biometrics will soon be the norm. But consumers are somewhat split when it comes to mobile devices with facial recognition capability, data from Morning Consult reveals.

Its survey of US internet users in September 2017 showed that 34% of respondents had a favorable view of facial recognition software in personal devices. In contrast, 39% of those polled felt the opposite way. And over a quarter (26%) said they either weren’t sure how they felt, or had no opinion about it.

Women were more likely than men to feel at least somewhat unfavorably toward this type of technology. For example, while 30% of women surveyed had at least a somewhat favorable view of facial recognition software, 41% expressed the opposite sentiment. Men, on the other hand, were nearly split in their attitudes.

Many consumers are likely not sold on facial recognition because the technology creeps them out.

According to a June 2017 survey from RichRelevance, facial recognition was one of the “creepiest” technologies out there. Indeed, over two-thirds of US internet users it polled found it creepy.

https://www.emarketer.com/Article/How-Do-Consumers-Really-Feel-About-Facial-Recognition/1016556

Disney Dark Kingdom: Rumored Disney Villain Theme Park Plans Revealed – Thrillist


“Anyone know what will come of The Villains Park (Dark Kingdom) concepts?”

Three years ago, a Disney fan hoping for updates about a supposed all-villains theme park that Disney would nestle alongside its other properties in Central Florida posted that inquiry to Reddit. In response, someone linked to an equally vague post on the WDW Info webpage about Disney parks that were never built. More posts popped up in 2016, fueling the mystery. Earlier this year, another Reddit user linked to a post on the Berlin-based news-and-rumor site MoviePilot, and just this June, the tourism site Travel Whip revved up the rumor. Again and again the concept of a Dark Kingdom park was reiterated, and again and again the response attracted believers and skeptics alike. Round and round the discussion went, like the Mad Tea Cups and Dumbo the Flying Elephant rides at Disneyland.

Every post devoted to the Dark Kingdom features the same basic skeleton: Disney had (or has?) plans to carve out part of its Walt Disney World complex and hand it to the brand’s rogues gallery. Instead of being relegated to the parks’ Halloween celebrations, villains like Ursula, Captain Hook, Maleficent, Gaston, the Evil Queen, and even Oogie Boogie from The Nightmare Before Christmas would get their chance to shine year-round in ghoulish shows and attractions.

Except that this park never existed — not even in some rough conceptual form. As Disney historian Jim Hill told me, “The idea of the Dark Kingdom seems to have basically come from the internet, with no basis on anything concrete.” Or, as one former Imagineer familiar with actual proposed projects and the online rumors puts it, “An entirely villains-centered park is complete bullshit.”

The story of the Dark Kingdom isn’t entirely unfounded; rather, it’s the unlikely synthesis of hearsay, years of disparate (and wholly unrelated) projects, snowballing half-truths, wispy rumors, an aggressive consumer-product line that’s captured the imagination of acolytes, and Disney’s inability to formally comment on park plans. And the legend’s origin traces back to a single day in Disney history: July 11, 1986.

The freedom to be villainous
Tokyo Disneyland, Disney’s first international theme park, opened on April 15, 1983 and swiftly established itself as something different than its California namesake. While largely designed by Walt Disney Imagineering, the secretive group of engineers, artists, and technicians that work out of a rambling collection of buildings in Glendale, California, Tokyo Disneyland wasn’t technically owned by the Walt Disney Company, but by Oriental Land Company, a Japanese conglomerate of hotels, transportation outfits, and restaurants. Under OLC, the Imagineers weren’t hampered by the financial constraints or synergistic obligations that could foil ambitious projects elsewhere.

Cinderella Castle Mystery Tour, which opened on July 11, 1986, was an emblem of that outside-the-box thinking. “The idea was to do a Haunted Mansion-type experience but not the Haunted Mansion,” former Imagineer Mark Eades, who helped design the amazing video effects in the attraction, told me.

The Mystery Tour mirrored the experience of the Sleeping Beauty Castle tour in Disneyland, but with a twist. Cast members warning participants that the experience might be too intense was just the beginning; as a cheery tour guide ushers tourists through the castle, the Magic Mirror from Snow White and the Seven Dwarfs appears, irked by the guide’s disparaging remarks about villains. In a flash, the portraits on the walls mutate, turning Pinocchio’s face into a painting of Stromboli. A secret passageway opens up, leading the tour (and your terrified tour guide) through a number of show scenes: the Evil Queen’s laboratory (complete with poison-dipped apple); ghastly spirits; the Chernabog from the “Bald Mountain” segment of Fantasia (crowd-pleaser!); and those creepy gnomish goons from Sleeping Beauty.

At the very end of the tour you come face-to-face with the Horned King, the big bad from The Black Cauldron, the notorious animated flop that had been released in America less than a year before the Japanese Mystery Tour opened. The Horned King Audio-Animatronic figure was one of the more sophisticated figures in existence, and visitors used light swords to “vanquish” him, a welcome lesson that good ultimately conquers evil. Cinderella Castle Mystery Tour also taught park-goers that spending time with the Disney villains could be really, really fun.

Clearly, someone at the company noticed.

Bringing the evil stateside
The first attempt at stateside villainy came with the establishment of Fantasyland’s Disney Villains shop, which opened in July 1991 to sell spooky merchandise centered on the heroes’ villainous counterparts. The shop closed in 1997, when the space was given over to merchandise from The Hunchback of Notre Dame — yes, Disney dedicated prime real estate to a store exclusively hawking Quasimodo plush — but returned in 1998 as Villain’s Lair. A second shop, Villains in Vogue, also opened on September 14, 1998 at Disney’s Hollywood Studios (then known as Disney-MGM Studios) in Walt Disney World.

Around the same time, Disney plotted its next move for space previously occupied by Walt Disney World’s 20,000 Leagues Under the Sea: Submarine Voyage, a sprawling attraction and operational nightmare that broke down for good in 1994. The open real estate put Imagineering (in California) and Walt Disney World operations (in Florida) at constant odds, with team members from both sides proposing ideas (including a Harry Potter-themed land back when Disney had temporarily secured the theme park rights from J.K. Rowling). The two attractions eventually built in that space – a character greeting space for Ariel from The Little Mermaid and a play area themed to Winnie the Pooh – were safe options. But one rejected idea for the 20,000 Leagues space would warp the Dark Kingdom narrative forever.

Villain Mountain was pitched as Magic Kingdom’s second flume ride after Splash Mountain. Riding Hades’ underworld boat from Hercules, guests would run into classic Disney villains before the grand finale: a run-in with a towering, Audio-Animatronic version of the winged demon Chernabog from Fantasia. A steep flume chute provided an escape route, returning the vessel to the safety of Fantasyland. The current internet rumor mill suggests that then-Disney-CEO Michael Eisner, nervously anticipating the opening of Universal Orlando’s second gate, Islands of Adventure, in the summer of 1999, adored the concept. True or not, it didn’t go beyond lavish concept art.

“[Villain Mountain] was just a concept,” said Eades. “I can’t emphasize that enough. People say, ‘They were going to build that,’ but they have no idea. It was never more than a concept.”

An ambitious Fantasyland expansion continued to fan Dark Kingdom rumors. In 2014, Disney unveiled the Seven Dwarfs Mine Train attraction, a family-friendly roller coaster with the most state-of-the-art Audio-Animatronics ever, including a truly uncanny version of Snow White’s Evil Queen (in old hag guise). Another staple of the new land was a character meet-and-greet with Gaston, who could be spotted outside of a tavern that bears his name. The villains were still there, but an attraction devoted exclusively to their exploits remained out of reach.

By the time that New Fantasyland opened, however, the villains had been commoditized and packaged into a single brand: Disney Villains. Disney Consumer Products chairman Andy Mooney, who devised the official Disney Princesses line after seeing a gaggle of girls dressed in homemade costumes standing in line at a Disney on Ice show, stepped in to integrate the darker take into the public consciousness. There weren’t as many constraints on the brand as with the Disney Princesses (who couldn’t look at one another or appear to be in the same physical location); acting rascally was part of their commercial identity. The grouping maintained core Disney Villains like the Evil Queen and Hook, alongside newer staples like Jafar and Ursula. More recently, Oogie Boogie from Tim Burton’s The Nightmare Before Christmas has joined the ranks, alongside a flood of villain merchandise, including officially licensed handbags at Hot Topic and a cavalcade of T-shirts sold in the New Orleans Square section of Disneyland. (An official Instagram account for the brand was started earlier this year.)

With the audience primed, surely the all-villains park must be on the way? The answer is no, although a pair of projects in the late-2000s would certainly confuse those who desperately wanted these rumors to be true.

https://www.thrillist.com/entertainment/nation/disney-dark-kingdom-villain-theme-park-revealed

There Never Was a Real Tulip Fever | History | Smithsonian


When tulips came to the Netherlands, all the world went mad. A sailor who mistook a rare tulip bulb for an onion and ate it with his herring sandwich was charged with a felony and thrown in prison. A bulb named Semper Augustus, notable for its flame-like white and red petals, sold for more than the cost of a mansion in a fashionable Amsterdam neighborhood, complete with coach and garden. As the tulip market grew, speculation exploded, with traders offering exorbitant prices for bulbs that had yet to flower. And then, as any financial bubble will do, the tulip market imploded, sending traders of all incomes into ruin.

For decades, economists have pointed to 17th-century tulipmania as a warning about the perils of the free market. Writers and historians have reveled in the absurdity of the event. The incident even provides the backdrop for the new film Tulip Fever, based on a novel of the same name by Deborah Moggach.

The only problem: none of these stories are true.

What really happened and how did the story of Dutch tulip speculation get so distorted? Anne Goldgar discovered the historical reality when she dug into the archives to research her book, Tulipmania: Money, Honor, and Knowledge in the Dutch Golden Age.

“I always joke that the book should be called ‘Tulipmania: More Boring Than You Thought,’” says Goldgar, a professor of early modern history at King’s College London. “People are so interested in this incident because they think they can draw lessons from it. I don’t think that’s necessarily the case.”

But before you even attempt to apply what happened in the Netherlands to more recent bubbles—the South Sea Bubble in 1700s England, the 19th-century railway bubble, the dot-com bubble and bitcoin are just a few comparisons Goldgar has seen—you have to understand Dutch society at the turn of the 17th century.

For starters, the country experienced a major demographic shift during its war for independence from Spain, which began in the 1560s and continued into the 1600s. It was during this period that merchants arrived in port cities like Amsterdam, Haarlem and Delft and established trading outfits, including the famous Dutch East India Company. This explosion in international commerce brought enormous fortune to the Netherlands, despite the war. In their newly independent nation, the Dutch were mainly led by urban oligarchies comprised of wealthy merchants, unlike other European countries of the era, which were controlled by landed nobility. As Goldgar writes in her book, “The resultant new faces, new money and new ideas helped to revolutionize the Dutch economy in the late 16th century.”

As the economy changed, so, too, did social interactions and cultural values. A growing interest in natural history and a fascination with the exotic among the merchant class meant that goods from the Ottoman Empire and farther east fetched high prices. The influx of these goods also drove men of all social classes to acquire expertise in newly in-demand areas. One example Goldgar gives is fish auctioneer Adriaen Coenen, whose watercolor-illustrated manuscript Whale Book allowed him to actually meet the President of Holland. And when Dutch botanist Carolus Clusius established a botanical garden at the University of Leiden in the 1590s, the tulip quickly rose to a place of honor.

Originally found growing wild in the valleys of the Tien Shan Mountains (at the border where China and Tibet meet Afghanistan and Russia), tulips were cultivated in Istanbul as early as 1055. By the 15th century, Sultan Mehmed II of the Ottoman Empire had so many flowers in his 12 gardens that he required a staff of 920 gardeners. Tulips were among the most prized flowers, eventually becoming a symbol of the Ottomans, writes gardening correspondent for The Independent Anna Pavord in The Tulip.

The Dutch learned that tulips could be grown from seeds or buds that grew on the mother bulb; a bulb that grows from seed would take 7 to 12 years before flowering, but a bulb itself could flower the very next year. Of particular interest to Clusius and other tulip traders were “broken bulbs”—tulips whose petals showed a striped, multicolor pattern rather than a single solid color. The effect was unpredictable, but the growing demand for these rare, “broken bulb” tulips led naturalists to study ways to reproduce them. (The pattern was later discovered to be the result of a mosaic virus that actually makes the bulbs sickly and less likely to reproduce.) “The high market price for tulips to which the current version of tulipmania refers were prices for particularly beautiful broken bulbs,” writes economist Peter Garber. “Since breaking was unpredictable, some have characterized tulipmania among growers as a gamble, with growers vying to produce better and more bizarre variegations and feathering.”

After all the money Dutch speculators spent on the bulbs, they only produced flowers for about a week—but for tulip lovers, that week was a glorious one. “As luxury objects, tulips fit well into a culture of both abundant capital and new cosmopolitanism,” Goldgar writes. Tulips required expertise, an appreciation of beauty and the exotic, and, of course, an abundance of money.

Here’s where the myth comes into play. According to popular legend, the tulip craze took hold of all levels of Dutch society in the 1630s. “The rage among the Dutch to possess them was so great that the ordinary industry of the country was neglected, and the population, even to its lowest dregs, embarked in the tulip trade,” wrote Scottish journalist Charles Mackay in his popular 1841 work Extraordinary Popular Delusions and the Madness of Crowds. According to this narrative, everyone from the wealthiest merchants to the poorest chimney sweeps jumped into the tulip fray, buying bulbs at high prices and selling them for even more. Companies formed just to deal with the tulip trade, which reached a fever pitch in late 1636. But by February 1637, the bottom fell out of the market. More and more people defaulted on their agreement to buy the tulips at the prices they’d promised, and the traders who had already made their payments were left in debt or bankrupted. At least that’s what has always been claimed.

In fact, “There weren’t that many people involved and the economic repercussions were pretty minor,” Goldgar says. “I couldn’t find anybody that went bankrupt. If there had been really a wholesale destruction of the economy as the myth suggests, that would’ve been a much harder thing to face.”

That’s not to say that everything about the story is wrong; merchants really did engage in a frantic tulip trade, and they paid incredibly high prices for some bulbs. And when a number of buyers announced they couldn’t pay the high price previously agreed upon, the market did fall apart and cause a small crisis—but only because it undermined social expectations.

“In this case it was very difficult to deal with the fact that almost all of your relationships are based on trust, and people said, ‘I don’t care that I said I’m going to buy this thing, I don’t want it anymore and I’m not going to pay for it.’ There was really no mechanism to make people pay because the courts were unwilling to get involved,” Goldgar says.

But the trade didn’t affect all levels of society, and it didn’t cause the collapse of industry in Amsterdam and elsewhere. As Garber, the economist, writes, “While the lack of data precludes a solid conclusion, the results of the study indicate that the bulb speculation was not obvious madness.”

So if tulipmania wasn’t actually a calamity, why was it made out to be one? We have tetchy Christian moralists to blame for that. With great wealth comes great social anxiety, or as historian Simon Schama writes in The Embarrassment of Riches: An Interpretation of Dutch Culture in the Golden Age, “The prodigious quality of their success went to their heads, but it also made them a bit queasy.” All the outlandish stories of economic ruin, of an innocent sailor thrown in prison for eating a tulip bulb, of chimney sweeps wading into the market in hopes of striking it rich—those come from propaganda pamphlets published by Dutch Calvinists worried that the tulip-propelled consumerism boom would lead to societal decay. Their insistence that such great wealth was ungodly has even stayed with us to this day.

“Some of the stuff hasn’t lasted, like the idea that God punishes people who are overreaching by causing them to have the plague. That’s one of the things people said in the 1630s,” Goldgar says. “But the idea that you get punished if you overreach? You still hear that. It’s all, ‘pride goes before the fall.’”

Goldgar doesn’t begrudge novelists and filmmakers for taking liberties with the past. It’s only when historians and economists neglect to do their research that she gets irked. She herself didn’t set out to be a mythbuster—she only stumbled upon the truth when she sat down to look through old documentation of the popular legend. “I had no way of knowing this existed before I started reading these documents,” Goldgar says. “That was an unexpected treasure.”

http://www.smithsonianmag.com/history/there-never-was-real-tulip-fever-180964915/

Why Bacteria in Space Are Surprisingly Tough to Kill | Smart News | Smithsonian


Bacteria in space may sound like the title of a bad science fiction movie, but it’s actually a new experiment that tests how the weightlessness of space can change microbes’ antibiotic resistance.

While the vacuum of space may be a sterile environment, the ships (and eventually habitats) humans travel and live in are rife with microbial life. And keeping these microbes in check will be vital for the health of the crew and even the equipment, reports George Dvorsky for Gizmodo.

Past research has shown that bacteria that would normally collapse in the face of standard antibiotics on Earth seem to resist those same drugs much more effectively in the microgravity of space, and even appear more virulent than normal. To figure out how weightlessness gives bacteria a defensive boost, samples of E. coli took a trip to the International Space Station in 2014 so astronauts could experiment with antibiotics.

Now, in a new study published this week in the journal Frontiers in Microbiology, researchers demonstrate that microgravity gives bacteria some nifty tricks that make them a lot less susceptible to antibiotics. Their main defense: getting smaller.

The E. coli in space showed a 73 percent reduction in their volume, giving the bacteria much less surface area that can be exposed to antibiotic molecules, Dvorsky reports. Along with this shrinkage, the cell membranes of the E. coli grew at least 25 percent thicker, making it even harder for any antibiotic molecules to pass through them. And the defense mechanisms weren’t only at the individual level—the E. coli also showed a greater propensity for growing together in clumps, leaving the bacteria on the edges open to danger, but insulating those within from exposure to the antibiotics.

All of these differences allowed the E. coli on the International Space Station to grow to 13 times the population of the same bacteria grown on Earth under the same conditions, according to the study. And understanding why and how these defense mechanisms form could help doctors better prevent the scourge of antibiotic resistance here on Earth.

Perhaps even more terrifying, compared to the bacteria grown in the same conditions on Earth, the space-bound E. coli developed fluid-filled sacs called vesicles on their cell membranes, giving them tools that can make them even better at infecting other cells. This means that astro-bacteria could make people ill more easily, creating an infection that is harder to treat.

As people head further out into space, many are still afraid of what will happen when we meet alien bacterial life. But travelers into the great beyond may also need to keep a close eye out for the bacteria we already thought we knew.

http://www.smithsonianmag.com/smart-news/bacteria-space-are-harder-kill-180964887/