Collection of letters by codebreaker Alan Turing found in filing cabinet | Science | The Guardian

A lost collection of nearly 150 letters from the codebreaker Alan Turing has been uncovered in an old filing cabinet at the University of Manchester.

The correspondence, which has not seen the light of day for at least 30 years, contains very little about Turing’s tortured personal life. It does, however, give an intriguing insight into his views on America.

In response to an invitation to speak at a conference in the US in April 1953, Turing replied that he would rather not attend: “I would not like the journey, and I detest America.”

The letter, sent to Donald Mackay, a physicist at King’s College London, does not give any further explanation for Turing’s forthright views on America, nor do these views feature in any of the other 147 letters discovered earlier this year.

The correspondence, dating from early 1949 to Turing’s death in 1954, was found by chance when an academic cleared out an old filing cabinet in a storeroom at the University of Manchester. Turing was deputy director of the university’s computing laboratory from 1948, after his heroic wartime codebreaking at Bletchley Park.

Turing, the visionary mathematician who broke the Nazis’ second world war Enigma code, is regarded today as the father of modern computing. His later life was overshadowed by his conviction for gross indecency and his death aged 41 from cyanide poisoning, although he was granted a posthumous pardon by the Queen in 2013. His life was depicted in the 2014 film The Imitation Game.

Prof Jim Miles, of the university’s school of computer science, said he was amazed to stumble upon the documents, contained in an ordinary-looking red paper file with “Alan Turing” scrawled on it.

“When I first found it I initially thought: ‘That can’t be what I think it is,’ but a quick inspection showed it was a file of old letters and correspondence by Alan Turing,” he said.

“I was astonished such a thing had remained hidden out of sight for so long. No one who now works in the school or at the university knew they even existed. It really was an exciting find and it is a mystery as to why they had been filed away.”

The collection focuses mainly on Turing’s academic research, including his work on groundbreaking areas in AI, computing and mathematics, and invitations to lecture at some of America’s best-known universities including the Massachusetts Institute of Technology.

It contains a single letter from GCHQ, for whom Turing worked during the war, asking the mathematician in 1952 if he could supply a photograph of himself for an official history of Bletchley Park that was being compiled by the American cryptographer William Friedman. In his reply to Eric Jones, GCHQ’s then director, Turing said he would send a picture for the “American rogues gallery”.

The collection also contains a handwritten draft BBC radio programme on artificial intelligence, titled “Can machines think?” from July 1951. The documents were sorted, catalogued and stored by the University of Manchester archivist James Peters and are now available to search online.

Peters said: “This is a truly unique find. Archive material relating to Turing is extremely scarce, so having some of his academic correspondence is a welcome and important addition to our collection.

“There is very little in the way of personal correspondence, and no letters from Turing family members. But this still gives us an extremely interesting account and insight into his working practices and academic life whilst he was at the University of Manchester.”

He added: “The letters mostly confirm what is already known about Turing’s work at Manchester, but they do add an extra dimension to our understanding of the man himself and his research.

“As there is so little actual archive on this period of his life, this is a very important find in that context. There really is nothing else like it.”

https://www.theguardian.com/science/2017/aug/27/collection-letters-codebreaker-alan-turing-found-filing-cabinet

For Centuries, Readers Annotated Books With Tiny Drawings of Hands – Atlas Obscura

In the list of rarely-used punctuation marks—amid the interrobang (‽), hedera (❧), lozenge (◊), and asterism (⁂)—the manicule is a pointedly unique symbol. Quite literally: it takes the form of a hand with an outstretched index finger, gesturing towards a particularly pertinent piece of text.

Although manicules are still visible today in old signage and retro décor, their heyday was in medieval and Renaissance Europe.

Despite its centuries-long popularity, the first-ever use of a manicule is surprisingly difficult to pinpoint. They were reportedly used in the Domesday Book of 1086, a record of land ownership in England and Wales, but widespread use began around the 12th century. The name comes from the Latin word manicula—little hand—but the punctuation mark has gone by other names, including bishop’s fist, pointing hand, digit, and fist.

As far as punctuation marks go, the manicule’s function was fairly self-explanatory. Usually drawn in the margin of a page (and sometimes between columns of text or sentences), it was a way for the reader to note a particularly significant paragraph of text. They were essentially the medieval version of a highlighter. Although mainly used by readers, occasionally a scribe or a printer would draw a manicule to indicate a new section in a book.

The use and dynamic of manicules changed once books began to be printed. This new technology allowed writers and publishers to highlight what they believed to be significant. As Keith Houston notes in his book Shady Characters: The Secret Life of Punctuation, Symbols and Other Typographical Marks, “the margin, once the reader’s workspace and sketchbook, was gradually colonized by writers seeking to provide their own explanatory notes or commentaries.”

Despite its simplicity, the style of the manicule could vary. Some had elaborate sleeves, some were strangely proportioned with extra-long fingers, like the one leading this article, and some were otherwise anatomically incorrect. The Italian Renaissance scholar Petrarch drew manicules that consisted of five fingers and no thumb, which is surprising, seeing as he would have been looking at the very thing he was drawing. (A five-fingered hand, it’s worth noting, would not have been the strangest thing to adorn the margins of a medieval manuscript.)

By the 19th century, manicules had moved beyond books and into signage, advertisements, and posters as a way of directing the eye. They pointed the way to trains and pubs. In the “Wanted” poster for John Wilkes Booth following his assassination of President Lincoln, a manicule gestured towards the reward announcement. Manicules were even used on gravestones (pointing up toward heaven, of course).

While the popularity of manicules faded before the arrival of the 20th century, they aren’t completely extinct. A mutated version existed in early versions of the cursor, in the form of an upwards-pointing clenched fist. There are manicule emojis that point left, right, up and down. If you look hard enough, you can even find one in the Wingdings font.

http://www.atlasobscura.com/articles/manicules

The Dying Art of Courtroom Illustration – Atlas Obscura

On a recent gray August Monday at the Brooklyn Federal Courthouse, illustrator Jane Rosenberg found herself craning her neck and scrabbling for her pastels. The infamous Mexican drug lord Joaquín “El Chapo” Guzmán was making his first public appearance in months, and it was Rosenberg’s job to capture the scene for Reuters. Seated next to her, with a brush pen and colored pencils, fellow courtroom artist Elizabeth Williams was also trying to capture his likeness, for the Wall Street Journal.

The artists had been waiting outside the door of the courtroom since 7:30 a.m., long before it opened, to get the best possible seats. When they finally made it inside, El Chapo was present for just 15 minutes. Williams finished one drawing; Rosenberg “very loosely” started two.

“He walked in the courtroom and looked over at his wife and children, and waved,” Rosenberg says. “Then he sat down in the chair by his lawyer, and stood up, started making arguments.” She sketched furiously to complete the drawings later, without any visual reference. “I couldn’t remember if he had on the same color bottoms as his top,” she says. Someone else in the courtroom later confirmed that he had.

Rosenberg is a self-proclaimed “dinosaur,” one of the last courtroom artists working today. Both she and Williams began working in the profession in 1980. Over nearly 40 years, Rosenberg has sketched a veritable who’s-who of celebrity courtroom drama: Woody Allen, Bill Cosby, Martha Stewart, Tom Brady.

Courtroom art has been a feature of the American media landscape since just after the high-profile Charles Lindbergh case in 1935. The famous aviator’s infant son was kidnapped and murdered, and news audiences were insatiable in their demand for more coverage. The media, in turn, went from reportage to circus to full-on hullabaloo. Newsreels of courtroom action, filmed from secret cameras in the New Jersey courtroom, were sent to movie theaters across the country. And so, when the trial was over, the American Bar Association banned cameras from courtrooms altogether. (To this day, they’re not allowed in federal court, though that may soon change.)

One of the last courtroom photographs before the ban shows Charles Lindbergh testifying at the Hauptmann trial in 1935.

Illustrators stepped in to fill the gap, but, by 1980, many states had lifted the photography ban in their courts. Today, 47 states allow broadcast coverage of some, if not all, judicial proceedings. “The proliferation of cable television, the advent of the Internet, and the waning economy have combined to greatly shrink the market for courtroom art,” writes Phoebe Hoban in ARTnews.

Detailed, illustrative drawings of courtrooms date back at least as far as the 17th century. Each often serves as a guide to the court standards and mores of the place and time in which it was made. Honoré Daumier, a French caricaturist and artist working in the 19th century, is known for his own courtroom images, which depict nameless lawyers and other judicial workers deep in conversation, peaky faces looming out over dark robes.

At some point in the 19th century, illustrations started to depict particular court cases. Some were imagined rather than observed. There is, for example, a wealth of dramatic illustrations from this period of the Salem witch trials (around 180 years in the past, by then), showing young women writhing on the floor as townspeople in the background huff and simmer. Others were more journalistic. The Victorian weekly tabloid Illustrated Police News made heavy use of illustrations of definitive moments in contemporary trials, though journalistic conventions of the time meant that these were usually not credited to any one artist.

For Oscar Wilde’s famous 1895 trial for “gross indecency,” for instance, the paper ran a series of illustrations of Wilde and other key players in the courtroom. This practice, however, came to a stop in Britain in 1925, with the Criminal Justice Act, which is still in effect. Among other things, it prohibits taking photographs or sketching anyone in the room, whether “a judge of the court or a juror or a witness in or a party to any proceedings before the court.”

There was concern at the time that photographers or artists might wig out lawyers or defendants, and therefore obstruct the path of justice. Unobtrusive fixed cameras have since been installed, but there’s still a demand for further depictions of certain cases. So British court artists circumvent the law by taking feverish notes about everything they observe—from posture to personal grooming to the color of someone’s tie—and then prepare their sketches outside of the courtroom. Court artist Priscilla Coleman became the first person to be given permission to draw in a courtroom in the United Kingdom in nearly 90 years, at an appeal hearing at the Supreme Court in 2013. At the time, a spokesperson from the court told the Evening Standard that while ordinarily photographers still weren’t permitted in the court, Coleman had been as noninvasive as the fixed cameras.

The golden age of real-time courtroom art in the United States began with Leo Hershfield, hired by NBC to sketch the congressional censure of Joseph McCarthy in 1954. He was rapidly ejected, allegedly, for “behaving like a camera.” After that, illustrators were obliged to work from the same kind of fastidious notes as their British counterparts, until 1963 protests, led by courtroom art “grande dame” Ida Libby Dengrove, “freed them to work on the spot.” That same year, TV news expanded from 15 minutes to a full half-hour, sending demand for courtroom sketches sky-high, as news editors struggled to fill the extra time. One high-profile case followed another: the assassinations of John F. Kennedy, Martin Luther King Jr., and John Lennon, for example. Illustrations were more in demand than ever before, and a new vanguard of illustrators, including former war artist Howard Brodie, went marching into the country’s courts.

The cases from these years that the artists love best are not always those that the public remembers. Rosenberg fondly recalls the trial of the wine counterfeiter Rudy Kurniawan. “I loved drawing all those bottles of wine, because they laid them out as evidence,” she says. “It was so much fun—like being back at art school.” And for Williams, a particular favorite was the cocaine-trafficking trial of automaker John DeLorean in 1984. She had a fantastic vantage point, she says, loved drawing former model Cristina Ferrare’s “fabulous clothes,” and enjoyed the support of encouraging, more seasoned colleagues. At that time, court illustrators were commonly sent along with reporters, who would tell them what to look out for. These days, they mostly work alone and hope they manage to capture what their client news organizations want.

Photographs might seem a better method for capturing an accurate likeness, but courtroom sketches provide something extra, something about the emotional resonance of what happened, Rosenberg says. “It can provide more of an essence.” Williams remembers seeing a heavily cropped photograph from a trial that failed to show how a 6-foot-5-inch, 300-pound defendant dwarfed his attorney. “And that’s a very important part of what that scene was,” she adds. On top of that, a photograph might capture someone between expressions, or with a fleeting grimace that doesn’t necessarily characterize the overall emotional tenor of a courtroom situation or moment.

All courtroom illustrations are necessarily impressionistic in this way, writes Katherine Krubb, who curated the 1995 exhibition Witness of the People: Courtroom Art in the Electronic Age at what was then called the Museum of Television and Radio in New York (now the Paley Center for Media). “[They] seek to capture scenes from everyday modern life in flickering, fleeting images,” she writes, comparing them to the work of 19th-century artists Edgar Degas or Édouard Manet. “The pursuit of verisimilitude leads to exaggerations and distortions.” There tends not to be much physical action in a courtroom: Instead, artists must rely on minute changes of facial expression to communicate the drama of the proceedings.

Rare moments of action often yield some of the most famous courtroom images: Bill Robles’s 1970 drawing of Charles Manson leaping from the defense table, intent on stabbing Judge Charles H. Older with a pencil, for instance. (Manson is mostly a frenzied scribble, his pencil soaring toward the judge.) “That was my first trial,” Robles recollects. “I’d never set foot in a courtroom before.” Williams remembers catching what she later described as “a great moment,” rather than “a great drawing,” when she drew notorious fraudster Bernard Madoff cuffed and being led away by officers.

Increasingly, the few remaining court artists are expected to do this thoroughly analog job in a digital world, where technology affects everything—from the challenges of creating these images to their public reception. In court, Rosenberg “wriggles around” to get the best view she can amid a forest of computer screens. Immediately afterward, she must photograph and email her work to a television studio, where it features on news broadcasts.

On top of that, illustrators are at the mercy of thousands, if not millions, of internet hot takes on their work. Rosenberg, for example, went viral after one of her sketches of Brady, quarterback on the New England Patriots, caught the world’s attention in August 2015. It was variously labeled “troll-faced,” “haunting,” and like it was “put in one of those machines that crushes cars.” Rosenberg found herself the victim of the opprobrium of vicious football superfans. “I got a lot of negative bullying on the internet,” she recalls. “A lot of that was from rabid Patriots fans. I didn’t know about that world, but I’m learning about it.” More recently, Taylor Swift’s sexual assault case made headlines not just for its outcome, but for sketch artist Jeff Kandyba’s perceived failure to capture the pop star’s likeness.

This artist’s rendition of Taylor Swift is a very good sketch of Rolf from The Sound of Music. pic.twitter.com/Qf37v3J8vC

— Anne T. Donahue (@annetdonahue) August 9, 2017

And now the days of courtroom illustration may be numbered. “It’s a matter of time,” says Williams. “I always thought it was.” In March of this year, senators Amy Klobuchar and Chuck Grassley introduced bipartisan legislation that would finally allow cameras back in federal courtrooms. If it passes, this could be the final nail in a coffin that’s already almost sealed. “That’s pretty much going to do it in,” Williams says, a little wistfully.

Robles is more circumspect. “They’ve had cameras for years, but a lot of the judges—the TV stations petition the court to allow a camera in, and they deny them that right. So that’s where we come in.” Judges famously banned cameras at Lindsay Lohan’s 2011 trial, for instance. Robles is currently preparing for the forthcoming murder trial of real estate scion Robert A. Durst. Though they may be a costlier option for media companies than using photographers, “we’re a necessary evil,” he says. “When there’s no cameras permitted, we’re king.”

Today, illustrators might work just 100 days in a good year, says Robles, and usually only for extremely high-profile cases. “People probably think it’s a piece of cake,” he adds. But key players are moving around the courtroom all the time, refusing to pose, and sometimes hiding their faces behind newspapers or their hands. “But sooner or later, the judge will make them uncover their face,” he says. “So you get them one way or another. They never escape.”

It’s a hard, insecure job, Williams says. “Sometimes you can sit there and struggle. You can never, ever sit back and say, ‘Oh, this is going to be a breeze.’” She adds, more seriously, “This business will destroy you, if you think that. It will chew you up and spit you out. It is a tough business, and it takes nerves of steel to do it. I’ve seen artists just fold up and—‘Forget it, no,’” she says. Robles too has observed fellow illustrators running out of the courtroom when confronted with “death pictures and mutilations and so forth.” Today there’s barely enough available work for experienced artists, and no space for new ones.

On top of that, it can be very stressful. “But I’ve been doing it so long, it’s routine. A novice, I think, would be paralyzed.” It’s occasionally a struggle not to take the emotions of the trials home with her, says Rosenberg. “Sometimes it affects me, sometimes I cover horrific trials which make me cry.” But, somehow, this, and the punishingly early starts, don’t diminish her enthusiasm for the job. “I love it still,” she says. “I was telling my husband the other day, if I won the lottery, I would still do it—but I’d pay a sherpa to carry all my supplies for me, so I just had to show up.”

http://www.atlasobscura.com/articles/courtroom-artist-history-legal-illustration

Why the Can Opener Wasn’t Invented Until Almost 50 Years After the Can | Smart News | Smithsonian

How did the first tin cans get opened? A chisel and a hammer, writes Kaleigh Rogers for Motherboard. Given that the first can opener famously wasn’t invented for about fifty years after cans went into production, people must have gotten good at the method. But there are reasons the can opener took a while to show up.

Our story starts in 1795, when Napoleon Bonaparte offered a significant prize “for anyone who invented a preservation method that would allow his army’s food to remain unspoiled during its long journey to the troops’ stomachs,” writes Today I Found Out. (In France at the time, it was common to offer financial prizes to encourage scientific innovation–like the one that led to the first true-blue paint.) A scientist named Nicolas Appert cleaned up on the prize in the early 1800s, but his process used glass jars with lids rather than tin cans.

“Later that year,” writes Today I Found Out, “an inventor, Peter Durand, received a patent from King George III for the world’s first can made of iron and tin.” But early cans were more of a niche item: they were produced at a rate of about six per hour, rising to sixty per hour in the 1840s. As they began to penetrate the regular market, can openers finally started to look like a good idea.

But the first cans were just too thick to be opened in that fashion. They were made of wrought iron (like fences) and lined with tin, writes Connecticut History, and they could be as thick as 3/16 of an inch. A hammer and chisel wasn’t just the informal method of opening these cans–it was the manufacturer’s suggested method.

The first can opener was actually an American invention, patented by Ezra J. Warner on January 5, 1858. At this time, writes Connecticut History, “iron cans were just starting to be replaced by thinner steel cans.”

Warner’s can opener was a blade that cut into the can lid with a guard to prevent it from puncturing the can. A user sort of sawed their way around the can’s edge, leaving a jagged rim of raw metal as they went. “Though never a big hit with the public, Warner’s can opener served the U.S. Army during the Civil War and found a home in many grocery stores,” writes Connecticut History, “where clerks would open cans for customers to take home.”

Attempts at improvement followed, and by 1870, the basis of the modern can opener had been invented. William Lyman’s patent was the first to use a rotary cutter to cut around the can, although in other aspects it doesn’t look like the modern one. “The classic toothed-wheel crank design” that we know and use today came around in the 1920s, writes Rogers. That invention, by Charles Arthur Bunker, remains the can opener standard to this day.

http://www.smithsonianmag.com/smart-news/why-can-opener-wasnt-invented-until-almost-50-years-after-can-180964590/

25 Words That Are Their Own Opposites | Mental Floss

Here’s an ambiguous sentence for you: “Because of the agency’s oversight, the corporation’s behavior was sanctioned.” Does that mean, ‘Because the agency oversaw the company’s behavior, they imposed a penalty for some transgression’ or does it mean, ‘Because the agency was inattentive, they overlooked the misbehavior and gave it their approval by default’? We’ve stumbled into the looking-glass world of “contronyms”—words that are their own antonyms.

1. Sanction (via French, from Latin sanctio(n-), from sancire ‘ratify’) can mean ‘give official permission or approval for (an action)’ or conversely, ‘impose a penalty on.’
2. Oversight is the noun form of two verbs with contrary meanings, “oversee” and “overlook.” “Oversee,” from Old English ofersēon ‘look at from above,’ means ‘supervise’ (medieval Latin for the same thing: super- ‘over’ + videre ‘to see.’) “Overlook” usually means the opposite: ‘to fail to see or observe; to pass over without noticing; to disregard, ignore.’
3. Left can mean either remaining or departed. If the gentlemen have withdrawn to the drawing room for after-dinner cigars, who’s left? (The gentlemen have left and the ladies are left.)
4. Dust, along with the next two words, is a noun turned into a verb meaning either to add or to remove the thing in question. Only the context will tell you which it is. When you dust are you applying dust or removing it? It depends whether you’re dusting the crops or the furniture.
5. Seed can also go either way. If you seed the lawn you add seeds, but if you seed a tomato you remove them.
6. Stone is another verb to use with caution. You can stone some peaches, but please don’t stone your neighbor (even if he says he likes to get stoned).
7. Trim as a verb predates the noun, but it can also mean either adding or taking away. Arising from an Old English word meaning ‘to make firm or strong; to settle, arrange,’ “trim” came to mean ‘to prepare, make ready.’ Depending on who or what was being readied, it could mean either of two contradictory things: ‘to decorate something with ribbons, laces, or the like to give it a finished appearance’ or ‘to cut off the outgrowths or irregularities of.’ And the context doesn’t always make it clear. If you’re trimming the tree are you using tinsel or a chain saw?
9. Cleave can be cleaved into two “homographs,” words with different origins that end up spelled the same. “Cleave,” meaning ‘to cling to or adhere,’ comes from an Old English word that took the forms cleofian, clifian, or clīfan. “Cleave,” with the contrary meaning ‘to split or sever (something),’ as you might do with a cleaver, comes from a different Old English word, clēofan. The past participle has taken various forms: “cloven,” which survives in the phrase “cloven hoof”; “cleft,” as in a “cleft palate”; or “cleaved.”
9. Resign works as a contronym in writing. This time we have homographs, but not homophones. “Resign,” meaning ‘to quit,’ is spelled the same as “resign,” meaning ‘to sign up again,’ but it’s pronounced differently.
10. Fast can mean “moving rapidly,” as in “running fast,” or ‘fixed, unmoving,’ as in “holding fast.” If colors are fast they will not run. The meaning ‘firm, steadfast’ came first. The adverb took on the sense ‘strongly, vigorously,’ which evolved into ‘quickly,’ a meaning that spread to the adjective.
11. Off means ‘deactivated,’ as in “to turn off,” but also ‘activated,’ as in “The alarm went off.”
12. Weather can mean ‘to withstand or come safely through,’ as in “The company weathered the recession,” or it can mean ‘to be worn away’: “The rock was weathered.”
13. Screen can mean ‘to show’ (a movie) or ‘to hide’ (an unsightly view).
14. Help means ‘assist,’ unless you can’t help doing something, when it means ‘prevent.’
15. Clip can mean “to bind together” or “to separate.” You clip sheets of paper together or separate part of a page by clipping something out. Clip is a pair of homographs, words with different origins spelled the same. Old English clyppan, which means “to clasp with the arms, embrace, hug,” led to our current meaning, “to hold together with a clasp.” The other clip, “to cut or snip (a part) away,” is from Old Norse klippa, which may come from the sound of a shears.
16. Continue usually means to persist in doing something, but as a legal term it means to stop a proceeding temporarily.
17. Fight with can be interpreted three ways. “He fought with his mother-in-law” could mean “They argued,” “They served together in the war,” or “He used the old battle-ax as a weapon.” (Thanks to linguistics professor Robert Hertz for this idea.)
18. Flog, meaning “to punish by caning or whipping,” shows up in school slang of the 17th century, but now it can have the contrary meaning, “to promote persistently,” as in “flogging a new book.” Perhaps that meaning arose from the sense ‘to urge (a horse, etc.) forward by whipping,’ which grew out of the earliest meaning.
19. Go means “to proceed,” but also “give out or fail,” i.e., “This car could really go until it started to go.”
20. Hold up can mean “to support” or “to hinder”: “What a friend! When I’m struggling to get on my feet, he’s always there to hold me up.”
21. Out can mean “visible” or “invisible.” For example, “It’s a good thing the full moon was out when the lights went out.”
22. Out of means “outside” or “inside”: “I hardly get out of the house because I work out of my home.”
23. Bitch, as reader Shawn Ravenfire pointed out, can derisively refer to a woman who is considered overly aggressive or domineering, or it can refer to someone passive or submissive.
24. Peer is a person of equal status (as in a jury of one’s peers), but some peers are more equal than others, like the members of the peerage, the British or Irish nobility.
25. Toss out could be either “to suggest” or “to discard”: “I decided to toss out the idea.”

http://mentalfloss.com/article/57032/25-words-are-their-own-opposites

America Has Been Struggling With the Metric System For Almost 230 Years | Smart News | Smithsonian

At press time, only three of the world’s countries don’t use the metric system: the United States, Myanmar and Liberia. But it didn’t have to be this way.

On this day in 1866, the Metric Act was passed by the Senate. The law, which was intended “to authorize the use of the metric system of weights and measures,” was signed by then-President Andrew Johnson the next day. It provided a table of standardized measurements, for use in trade, for converting between metric and the commonly used American system.

The Metric Act didn’t require Americans to use the metric system, but it did legally recognize the then-relatively-new system. It remains law–although it has been substantially amended over time–to this day, writes the US Metric Association. It was just the first in a number of measures leading to the United States’ current system, where metric is used for some things, like soda, drugs and the military, but not for others. “Americans’ body-weight scales, recipes and road signs,” among other examples of everyday use, haven’t converted, writes Victoria Clayton for The Atlantic. “And neither has the country’s educational system,” she writes. This split system has its reasons, but arguments about how to create a good national standard of measurement go all the way back to 1790.

The USMA is one of a number of voices advocating for America’s full “metrification.” It argues that converting to the International System of Units (the modern form of the metric system, abbreviated as SI) would make international trade simpler. (Technically, the American system known as Imperial is called United States customary units or USCS.) It also argues that the decimalized metric system is simpler to work with.

SI units influence the size of packages (such as 750 ml bottles of wine) as well as how the package must be labeled. Since 1994, both metric and USCS have been required on commercial packaging under the Fair Packaging and Labeling Act.

“The United States is metric, or at least more metric than most of us realize,” writes John Bemelmans Marciano for Time:

American manufacturers have put out all-metric cars, and the wine and spirits industry abandoned fifths for 750-milliliter bottles. The metric system is, quietly and behind the scenes, now the standard in most industries, with a few notable exceptions like construction. Its use in public life is also on the uptick, as anyone who has run a “5K” can tell you.

America has been creeping towards metrification almost since the country was founded.

“In 1790, the United States was ripe for conversion,” writes David Owen for The New Yorker. At the time, the metric system was a new French invention (SI stands for Système international d’unités), and adopting a system that departed from the Old World conventions and was based on modern decimalized units seemed like a good fit for the United States.

The French and Americans had supported and conflicted with one another over their revolutions in statehood, Owen writes, and there was some expectation on the part of the French that the country would join them in the measurement revolution as well.

But even though “the government was shopping for a uniform system of weights and measures,” Owen writes, the meter was too new, and too French. Then-Secretary of State Thomas Jefferson originally advocated for the meter, but then discarded the idea. “His beef was that the meter was conceived as a portion of a survey of France, which could only be measured in French territory,” writes Marciano.

In the course of the nineteenth century, though, the meter gained traction again and other countries started to pick up on it. However, by this point in time, American industrialists already ran all of their equipment based on inch units. “Retooling, they argued, was prohibitively expensive,” historian Stephen Mihm told The Atlantic. “They successfully blocked the adoption of the metric system in Congress on a number of occasions in the late 19th and 20th century.”

Add to these arguments America’s nationalist pride and traditional resistance to outside influences, and you have an argument for maintaining the status quo–metric, with a quarter-inch veneer of Imperial.

http://www.smithsonianmag.com/smart-news/america-has-been-struggling-metric-system-almost-230-years-180964147/

The Metre adventure

http://www.french-metrology.com/en/history/metre-adventure.asp

A History of Taco Bell’s Failed Attempts to Open Locations in Mexico – MUNCHIES

Taco Bell currently has 6,604 outlets in 22 countries and territories throughout the world, and over the next five years, it plans to push into Peru, Finland, Sri Lanka, and Romania—and add at least 100 locations each in China, Brazil, India, and Canada. Yes, on the tiny Micronesian island of Guam—which has a population of only 174,214—there are currently seven Taco Bells. But despite all that, you might be shocked to learn there is not a single Taco Bell currently in operation in the nation that gave birth to its titular taco.

That’s right: There are no Taco Bells in Mexico. But that sure as hell isn’t for a lack of trying. And then trying again.

Taco Bell’s forays into Mexico started back in 1992, when the chain had only around 3,700 restaurants, the vast majority of which were in the US. The company made its first stab at the Mexican market with a food cart in Mexico City, which served a limited menu of soft-shell tacos and burritos, along with Pepsi, the chain’s corporate owner at the time. A few other outlets were briefly opened next to KFC locations.

Problems arose from the get-go. The wildly inauthentic names of several popular Taco Bell items had to be changed because Mexican customers didn’t understand what exactly they were ordering. For instance, the crunchy taco—an anomaly in Mexico—had to be re-branded the “Tacostada,” thereby evoking the crunchiness of a tostada in taco form.

Still, the Mexican population wasn’t buying it. One iconic comment on the issue came from Carlos Monsivais, a cultural critic, who spoke to the Associated Press at the time and dubbed the attempt to bring tacos to Mexico “like bringing ice to the Arctic.” In short order, the first Taco Bells in Mexico were shut down less than two years after opening and the chain retreated back across the border.

But by the mid-aughts, Taco Bell was ready to try to break into the Mexican market once again. In 2007, when the fast-food company opened another outlet south of the border, the Chicago Tribune wrote that the move meant “the cultural walls fell for good.” Hopes were high; after an almost 15-year absence, Taco Bell could once again be found in our southern neighbor Mexico—this time next to a Dairy Queen in the parking lot of a fancy shopping mall just outside of Monterrey.

This second attempt called for a new approach. “We’re not trying to be authentic Mexican food,” explained Rob Poetsch, who was the director of public relations at Taco Bell at the time. “So we’re not competing with taquerias. We’re a quick-service restaurant, and value and convenience are our core pillars.” Poetsch claimed that this venture would be different, because the brand had changed by then—it had become more international, with 230 locations outside of the US—and a consumer-research team had been put in place. The goal was to reach 800 international locations, with 300 stores to open throughout Mexico. In 2008 alone, Taco Bell planned to open between eight and ten locations throughout Mexico.

At that first Monterrey location, Taco Bell made no attempts to hide how gringo-ish its food really was. French fries and soft-serve ice cream proudly held forth on the menu; Steven Pepper, the Yum! Brands managing director for Mexico, admitted, “Our menu comes almost directly from the US menu.” In fact, a half-page newspaper ad that ran at the time came straight out and told the public, “One look alone is enough to tell that Taco Bell is not a ‘taqueria.’ It is a new fast-food alternative that does not pretend to be Mexican food.”

The branding strategy summed it all up succinctly: “Taco Bell is something else.”

Something else, indeed.

Once again, critics were skeptical. Scott Montgomery—CEO of Brandtailers, a California ad agency—said, “It’s like Mexicans coming up and trying to sell us hot dogs.” Customers agreed. Marco Fragoso, an office worker, remarked to the Associated Press at the time, “They’re not tacos. They’re folded tostadas. They’re very ugly.” Another customer, Jonathan Elorriaga, told the AP reporter, “Something is lacking here. Maybe the food shouldn’t come with French fries.”

A food writer for Monterrey’s El Norte newspaper summed it all up with the following: “What foolish gringos. They want to come by force to sell us tacos in Taco Land. Here, they have a year in operation and the most ironic part is that they are doing well. Are we malinches [a Mexican term for traitor] or masochists?”

The new stores closed in swift succession, and pundits tried to explain the debacle. Some chalked the second failure up to the political climate in 2007: the mid-aughts were indeed an era of tougher enforcement of immigration laws and of failed attempts to pass temporary-worker legislation in the US. Meanwhile, Taco Bell’s contemporaneous move into the China market—where the outlets served soy milk and plum juice—was much more successful than the chain’s second attempt to sell Mexican food to Mexicans.

Taco Bell has stayed out of the Mexican market ever since. Today, the idea of a Taco Bell in Mexico has become something of a joke. There’s even a Facebook page for a non-existent Taco Bell in Mexico City that has a one-star review and is littered with comments deriding the chain. One Facebook user from Culhuacán left a comment on the page saying “NO SON TACOS, SON CHINGADERAS,” which translates to “They are not tacos, they are trash.” In a TripAdvisor post asking what ever happened to the Taco Bell in Mexico City, a user from Mexico City wrote the following: “In 29 years I’ve never seen a Taco Bell in Mexico City… or in Mexico, although I could be wrong. I agree to bring a Taco Bell here it’s a pretty bad idea. Taco Bell… is everything but Mexican food.”

Truth be told, there is one Taco Bell left in Mexico… sort of. When you cross the border from California into Tijuana, you may come across a cluster of decidedly un-corporate-looking taco stands called, well, Taco Bell. They even have a bell as their logo, but the bell is yellow instead of the chain’s pink. Evidently, this joint has absolutely no affiliation with the Irvine, California-based chain. You’ll know you’re in the right place because—if the Yelp reviews are accurate—this Taco Bell has no running water, the bathrooms are “disgusting,” and there are flies aplenty. But the beers cost a buck, and the tacos are legit street-style—and truthfully don’t sound bad at all.

When MUNCHIES reached out to Taco Bell and asked if it had any plans to re-enter the Mexican market in the future, a spokesperson provided us with the following statement: “We’ve changed our international expansion strategy in recent years, focusing on open-kitchen restaurant concepts that feature localized design, menu offerings, shareable plates and beer and alcohol. We are on track with this approach to grow to 9,000 restaurants in more than 40 countries by 2022, and have identified four partners in key markets where we will open 100 restaurants: Brazil, Canada, China and India. While we’re not currently in Mexico, we are seeing continued success in the more than 130 Taco Bells in Central and South America, as well as across the globe.”

In the end, nothing better encapsulates the Sisyphean task that was trying to convince proud Mexicans that they should willingly ditch the nation’s countless taquerias in favor of an American fast-food chain than Taco Bell’s no-longer-used “run for the border” slogan. After all, each time Taco Bell attempted an ill-advised foray into Mexico, the result was a mad dash back to the States.

https://munchies.vice.com/en_us/article/a3d4xg/a-history-of-taco-bells-failed-attempts-to-open-locations-in-mexico-fastfoodweek2017

The Tragic Story of England’s Nine-Day Queen | Smart News | Smithsonian

Jane Grey was headed for death.

The daughter of Henry VIII’s niece Frances, Jane was destined, at least originally, for greatness. But her path to queenship, her brief reign and her untimely death all show the politics underpinning succession in the Tudor years. Her story is a powerful antidote to the “Tudor myth”–a longstanding view of sixteenth-century England as a political and social golden age, ruled by the divinely-appointed Tudors. It demonstrates that the line of succession, something which had been portrayed as fixed, was as political and changeable as any other public office. And it shows the religious conflicts underpinning this era in English history.

Grey’s family had intended her to marry the king’s son Edward and prepared her for that role with education and training in England’s new Protestant faith. But when it became clear that the young Edward was dying instead, writes Richard Cavendish for History Today, plans changed.

John Dudley, the Duke of Northumberland and “the virtual dictator of England,” as Cavendish wrote, was “desperate to prevent the throne passing to Edward’s half-sister and heir, the Catholic Mary Tudor,” writes the BBC. “Northumberland persuaded the king to declare Mary illegitimate, as well as Edward’s other half-sister Elizabeth, and alter the line of succession to pass to Jane.” At the time, the young queen-to-be was about 16–historians are unsure of her exact birthdate.

So began a series of events that culminated in her death.

May 25, 1553: Jane Grey marries the Duke of Northumberland’s son

Grey was married to Guildford Dudley, who was just a few years her senior. This cemented Northumberland’s link to the future throne.

July 6, 1553: Edward VI dies aged 15

Edward had been king since he was nine years old. He “was given a rigorous education and was intellectually precocious,” writes the BBC, but he was often sick. It turned out that he was suffering from tuberculosis–although after he died, poisoning rumors swirled.

July 9, 1553: Jane Grey is taken to the Duke of Northumberland’s mansion for a secret meeting

At the mansion, she found the Duke, her new husband and her parents. After being told that she was now the queen, writes Cavendish, she fainted. After coming to, she reluctantly accepted her duty, saying, he writes, “if what has been given to me is lawfully mine.”

July 10, 1553: Jane Grey takes the throne

The fact that Grey was now queen was publicly announced, leading to some grumbling. English citizens who had been through so much political and religious turmoil thought that Catholic Mary Tudor, with her ties to other Catholic monarchs, was the rightful inheritor of the throne. Although Mary would later become unpopular, at this point she enjoyed widespread support.

Grey made it to the Tower of London, from which she would rule, and then had a giant fight with her husband and her mother-in-law because she refused to make him king, writes Cavendish. Mary Tudor also sent a letter asserting her right to rule.

July 11-18, 1553: Jane Grey occupies the throne, ineffectually

“Jane continued going through the motions as queen in the Tower,” writes Cavendish, “but Northumberland had miscalculated badly.”

Mary Tudor was traveling and gaining support. Grey was less well known.

July 19, 1553: Mary Tudor is declared queen. The Tower becomes Jane Grey’s prison

Public and political support led the royal council to declare that Mary, not Grey, was the rightful heir to the Tudor throne.

“Early hopes that Mary might pardon her predecessor dimmed after Jane vehemently opposed Mary’s legislation of the Catholic Mass,” writes Leanda De Lisle for 1843 Magazine. “In an open letter to a Catholic convert, Jane condemned the Mass as ‘wicked’ and exhorted Protestants to ‘Return, return again unto Christ’s war.’”

Not long after that, De Lisle writes, Grey’s father helped to lead an armed rebellion against Queen Mary in opposition to her plan to marry the king of Spain. Grey wasn’t involved, but she took the flak anyway.

February 12, 1554: Lady Jane Grey is executed

Grey was executed along with her husband because she was an ongoing alternative claimant to the throne. She was still a teenager.

After her death, writes De Lisle, Grey was considered a martyr of sorts to the Protestant cause, and remembered primarily as the Nine Days Queen. Her successor, Queen Mary I, ruled for about five years until her own death at the age of 42.

http://www.smithsonianmag.com/smart-news/tragic-story-englands-nine-day-queen-180964042/

This Is Your Brain on Architecture – CityLab

Sarah Williams Goldhagen was the architecture critic for The New Republic for many years, a role she combined with teaching at Harvard University’s Graduate School of Design and elsewhere. She is an expert on the work of Louis Kahn, one of the 20th century’s greatest architects, known for the weighty, mystical Modernism of buildings like the Salk Institute in La Jolla, California, and the Bangladeshi parliament in Dhaka.

Several years ago, Goldhagen became interested in new research on how our brains register the environments around us. Dipping into writing from several fields—psychology, anthropology, linguistics, and neuroscience—she learned that a new paradigm for how we live and think in the world was starting to emerge, called “embodied cognition.”

“This paradigm,” she writes in her magisterial new book, Welcome to Your World: How the Built Environment Shapes Our Lives, “holds that much of what and how people think is a function of our living in the kinds of bodies we do.” Not just conscious thoughts, but non-conscious impressions, feedback from our senses, physical movement, and even split-second mental simulations of that movement shape how we respond to a place, Goldhagen argues. And in turn, the place nudges us to think or behave in certain ways.

The research led Goldhagen to science-based answers for previously metaphysical questions, such as: why do some places charm us and others leave us cold? Do we think and act differently depending on the building or room we’re in? (Spoiler: yes, we do.)

Architects intuited some of these principles long ago. As Kahn once noted of the monumental Baths of Caracalla in Rome, a person can bathe under an eight-foot ceiling, “but there’s something about a 150-foot ceiling that makes a man a different kind of man.” As the peer-reviewed studies mount, however, this new science of architecture and the built environment is destined to have a profound effect on the teaching and practice of design over the next generation.

CityLab talked with Goldhagen about the book and why so much architecture and urban design falls short of human needs.

Your book is about how we experience buildings and places through “embodied cognition.” How did you first learn about it?

I fell in love with architecture the way most people fall in love with architecture, which is that I went to places that just astonished me and moved me. And so from very early on I sort of wondered: why does it do that? The arts have this effect on you, but architecture is so much more profound, I find, than any of the other arts.

At the time, there really was no intellectual paradigm for thinking about these questions. And then about 15 years ago, my husband handed me a book by someone who had written a previous book he had really liked. The title of the book was Metaphors We Live By. It’s co-authored by George Lakoff, who’s a cognitive linguist, and Mark Johnson, who’s a philosopher. The basic argument is that much of how our thought is structured emerges from the fact of our embodiment. And many of the ways those thoughts are structured are metaphorical.

There was an immediate light bulb: “Oh, people live in bodies, bodies live in spaces.” I started reading more and more about it and realized [that] what Lakoff and Johnson had figured out was in the process of being confirmed through new studies in cognition that had been enabled by new technologies. We’ve had in the last 20 years a kind of ocean of new information about how the brain actually works. Most of that was confirming the precepts of embodied cognition, and also going beyond it in certain ways, showing how multisensory our apprehension of the environment is.

Another thing is differentiated, non-repetitive surfaces. [The psychologist and author] Colin Ellard did a study of how people respond: He basically put sensors on people and had them walk by a boring, generic building. Then he had them walk past something much more variegated with more ways to [engage] visually and therefore motorically. He found that people’s stress levels, measured by cortisol, went up dramatically when they were walking past the boring building.

The reason I emphasize non-conscious [cognition] is because most people are very bad at knowing why we’re feeling or thinking the things we are. You could be walking past that boring building and you ascribe your stress to a bad conversation you had with someone the other day. But cognition is embodied, and you’re standing next to this soul-desiccating place, and that’s what’s going on.

The book is peppered with the findings of scientific research on how the environment shapes us and our lives. The brains of London cab drivers actually change after they memorize the city’s geography. The design of a school can account for up to 25 percent of a child’s rate of learning. Why haven’t these findings upended architectural education?

I had architectural education very much in mind when I was writing the book. I taught in architecture schools for 15 years, good ones. The most obvious part of the answer is that architectural training is really, except for the technical and engineering part of it, based in the Beaux-Arts design tradition. Nobody’s really looking at the sciences.

Number two, the information which I draw in the book to construct this paradigm of how people experience the built environment comes from a lot of different disciplines. Cognitive neuroscience, environmental psychology, evolutionary psychology, neuroanthropology, ecological psychology. In most cases, the studies that I was looking at and ended up finding most useful were not necessarily about the built environment. It was up to me to look at a study on how people respond to water surfaces versus mirrors, and then figure out what that meant for the design of the built environment.

Another reason is that in the academy, the effect of poststructuralism and identity politics has been to hammer into people’s heads the notion of cultural relativism: “You can’t possibly say things about how people experience the world because it’s all culturally constructed, socially constructed; it differs by gender, by locale.” And so the other dimension was that talking about individual experience, even if it’s related to social experience, but from an embodied-cognition point of view, meant that you were apolitical. Because you were talking about something very subjective and individual. So it was kind of forbidden territory.

The embodied-cognition approach is universalizing, although you make it clear that any design guidelines arising from it leave room for different social and cultural responses. Is it easier, or harder, to take this approach now than it would have been 10 or 15 years ago?

I don’t think it’s coincidental that I’m not in the academy and I wrote this book. I don’t want to sound like I’m attacking architectural education because there are plenty of people out there doing great things. This book basically started with an essay on Alvar Aalto and embodied cognition and metaphors, in a book edited by Stanford Anderson. I presented this when I was still teaching at Harvard, and people went nuts. They just went crazy. “Wait a minute, you’re making all these universalist claims!”

My response to that was, and remains, “Sure, there are a lot of things that are socially constructed. All you have to do is read my earlier work; it’s not like I disagree with those ideas. The fact is that humans live in bodies, and brains work in certain ways.”

There’s this dichotomy between those who [think] about architecture in social and political terms, and those who [think] about subjective experience, and never the twain shall meet. One of the things the book does is basically dissolve that opposition. The critical link is the work of this [psychologist] Roger Barker, who had researchers assigned to kids. [The researchers] followed them around and took notes. Breakfast, school, chess club, ballet. The conclusion was they could tell more about the kids by looking at where they were than by looking at who they were. Their individual psychology mattered a lot less in terms of their experience and behavior than the environments they were in.

So there isn’t this opposition between looking at it as a social construct versus experiential construct. It’s all the same thing. It’s a continuum.

One thing I kept thinking while reading the book was how little agency we really have. Have you gotten pushback on that? I can imagine some people saying, “No way is the environment shaping my thoughts to this degree.”

If people thought that, they didn’t say it to me. I think people are more ready to accept it than they were 10 or 15 years ago. The mind-body connection has become so apparent. We know now, for example, that how we hold our body affects our mood. If you’re depressed and your shoulders are hunched forward, you’ll actually help yourself if you straighten up.

The second thing is behavioral economics, which I think has been really key, and has been adopted into policy. People don’t make decisions logically. They make decisions based on association and fallacious heuristics. I think that has paved the way for people to recognize, “I don’t have as much agency as I thought I did.” The paradox is, with a book like this, I’m hoping to enhance people’s agency with their awareness of it.

You argue that “enriched environments” should be a human right, included in the UN’s Human Development Index. What has to happen next for human-centered design to become not a luxury, but the norm?

Well, a lot. One of the reasons the book is targeted to a general audience is that basically, we need a real paradigm shift in how we think about the built environment. It’s kind of analogous to the paradigm shift that happened in the 1960s and the way people thought about nature.

When I was a really young kid, nature was nature. It was forests, trees, lakes, rivers. Then people began to use the word “environment.” It was a political and social construct, and it emphasized the interrelatedness of all these different components within nature. That was a response to pesticides, air pollution, and so on. Now, kids get education in the environment from the time they’re in first grade. They start learning about climate change and visit waste treatment plants. That’s the kind of paradigm shift that needs to happen about the built environment. Then it suddenly becomes a matter of general public health importance.

What concretely needs to happen: One, architectural education. Two, real-estate development. Three, building codes, zoning codes, all these things need to be reviewed according to these kinds of standards. Four, architects need to not be so skittish in thinking about human experience and learn more about it. It’s a much larger problem than just, “Architects should do better.” It’s not a professional disciplinary problem, it’s a larger social problem. We also need more research.

I was at a book event where Richard Roberts [a former New York City housing commissioner] said, “I’m going to recommend to every public official I know that they read this book.” I’ve had a lot of architects tell me that they gave the book to clients.

That seems smart.

Yeah, no joke. We need general education about the built environment that starts very early on. So there are a lot of things that need to change. But they can.

https://www.citylab.com/design/2017/07/this-is-your-brain-on-architecture/531810/

Your revolution was dumb and it filled us with refugees: A Canadian take on the American Revolutionary War | National Post

To be clear: Canada loves you, United States. You buy our oil, you made Drake a superstar and you haven’t invaded us for 205 years. As Poland and Ukraine keep reminding us, we really couldn’t ask for a better superpower neighbour.

However, just because it all worked out doesn’t mean that starting a brutal war over a tax dispute wasn’t a bit of an overreaction. As the National Post’s own Conrad Black wrote in a 2013 history of the United States, the Founding Fathers “do not deserve the hallelujah chorus ululated to them incessantly for 235 years.”

Canada had to fend off an invasion during the Revolutionary War, after all, so consider us qualified to deliver this Independence Day buzzkill.

The colonists weren’t fighting a “tyrannical” king so much as they were fighting one of the world’s most democratic nations

The Declaration of Independence places sole responsibility on Britain’s George III for establishing what Americans called “an absolute tyranny over these states.” But George III wasn’t an autocrat. While his power was much greater than the current Queen’s, he had an elected House of Commons and a prime minister to check him. Parliamentarians were free to heckle British war plans, and members of the British press (the freest in the world at the time) openly sided with the colonists. British democracy was far from universal, of course, with voting barred to women, Catholics and the lower classes — and with representation ridiculously concentrated in rural areas. But it was not a far cry from the soon-to-be-independent United States, whose first presidential election would only see about six per cent of the population eligible to vote.

The war did involve an autocratic tyrant though … on the colonists’ side

Speaking of autocrats, the American rebels counted one of the world’s most notorious as their best friend in Europe. Louis XVI, the absolute monarch of France, wholeheartedly backed the colonists’ cause as a way to embarrass the English. France smuggled weapons and advisers to the rebels, dispatched thousands of troops to the colonies and ordered its navy to travel the world and harass British efforts to supply their North American armies. Historians generally agree that, without French support, the British would likely have crushed the American Revolution. Meanwhile, the incredible cost of the American proxy war helped to lead an unstable France ever closer to financial ruin, revolution and, ultimately, the execution of Louis. So in effect, the United States owes its existence to an impulsive dictator who ran his country into the ground so hard that he got himself beheaded.

American colonists had sparked a world war … and then refused to help pay for it

The American Revolution was largely sparked by colonial opposition to new taxes. But Great Britain’s bid to get some American revenue makes a bit more sense when one considers that the colonies had just bungled the Brits into a wildly expensive world war. In 1754, a 22-year-old Virginia militia officer named George Washington took a group of men into what is now Pennsylvania to work out a territorial dispute with some nearby French-Canadians. Instead, the inexperienced Washington ambushed a French-Canadian patrol, accidentally executed the patrol’s commander and ended up sparking the Seven Years War. The resultant worldwide conflict — which included Great Britain’s conquest of Quebec — drove London to the edge of bankruptcy.

Canada’s plan to recognize native land and respect Catholics was deemed “intolerable” by colonists

In 1774, the British government introduced the Quebec Act, which allowed French-Canadians in British-conquered Quebec to freely practise Catholicism. Crucially, the act also extended the borders of Quebec down to what is now Ohio and kept in place a large band of “Indian” territory on the western edge of the American Colonies. It was a remarkably liberal document for the time, but anti-Catholic colonists balked at it for promoting “Popery” and for banning their hoped-for expansion into indigenous land. The Declaration of Independence, in fact, directly accused King George of kowtowing to “merciless Indian Savages.” The Quebec Act was soon cited by colonists as the worst of the so-called “Intolerable Acts,” a series of punitive measures that ultimately turned the dispute with Great Britain into a shooting war.

Revolutionary America had a pretty serious terrorism problem

In the recent book Scars of Independence, historian Holger Hoock dismisses modern depictions of the American Revolution as rooms full of men in powdered wigs discussing liberty. It was actually a “profoundly violent civil war,” he writes. One largely forgotten aspect of the war was how much the Patriot cause was driven by terroristic mobs prepared to torture judges, customs officials, newspaper editors or anyone else seen to be supporting British rule. Pro-government officials had their homes burned, their horses poisoned and many were snatched out of their beds in the middle of the night, stripped naked and subjected to mock drownings or tarring and feathering. Accounts of these outrages help explain why the conflict escalated so quickly. When hotheaded Brits backed George III’s call to swiftly put down colonial rebels, it wasn’t because they were incensed at a lack of tea tax revenue — it was because they feared that their American lands had fallen to mob rule.

It’s a little odd when a “struggle for liberty” fills Canada with refugees

Between 60,000 and 80,000 Loyalists fled to Canada following American Independence and lost everything when their property was seized by the new United States. Revolutions commonly prompt an exodus of refugees. Just in the past century, the Russian Revolution, Cuban Revolution and the Zanzibar Revolution, among others, all spawned vast refugee streams, some of which ended in Canada. But unlike the communist and vengeance-minded architects of those revolutions, the Americans were ostensibly fighting for a free, pluralistic democracy where “all men are created equal.” In hindsight, it’s pretty bad optics that vast columns of families felt the need to seek actual freedom and equality elsewhere. “With malice toward none, with charity for all” would have to wait for another civil war.

Of all the countries to obtain independence from Britain, only the U.S. and Ireland chose to do it violently

Roughly 60 independent countries around the world were once counted as British colonies or mandates. Of those, only the United States and the Republic of Ireland gained their independence as a direct result of political violence. Compare that to Spain, which violently resisted the departure of almost every one of its overseas colonies. Great Britain wasn’t afraid to get its hands dirty in colonial affairs, but London could be convinced to tolerate a colony’s peaceful transition to independence — particularly when said colony was filled with white English-speakers. Which is to say that if Americans truly wanted freedom, there were lots more options on the table than simply taking a shot at the first redcoat.

http://nationalpost.com/news/canada/your-revolution-was-dumb-and-it-filled-us-with-refugees-a-canadian-take-on-the-american-revolutionary-war/wcm/b8a6b815-25de-4ea6-be42-a7af9d0323ed