Even teachers now say that academics are not the key to kids’ success

To many, increasing automation and the unprecedented pace of technological changes mean kids need more than just academic skills to succeed. They need confidence and motivation to tackle problems, interpersonal skills to work with others and the resilience to stay on task when things fall apart.

New research from the Sutton Trust, a British foundation focused on social mobility, finds that 88% of young people, 94% of employers, and 97% of teachers say these so-called life skills are as or more important than academic qualifications. Perhaps more surprising: more than half of teachers surveyed—53%—believe these “non-cognitive” or “soft” skills are more important than academic skills to young people’s success.

“It is the ability to show flexibility, creativity, and teamwork that are increasingly becoming just as valuable, if not more valuable, than academic knowledge and technical skills,” said Peter Lampl, founder and chairman of the Sutton Trust.

The teachers’ perspective flies in the face of a decades-long movement by governments in the US and UK toward more standards and testing. The more policymakers emphasize test scores, the more teachers feel hamstrung into teaching to those tests. That crowds out the space to teach a subject they might love, or to underpin it with the kind of creative, collaborative projects and lessons that build social and emotional learning, or character.

While testing has an important role to play in education, more research is pointing to the idea that too much testing crowds out real learning. Amanda Spielman, Britain’s chief school inspector, said this week that “the regular taking of test papers does little to increase a child’s ability to comprehend. A much better use of time is to teach and help children to read and read more.”

Meanwhile, teaching “character” is taking hold everywhere from Singapore and China to Colombia and Uganda. And employers are on board. Recent research in the US shows that jobs requiring a combination of strong social and cognitive skills are rising far faster than those based on cognitive ability alone.

Unfortunately, the Sutton Trust research found that, despite a lot of lip service about the importance of life skills, most schools in the UK aren’t doing enough to teach them. The National Foundation for Education Research asked secondary school teachers (kids aged 13-18) across England how many offered programs to build life skills, such as extracurricular activities (sports, drama, debating), or volunteering programs. They also asked kids whether they participated. More than a third of students—37%—don’t take part in any clubs or activities. Nearly half of teachers said their schools provided debating, yet just 2% of young people said they participated.

There is also a huge socioeconomic dimension. Less than half (46%) of students from disadvantaged backgrounds participate in extracurriculars, compared with 66% from better-off families.

The group stressed the need for a more holistic approach to children’s education, and also vouchers to help disadvantaged kids participate in extracurricular activities. It noted that private schools have long focused on the importance of building confidence, articulacy (a Britishism for being articulate), and perseverance. And research shows those private school kids dominate the ranks of government and industry.


Long Sleeves on Doctors’ White Coats May Spread Germs

SAN DIEGO — Doctors may want to roll up their sleeves before work, literally. A new study suggests that long sleeves on a doctor’s white coat may become contaminated with viruses or other pathogens that could then be transmitted to patients.

In the study, the researchers had 34 health care workers wear either long- or short-sleeved white coats while they examined a mannequin that had been contaminated with DNA from the “cauliflower mosaic virus.” This virus infects plants and is harmless to humans, but it is transmitted in a way that is similar to that of other, harmful pathogens, such as Clostridium difficile, a bacterium that causes severe diarrhea, said Dr. Amrita John, an infectious disease specialist at University Hospitals Case Medical Center in Cleveland, who led the study. John presented the research here on Friday (Oct. 6) at an infectious disease conference called IDWeek 2017.

The health care workers wore gloves while they examined the mannequin, then removed the gloves, washed their hands and put on a new pair of gloves before examining a second, clean (non-contaminated) mannequin. After the health care workers had finished examining both mannequins, the researchers swabbed the workers’ sleeves, wrists and hands, and tested the samples for DNA from the cauliflower mosaic virus. Each of the 34 participants completed the exam twice (once wearing short sleeves and once wearing long sleeves), for a total of 68 “simulations.”

They found that, when the health care workers wore long-sleeved coats, 25 percent of the simulations resulted in contamination of their sleeves or wrists with the virus DNA marker, compared with none when the health care workers wore short-sleeved coats.

In addition, about 5 percent of the health care workers who wore long sleeves contaminated the clean mannequin with the virus DNA marker, while none of those who wore short sleeves did.

These results provide support for a recommendation “that health care personnel wear short sleeves to reduce the risk for pathogen transmission,” John said. [10 Deadly Diseases That Hopped Across Species]

Such a recommendation already exists in the United Kingdom — in 2007, the country’s department of health introduced a “bare below the elbow” policy for hospitals, which recommended that health care personnel wear short sleeves. In the United States in 2014, the Society for Healthcare Epidemiology of America said that health care facilities might consider the adoption of a “bare below the elbow” policy.

Some U.S. facilities have subsequently adopted this policy within their institutions, and the new findings suggest that “more people should consider it,” said study co-author Dr. Curtis J. Donskey, an infectious disease specialist and professor of medicine at Case Western Reserve University in Cleveland.

Still, the policy has met with some resistance, with some doctors calling for more evidence that long sleeves really do increase the likelihood of transmitting pathogens. The new study provides some of that evidence, but additional, larger studies are still needed before more hospitals adopt the policy, John said.

In addition, future research is still needed to show that a short-sleeve policy actually reduces the number of infections spread in a hospital, the researchers said.

But John said the study has changed her personal preference for the way she wears her white coat. “I roll up my coat sleeves above my elbow,” John said.


Astronomers Are ‘Racing Against Time’ as Humanity Clogs the Air With Radio Signals

In a remote valley in the British Columbia interior, a massive telescope called the Canadian Hydrogen Intensity Mapping Experiment (CHIME) is scouring the skies for traces of dark energy, a mysterious force that drives the expansion of the universe but has never been directly detected. As if hunting for dark energy isn’t challenging enough, radio astronomers fear it might not be long before proliferating tech like smartphones and space satellites make these kinds of studies—and even the ongoing search for aliens—impossible, due to radio interference.

According to Mark Halpern, principal investigator at CHIME and astronomy professor at the University of British Columbia, the growing number of communications satellites in space as well as technologies on the ground that emit radio waves are interfering with CHIME’s data-collecting, and could potentially do more damage in the future. If radio astronomers aren’t able to do their research, it could prevent us from making future discoveries about our universe.

“I feel like we’re racing against time to get CHIME done while we still can,” Halpern said.


Radio frequencies are everywhere. They’re used to transport information for radio broadcasts, television, and your cell phone. As anybody who’s ever awkwardly spoken to someone on a walkie-talkie that accidentally hooked up to the wrong frequency knows, it’s really easy to interrupt those channels when one signal bleeds into the other.

It might not be such a big deal when your television gets a little bit of static. But interference can cause radio astronomers to lose their research data. Radio astronomy has led to the discovery of quasars and the imaging of asteroids, and revealed the cosmic microwave background, the leftover radiation from the Big Bang. Just this week, scientists discovered a new source of gravitational waves: the violent merger of two neutron stars 130 million light years away. Astronomers will study the resulting radio waves to learn more about the energy of a neutron star collision, and how much mass is ejected.

The International Telecommunication Union (ITU), the United Nations’ agency for policing frequencies, provides recommendations for how radio frequencies should be distributed. The agency sets aside a band of radio waves specifically for radio astronomy projects.

But the nature of CHIME’s experiment makes it so that the telescope has to access a broader range of frequencies outside of that spectrum to map out more parts of the universe at once.

Halpern said the broad range of frequencies they were accessing wasn’t a problem when they initially started the project. CHIME sits in a radio-quiet zone in a valley near Penticton, BC, where government-approved signs tell drivers to turn off all electronic devices. But he said that around three years ago, a series of television stations began broadcasting near Penticton, bleeding into their signal.

Although the valley protects CHIME against local radio waves, television satellites locked in orbit still cause interference. And since the scientists don’t own the frequencies they use, and will likely never be able to afford the spectrum their experiments need (frequencies sell for billions), they can’t do much about it. Halpern expects the rest of their radio waves will eventually be auctioned off for television.

“It’s completely not in the cards that CHIME could use any part of its budget to buy its own frequency,” Halpern said. In terms of funding and priority, he said, communications services dwarf the resources of radio astronomers.


Satellites aren’t the only thing bleeding into other frequencies. According to Ken Tapping, an astronomer at the National Research Council’s Dominion Radio Astrophysical Observatory in Penticton, where CHIME is also based, everyday tech like smartphones and those new wireless car keys emit accidental radio waves, called “unwanted emissions,” that interfere with other frequencies.

“These things splatter across all paths of the radio spectrum. They’re produced inconsequentially,” Tapping told me in a phone interview.

The ITU recommends that radio astronomy studies expect a maximum of five percent of their total data lost due to interference. It might not seem like a lot, but for experiments that depend on tracking short radio wave bursts, it could mean losing information that’s crucial to the experiment. Tapping said that in the future, even if everybody sticks to their allotted radio emissions, the amount of interference will increase due to the sheer amount of technology.
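That five percent budget amounts to a simple threshold test. A minimal sketch, assuming hypothetical figures for data lost and observed (the function name is illustrative, not ITU or observatory software):

```python
def exceeds_itu_budget(hours_lost: float, hours_observed: float,
                       budget: float = 0.05) -> bool:
    """Return True if interference losses exceed the ~5% of total data
    the ITU recommends radio astronomy studies budget for."""
    return hours_lost / hours_observed > budget

# A survey losing 6 of every 100 observing hours is over budget...
print(exceeds_itu_budget(6.0, 100.0))   # True
# ...while one losing 4 of every 100 is within it.
print(exceeds_itu_budget(4.0, 100.0))   # False
```

The catch, as Tapping notes, is that a flat percentage hurts some experiments more than others: for a survey chasing short radio bursts, the lost five percent may contain exactly the events the study exists to catch.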

“They’re dirt cheap, they’re imported from abroad, and they’re being deployed all over the place in a currently uncontrolled fashion,” Tapping said, referring to the proliferation of cheap electronic devices. “If these reach a certain density of use, then radio astronomy could become rather difficult.”

Tapping hopes the ITU will be able to cap losses at five percent. If a radio astronomy study loses more of its data than that, he said, it might run into funding problems, since funders could feel they’re not getting an adequate return on their investment.

Growing radio interference will also make it harder for scientists to receive signals from extraterrestrial life. At the SETI Institute (the Search for Extraterrestrial Intelligence) in Mountain View, California, radio astronomers are constantly looking for any sort of message that didn’t come from humans. But, according to SETI senior astronomer Seth Shostak, the signals they look for are the same ones humans produce every day.

“The question is ‘Is this ET on the line, or is it another telecommunications satellite passing overhead?'” Shostak told me. When SETI detects a promising radio signal, its astronomers check whether it moves in correlation with the rotation of the Earth, and confirm that their other receivers don’t detect it, since a signal picked up everywhere would indicate a human-made satellite in orbit or a ground-based radio system.
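The two-part check Shostak describes can be sketched as a simple decision rule. This is a hypothetical illustration of the logic, not SETI’s actual detection pipeline; the function and its boolean inputs are assumptions for the sake of the example:

```python
def classify_signal(tracks_sky_rotation: bool,
                    seen_by_other_receivers: bool) -> str:
    """First-pass triage of a candidate radio signal."""
    # A genuine celestial source drifts with the sky as Earth rotates,
    # and a second receiver pointed elsewhere should NOT pick it up.
    if tracks_sky_rotation and not seen_by_other_receivers:
        return "candidate"          # worth follow-up observation
    return "local interference"     # e.g. a satellite or ground transmitter
```

A signal that fails either test, such as one detected by receivers all over the ground, is far more likely to be a passing satellite than ET.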

The clear solution for radio astronomers is to move their instruments to remote areas with little human presence. This can occasionally result in rules that are somewhat dystopian: China banned any electronic devices and created a resident-free zone around its massive new radio telescope to ensure there would be no interference, relocating 9,000 residents in the process.

Halpern said his team did field measurements in remote places like the Sahara Desert when trying to figure out where to put CHIME. But remote locations come with their own challenges, like ensuring safety and access for scientists, and the extra cost of building in unpopulated areas.

Another option, Shostak suggests, is to move radio astronomy projects to the far side of the Moon, which is shielded from frequencies originating on Earth. The obvious problem, he said, is that these projects don’t have the astronomical amount of funding necessary for a lunar mission, so they’re Earthbound for now.

To prevent the death of radio astronomy, Tapping said that astronomers have to work more closely with communications companies for solutions. The introduction of more low-power transmitters for smartphones and other technologies could reduce the amount of unwanted emissions, and increase the battery life of those products, too. But Tapping pointed out that would affect these companies’ bottom line, so the risk of increased interference lives on.

“There’s a Darwinian struggle going on,” Tapping said. “But I’ll be honest, there always has been.”


USA and Japan’s giant robot battle was a slow, brilliant mess

The oft-delayed giant robot fight has finally taken place. On Tuesday, Team USA’s mechs scrapped it out with Japan’s Kuratas in an abandoned steel mill for the world to watch. There could only be one victor, and it proved to be the red, white, and blue. Yes, the MegaBots team representing America came out on top, but not before three gruelling rounds of robopocalypse.

Those who tuned into Twitch to view the action saw Team USA’s Iron Glory get knocked down by Japan’s Kuratas bot straight out of the gate, its paintball cannon clearly no match for its 13-foot rival’s half-ton fist. In the second round, the MegaBots pilots came back with the newer Eagle Prime machine, decked out with a mechanical claw and a Gatling gun. But they still struggled to land a deadly blow, instead getting stuck to their foe, with Kuratas’ drone sidekick making life that much harder. Then, in the final round, things got grisly: Eagle Prime whipped out a chainsaw to dismember Suidobashi Heavy Industry’s juggernaut and end the carnage.

Okay, so Team USA had the unfair advantage of using two bots, and the entire event may have been as choreographed as a WWE match, but it was strangely watchable regardless.

With a win under its belt, the MegaBots team now wants to start a full-blown giant robots sports league. And, there’s at least one contender waiting in the wings.


Pastry Chefs Forced to Get Creative as Vanilla Prices Soar

As Hurricane Harvey barreled toward Texas, Rebecca Masson, owner of Houston’s Fluff Bake Bar, thought about what was most important to her; what she had to keep safe. She ran to her pantry, grabbed the last 10 quarts of vanilla she had, and sped to shelter. At a time when top vanilla producers are charging $600 to $750 per kilogram for vanilla beans, Masson’s stash of vanilla was nothing short of liquid gold. “I could not risk it being flooded or stolen,” she says. “To lose all my vanilla? That would be no joke.”

Bakers and ice cream makers across the country have been crushed by the price surge for vanilla, which spiked after a cyclone hit Madagascar, the world’s leading producer, on March 7. Vanilla beans now sell for around $600 per kilogram, up from roughly $100 in 2015, and pure vanilla extract runs near $500 per gallon, up from $70 a gallon in 2015.
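For scale, the quoted figures work out to roughly a sixfold jump for beans and a slightly steeper one for extract:

```python
# Back-of-the-envelope arithmetic on the prices quoted above (USD).
bean_2015, bean_now = 100, 600        # per kilogram of vanilla beans
extract_2015, extract_now = 70, 500   # per gallon of pure extract

bean_multiple = bean_now / bean_2015            # 6.0x since 2015
extract_multiple = extract_now / extract_2015   # ~7.1x since 2015
```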

While price hikes due to weather or a poor harvest are nothing new, the current vanilla crisis is unique. “The increase feels different than any other price hike we have seen because it is both prolonged and dramatic,” says Allison Kave, who co-owns Brooklyn bar and bakery Butter & Scotch.

“I’ve seen hikes before,” Masson says, recalling a 2005 surge when vanilla bean prices doubled. “But six to seven months later, prices went back down.” Not so this time. Instead, prices have shown no signs of softening.

Craig Nielsen, VP of sustainability at Nielsen-Massey, which has been in the business of making vanilla since 1907, says his company does not expect a change in price anytime soon. Vanilla plants take about three years to mature and produce beans. When the cyclone hit this spring, it tore through the main vanilla-growing areas in Madagascar, known as the SAVA region. Not only were crops devastated, but the surrounding trees, essential to filter sunlight and diffuse the heat hitting the vanilla vines, were also decimated. This means future crops may also be damaged or die from the stress of too much sun.

But Nielsen says the price hike is about more than the cyclone. In 2007, vanilla production began to decline in alternate growing regions (regions outside of Madagascar) because prices had fallen so low. It makes sense: farmers were not willing to invest the time and labor to grow and harvest vanilla in that depressed market, and supply started to decline.

Then, in 2015, vanilla prices started to climb as consumers began demanding natural ingredients in their candy bars, ice cream, and cakes. In November 2015, Hershey’s announced that it would swap out the artificial ingredient “vanillin” for the real deal in its kisses and chocolate bars. The move was the first in a series of changes to remove all artificial ingredients from the chocolates. With big food demanding real vanilla, prices started to climb to $150, then $200, then $275 a gallon, according to Masson. Add on a cyclone, and the three- to four-year life cycle of the crop, and prices went through the roof.

Some makers, like Amy Keller of Jane’s Ice Cream in Kingston, New York, were smart enough to stockpile vanilla at the first sign of a price surge a few months ago. But Keller is already worrying about what will happen when she runs out, as prices have gone up not only for Madagascar vanilla (which accounts for 75 to 80 percent of world supply) but for vanilla from other sources — Indonesia, Mexico, Uganda, India — because of the heightened demand.

“If I could increase the price of my ice cream at the same percentage as the rising price of vanilla, I’d be doing really well right now — like, really well,” says Peter Arendsen, owner of the wholesale ice cream company Ice Cream Alchemy. Unfortunately, he can’t, so for now, he eats the cost. As does Ample Hills Creamery in Brooklyn, where co-owner Jackie Cuscuna says she will not pass the cost on to her customers, but notes that she has stopped introducing new flavors made with vanilla.

Others have had to take more severe action. This summer, the organic ice cream company Blue Marble stopped selling its vanilla base to its wholesale customers, instead offering sweet cream or buttermilk flavors. Elsewhere, New York City pastry chef Fany Gerson, who relies on vanilla for her La Newyorkina popsicles and her doughnuts at Dough, took to milking the most out of every pod: She uses the beans once, then soaks them, uses the liquid that results, then dries them and grinds them into a vanilla sugar.

Four months ago, when Eric Berley, who co-owns the Philadelphia ice cream shop Franklin Fountain with his brother Ryan, started paying $544 a gallon for vanilla, he crunched the numbers and estimated that he would have to spend $22,000 more on vanilla this summer than last. To mitigate losses, he painstakingly reviewed every ice cream recipe and held blind taste tests with lower amounts of vanilla. The tweaked recipes have helped somewhat. “We didn’t have to take the full hit,” he says.

The soaring cost of vanilla did force prices up at Butter & Scotch; on August 1, the price of its whole vanilla birthday cake went up to $72, up from $60. “It was a really hard decision, but we’d seen the price increase so dramatically,” says Kave. Kave and co-owner Keavy Blueher have also abandoned offering homemade cream soda (made from whole vanilla beans), and, like Berley, have tweaked recipes to use the least amount of vanilla possible. They’ve even looked into making their own vanilla extract, but after doing the math on pricing on the beans and labor, found it would not make sense.

Imitation product is available, sure, but most bakers worth their weight in frosting won’t touch the stuff. “I don’t use anything artificial or made in a lab,” says Masson, who managed to find a blend of Tahitian and Madagascar vanilla extract at $1.72 an ounce ($500/case) in July. But she’s not sure what she will do when her vanilla runs out: She says the case is already up to $991. “Vanilla is all I think about,” she says. “I dream about it. Because at this rate, I just won’t be able to afford it.”

While bakers and makers are reeling, there may be a silver lining in this story after all. Nielsen points out that previously low price levels were not sustainable for the farmers, because of how labor intensive the crop is to grow, harvest, and produce. “There needed to be an adjustment in price to keep farmers interested in growing and maintaining the vines,” he says.

Nielsen predicts future vanilla prices will undergo a measured, not dramatic, price decline because of the continued strong global demand, tied to a commitment by large food manufacturers to use natural versus artificial flavors. A more moderate price, somewhere around $100 or $150 a gallon, might be the best of both worlds.


Why is our universe three dimensional? Cosmic knots could untangle the mystery

Next time you’re untangling your earbuds in frustration, here’s an idea to help put it in perspective: knots may have played a crucial part in kickstarting our universe, and without them we wouldn’t live in three dimensions. That’s the strange story pitched by a team of physicists in a new paper, and the idea actually helps plug a few plot holes in the origin story of the universe.

Our universe has three spatial dimensions. That’s such a basic fact of reality that most people don’t ever stop to question why it’s the case. But in theory, three dimensions seems like a somewhat arbitrary number. Why doesn’t our universe have four, or five, or 11 dimensions? The question has plagued physicists, but trying to answer it has all too often been relegated to the “too-hard” basket.

After five years of tackling the problem, an international team of physicists has developed a theory that not only explains how the universe arose in its three dimensional state, but also solves several other mysteries of its birth and growth. The key is a fairly common element of the Standard Model of particle physics called a flux tube.

Flux tubes are flexible strands of energy that bind elementary particles together – linking quarks and antiquarks with the help of gluons. But as the particles drift apart, they can eventually break the flux tube between them. That gives off a burst of energy that creates a new quark-antiquark pair, which bind to the existing particles to form two complete pairs.

Flux tubes are a well known phenomenon, but for the new theory the physicists found that by kicking those up to a higher energy level, they can solve some mysteries about why the universe happens to be exactly the way we see it.

In the early days of everything, the universe was just a hot, thick primordial soup called quark-gluon plasma. With so many elementary particles in close proximity, they would have created a whole mess of flux tubes. Most of these tubes would have quickly been destroyed though, since matter and antimatter annihilate each other when they meet, taking the flux tubes with them.

But there are times when flux tubes can survive longer than the particles that they link. If those particles move in just the right way, they can twist their flux tubes into knots, which are stable enough to exist on their own. And if several of these flux tubes intertwine, they can form an even more stable network of knots, which would have quickly filled the early universe.

The team soon realized this idea explained two long-standing issues with the currently-accepted idea of how the universe came to be. The story goes that in its first few moments, the universe underwent a period of extremely rapid expansion – from the size of a single proton to a grapefruit in less than a trillionth of a second. After that, expansion happened much more slowly, although it is currently accelerating.

But two questions about that story have never been properly answered: what triggered that sudden burst of expansion, and then why did it slow back down again? When the team calculated how much energy would be tied up in their knotty network, they realized it gave a convenient explanation for both of those.

“Not only does our flux tube network provide the energy needed to drive inflation, it also explains why it stopped so abruptly,” says Thomas Kephart, co-author of the study. “As the universe began expanding, the flux-tube network began decaying and eventually broke apart, eliminating the energy source that was powering the expansion.”

The story neatly fits in with existing ideas of the origins of everything. After the flux tube network breaks down, it releases particles and radiation into the universe, which then continues to expand and evolve the way other theories have explained.

That also brings us back to the question of why the universe is three dimensional. According to knot theory, knots can only exist in three dimensions: as soon as you add a fourth, they quickly unravel. That means that during the early period, the knotted flux tubes would have only caused rapid expansion in the three spatial dimensions. By the time the flux tube network broke down, the groundwork had already been laid for a 3D universe to evolve, and any higher dimensions that exist would remain tiny and essentially undetectable.

While the theory is certainly intriguing, it’s still a work in progress. Before the idea can be properly proposed, the researchers say they need to develop it further to allow it to make testable predictions about the nature of the universe.

And in the end, maybe tangled earbuds are a small price to pay, considering we might not exist without them.


Carnegie Mellon Solves 12-Year-Old DARPA Grand Challenge Mystery

Carnegie Mellon’s Red Team went into the 2005 DARPA Grand Challenge as the favorite to win. They’d led the pack in the 2004 event, and had been successfully running their two heavily modified autonomous Humvees, H1ghlander and Sandstorm, on mock races across the desert for weeks without any problems. When H1ghlander set out on the 212 km (132 mi) off-road course at dawn on 8 October 2005, it led the pack, and gradually pulled away from Stanford’s robot, Stanley.

About two hours into the race, however, H1ghlander’s engine began to falter, causing it to struggle in climbs and never reach its top speed. Nobody could tell what the issue was, but it slowed the vehicle down enough to cost it more than 40 minutes of race time. Stanley passed H1ghlander and went on to win the race by just 11 minutes. Even after the event, CMU wasn’t able to figure out exactly what happened. But last weekend, at an event celebrating the 10th anniversary of the DARPA Urban Challenge (which CMU won handily with their autonomous Chevy Tahoe BOSS), they accidentally stumbled onto what went wrong.

Here’s the point in the race where H1ghlander started to falter; it’s part of a fantastic NOVA documentary on the DARPA Grand Challenge which you should watch in its entirety if you have time:

Even the DARPA Grand Challenge winner, Stanford University, seemed a bit surprised by how things turned out. “It was a complete act of randomness that Stanley actually won,” Stanford team lead Sebastian Thrun later said. “It was really a failure of Carnegie Mellon’s engine that made us win, no more and no less than that.”

Here are some excerpts from Red Team’s race logs recorded immediately following the Grand Challenge:

October 11: The root cause that capped H1ghlander’s speed and crippled its climbs is not yet known. Requested speeds above 20 mph were under-achieved, even on the long, straight, level roads. H1ghlander didn’t even reach intended speeds going downhill. H1ghlander apparently stopped, rolled backwards, then re-climbed a few times. Weak climbing and stopping are not great practices for winning races. The capped speeds and weak climbs cost H1ghlander over 40 minutes of schedule time. The root cause is still a mystery.

October 12: H1ghlander’s engine was observed to be shaky immediately following the race. The first indication of possible engine trouble was observed when driving H1ghlander from the finish line to the inspection area with a human at the wheel. The engine was running very rough and almost died repeatedly in just that 50 yards of driving with a human foot on the accelerator pedal.

This engine problem is unlike any one that we have seen in the past, as engine performance is severely degraded at and anywhere near idle. Data indicated no limp home mode, no safety mode, and no low-torque mode. Detailed fuel, oil and transmission samples will be analyzed. We do not yet know the root cause that slowed H1ghlander’s driving on race day.

It turned out that the fuel was okay. The oil and transmission fluid were also okay. The electrical system was fine too. With the DARPA Urban Challenge up next and a completely new vehicle under development for that, the CMU team moved on.

Last week, CMU celebrated the 10th anniversary of BOSS’ DARPA Urban Challenge win in 2007. BOSS, Sandstorm, and H1ghlander were all pulled out of storage at CMU and tidied up a bit to be put on display. As H1ghlander’s engine compartment was being cleaned with the engine running, Spencer Spiker (CMU’s operations team leader for the DARPA challenges) leaned against the engine with his knee, and it started to die. This little box is what he was leaning against, as shown to Clint Kelly (who directed DARPA’s research programs in robotics and autonomous systems in the 1980s) by CMU Red Team leader William “Red” Whittaker.

The box is a filter that sits between the engine control module and the fuel injectors, one of only two electronic parts in the engine of a 1986 Hummer. Spencer discovered that just touching the filter would cause the engine to lose power, and actually pushing on it would kill the engine completely. But from a cold start, if the filter wasn’t being touched, the engine would run fine. There was nothing wrong with H1ghlander’s sensors or software: this filter cost H1ghlander 40 minutes of race time, and the win. “How about that, buddy!” Red said to Chris Urmson (who worked on perception for the Red Team during the DARPA challenges, and ran Google’s self-driving car program for seven years before starting his own autonomous vehicle company) at the CMU event, showing him the filter. “You’re off the hook!”

As to what may have caused this hardware failure in the first place, many team members at the CMU event suggested that it may have happened just a few weeks before the Grand Challenge, on September 19, when H1ghlander got into a bit of an accident after a 140-mile autonomous test:

Here’s an excerpt from a blog post by Vanessa Hodge, who worked on vehicle navigation and was following H1ghlander in a chase car that night:

H1ghlander was driving autonomously back to the entrance road so we could drive it back to the shop to pamper it before the race. We came to a part of the trail where there was a swamp on the left and a boulder-ridden mountain side on the right, with a road width a little bit larger than the vehicle. H1ghlander kicked up some thick dust and I slowed down to a stop to let the dust settle before catching up. The team in the chase car watched our vehicle display which monitors its actions while the dust settled. Problems appeared in the display, and one team member immediately hit the emergency pause button, but it was too late. In the second we lost visual, H1ghlander tracked off to the right of the path up the slope, slid on its side and flipped entirely to the other side.

By September 22, H1ghlander was back up and running, “hot and strong.” But, that filter may have taken some damage that was difficult or impossible to diagnose, and it ended up failing at the worst possible time.

While it’s impossible to know how a DARPA Grand Challenge win by H1ghlander might have changed autonomous car history, the people I spoke with at CMU generally seemed to feel like everything worked out for the best. BOSS had a convincing win at the DARPA Urban Challenge in 2007, and Stanley’s performance at the Grand Challenge helped to solidify Stanford’s place in the field. Roboticists from both CMU and Stanford helped to form the core of Google’s self-driving car program in California, and today, Pittsburgh is one of the places where both established companies and startups come to do self-driving research, development, and testing. There are few lingering hard feelings about what happened, and everyone involved has long since moved on to bigger and better things. But all the same, it’s nice that at last, this final mystery has been solved.


How Cities Are Trying to Convince Landlords to Rent to the Homeless

Families wait years to get off the government’s waiting list for a rental voucher, sometimes while living in a homeless shelter. When they finally get that housing aid, they often struggle to find landlords willing to rent to them.

Most landlords screen out people who have a criminal background, poor credit or a history of evictions, making it difficult for voucher holders to find somewhere to live, even when they can afford rent. In fact, it’s common for people to lose their vouchers — which have expiration dates — after months of unsuccessful searching for a home.

To ease landlords’ worries and house more of the homeless, a growing number of cities are offering to reimburse landlords for certain losses — unpaid back rent or repairs for tenant-caused damages — that result from accepting applicants who have rental vouchers.

“Many, many communities are doing this, and it’s out of necessity,” says Elisha Harig-Blaine, who works on affordable housing issues at the National League of Cities. “They simply can’t get people placed into housing with these subsidies.”

This month, Boston and the District of Columbia announced their own “housing guarantee” or “risk mitigation” programs.

In Boston, the city will reimburse landlords for up to $10,000 in unpaid back rent or property damages that go beyond normal wear and tear. In D.C., a nonprofit is raising $500,000 in private funds to cover up to $5,000 in landlord costs per tenant. In both places, program staff will be available to address landlord complaints and provide case management for the tenants.

The question is, will that be enough to convince landlords to accept tenants who pay with rental vouchers?

In many of the cities that have these programs, affordable housing is hard to find, but renters with clean criminal and financial backgrounds are not.

“At the end of the day, real estate is a business. These landlords want to do the right thing, but we’re talking about their livelihood,” says Harig-Blaine, who has attended landlord recruitment events in nine communities across the country.

Landlords, he says, don’t want to deal with missed payments or other trouble that might come with renting to someone who was recently homeless.

Nevertheless, local officials in D.C. — which is getting 800 new residents every month and has some of the country’s highest rents — are optimistic.

Neil Albert, president and executive director of the DowntownDC Business Improvement District, the nonprofit raising the money, told Governing he thinks the risk funds will spur landlords to “weigh our needs and give equal consideration” to voucher holders, “[r]ather than [renting] to the millennial who is just moving in from some other part of the country.”

There is no official number of landlord assistance programs, according to the U.S. Interagency Council on Homelessness, but they exist in Denver; Fargo, N.D.; Marin County, Calif.; Orlando, Fla.; Portland, Ore.; and Seattle-King County, Wash., which started one of the first almost a decade ago. Some states, such as Minnesota and Oregon, offer them as well.

Before launching its program, Boston researched them in other cities and found that participating landlords rarely had to use the risk funds, according to Boston’s Department of Neighborhood Development. Last year, in Seattle and King County, for example, participating landlords filed mitigation claims for only 15 percent of the renters covered by the program. Data on how many landlords participate in each city and how many people are housed through such programs, however, is not readily available.

D.C. officials, though, expect demand for the risk funds to be higher in their city.

“We think it will be a little different here in D.C. We think people will actually use this fund,” says Albert, adding that if it results in more units being rented to voucher-holders, then “that’s a great problem to have.”

One difference between the Boston and D.C. landlord programs is the funding and management structure. In Boston, the city is putting up the risk funds and managing its landlord relations on a two-year pilot basis. In D.C., a nonprofit business improvement district is raising funds — mostly from developers — and a local housing nonprofit is administering the program. That’s because landlords and property managers in D.C. pushed for a privately managed fund that could provide reimbursements faster than a government agency, says Albert.

More than 5 million Americans receive some kind of rental voucher from the state or federal government, according to the Center on Budget and Policy Priorities. To qualify, a person or household must be below the federal poverty line or make less than 30 percent of the area median income. Because the program is not an entitlement, less than a quarter of all eligible families receive housing assistance, and many households wait years before a voucher becomes available.

Landlord assistance programs are trying to address a chicken-and-egg problem, says Laura Zeilinger, D.C.’s director of human services. Landlords want renters who have jobs and earn a steady income. But stable housing is usually the first step to helping people get and keep a job. She’s hoping that landlords in D.C. will waive income requirements in their applications.

“Housing is an important foundation for people to be able to work and to achieve their potential,” she says. “It’s a really difficult thing for people to do while living in a shelter environment.”


The Dark History Behind Ouija Board’s Baltimore Origins

Charles Kennard always had his eye out for a chance to make a buck, but he was not the greatest, nor the luckiest, businessman. It appears that he wasn’t the most honest guy, either. The second child of a successful Delaware merchant, Kennard moved to Maryland’s Eastern Shore in the late 1880s after developing “secret” bone-mix recipes for fertilizer. (In fairness, everyone in the fertilizer business claimed a “secret” recipe.) Following initial success, his Chestertown plant went to auction due to a combination of drought, competition, and debt. But all was not lost. A Prussian immigrant named E.C. Reiche kept an office next to Kennard’s on the first floor of the four-story, wood-frame hotel in Chestertown’s tiny business district. A furniture maker turned coffin maker turned undertaker—not an atypical career progression for the day—Reiche was also an inveterate tinkerer and Kennard had another plan.

Back story: Two generations earlier, a pair of girls in upstate New York named the Fox sisters, claiming to be mediums able to interpret mysterious “knocks” from the other side, had launched a spiritualist movement that continued to hold sway across the country. In fact, in the aftermath of the Civil War, with so many husbands, fathers, and sons lost in the conflict’s bloody battles, spiritualism—the belief that the dead can speak to the living—had only gained steam with people desperate for a connection to departed loved ones and greater meaning for their own lives.

It’s in this context in 1886, during the period Kennard and Reiche shared a hallway, that newspaper reports began appearing about a “talking board” phenomenon sweeping Ohio, including an Associated Press story that ran in the local Kent County News. It’s also about this time, according to a later Baltimore American story, that Kennard and Reiche—most likely inspired by the AP account—began collaborating and making at least a dozen of their own “talking” boards.

“Reiche, the biggest coffin maker in town, is making these on the side,” explains Robert Murch, the world’s foremost talking-board historian, and it’s these prototypes that became the Ouija board. “But it’s Kennard, when he leaves Chestertown for Baltimore in 1890, where he continues in the fertilizer game, and starts a real-estate business, who begins pitching what he says is his talking-board invention to potential investors.”

After numerous rejections, Elijah Bond, a local attorney who claimed his sister-in-law was a strong medium, finally took an interest. Soon enough, the Kennard Novelty Company, which incorporated the day before Halloween 125 years ago, began manufacturing Ouija boards much as they appear today. Bond was right about his sister-in-law, too: Helen Peters proved convincing enough with Kennard’s new talking board to win over a skeptical U.S. patent office. She is credited not only with earning the stamp of legitimacy from the federal government, certifying that the board delivered as promised, but also with “receiving” the O-U-I-J-A name from the board itself, which told her the strange word meant “good luck.”

(In truth, the name “Ouija” was written on the necklace locket that Peters was wearing at the time.)

So, yes, an undertaker and an opportunist named Kennard invented the only patented board game—billed as both a mystical oracle for communicating with the spirits and wholesome amusement—ever to outsell Monopoly in a given year.

“It comes straight from the 19th-century séances,” says Nic Ricketts, curator at The National Museum of Play in Rochester, NY, noting that a glow-in-the-dark board and a classic version are still sold today. “There has never been another brand board game like it, and I don’t see it fading away any time soon.”

The story of the Ouija board, however, is more than a tale of snake oil salesmen duping the Victorian masses or, subsequently, a game of harmless fun at a million junior-high sleepovers. While it remains an amazingly enduring pop-culture phenomenon—tied to the rise of the horror movie/paranormal industrial complex—its saga is also about the universal desire to find answers to life’s biggest questions, the history of psychology, and even the development of neuroscience.

“It’s always been a board game, a parlor game, but it has always been more than a board game for some people, too,” Murch says. “In the 19th century, people had a much different relationship to death than we do today—it was much closer to their everyday experience. Now, we do everything we can in hopes of avoiding aging, let alone engage in any real thoughts of death. But in the 1800s, people only lived to be 50 years old. Mothers would have 12 children and six of them would die. Their parlor rooms were also their funeral rooms.”

Not surprisingly perhaps, there’s a dark side or two buried in Ouija’s origin story. There always is when money is at stake, and by the early 1890s, some 2,000 Ouija boards were already being sold a week. William Fuld, who worked for and invested in the Kennard Novelty Company—and eventually gained control of the Ouija business after the founder cashed out too early—went on to make millions manufacturing the board in Baltimore and elsewhere, but only after his brother was cut out of the company. Their ensuing lawsuits were no mere spat. William’s brother, Isaac, became so embittered that he had his baby daughter exhumed and relocated from the Fuld family gravesite during a cemetery renovation. The two sides of the family would not speak for 96 years.

And, tragically, William Fuld would suffer a fatal accident at his Harford Avenue factory, one that, he claimed in a 1919 Baltimore Sun story, the Ouija had told him to build. (“Prepare for big business.”) While he was overseeing the installation of a flag, an iron railing gave way and he fell off the roof of the structure, which still stands and has been converted into a senior apartment complex. “On his death bed—the coroner’s report said a broken rib pierced his heart—he made his children promise to never sell the Ouija out of the family,” says Murch.

Of course, Fuld’s family did sell—but not for four decades—to Parker Brothers, which promptly moved Ouija to its base of operations in Salem, MA. In 1967, the first year it was headquartered in the town infamous for its witch trials, Ouija sold two million boards.

By comparison, Monopoly—an early version was invented in 1903—wasn’t popular until the Great Depression, when it fulfilled a kind of fantasy escapism. Ouija, on the other hand, was a sensation from the outset, long before even its first film appearances, which date back to Hollywood’s beginnings.

But Ouija’s public image has always been complicated. Initially, the “mysterious oracle” was marketed as a game to enliven a party or encourage a little light-hearted intimacy for romantic—or would-be romantic—couples, who are often depicted in early advertisements with the board resting on their knees as they sit across from each other, both of their hands on the planchette. Norman Rockwell, who was fond of depicting the revealing moments of everyday life, painted a well-dressed suitor and young woman, chairs pulled face-to-face, playing with a Ouija board for the cover of The Saturday Evening Post in 1920.

Less well known is the Ouija board’s use as inspiration or as an “automatic” writing tool by acclaimed novelists and poets, such as Sylvia Plath, who wrote “Dialogue over a Ouija Board,” and Pulitzer Prize winner James Merrill. Merrill used notes from Ouija “consultations” in his 560-page epic poem, The Changing Light at Sandover, which contained messages from W.B. Yeats, friend Maya Deren, and the Archangel Michael.

But over time, the relative innocence of the Ouija board—or at least its nonpartisan relationship between good and evil—gave way to a more sinister reputation as Hollywood began utilizing it for darker purposes. After The Exorcist, in which actress Linda Blair’s character Regan explains to her mom, played by Ellen Burstyn, how she used the family’s Ouija board to ask questions of “Captain Howdy”—the demon who eventually takes possession of her soul—the board’s occult status was cemented.

Since then, it has shown up in more than 20 films, and made countless appearances in the ever-growing number of paranormal-themed TV shows. Forums around Ouija-associated phenomena populate the Internet, of course. Most recently, the 2014 movie Ouija did so well at the box office that Ouija 2 is already in the works. When it was released last fall, the movie so dramatically boosted board sales that petitions by evangelical Christian groups to ban the Ouija started popping up again. Catholic.com, a lay-run Catholic apologetics and evangelization website, describes Ouija as “far from harmless.”

Still, the most interesting thing about the Ouija board might be the latest research around it from University of British Columbia that shows it actually does work—just not in the way we might assume.

A few years ago, Sidney Fels, professor of electrical and computer engineering at UBC, brought out a Ouija board at a Halloween party attended by graduate students, including many who were foreign-born and unfamiliar with how it works. They assumed it required batteries. “‘No, you don’t need batteries. It will move,’ I told them,” Fels recalls. “I gave them some mystical explanation tied into Halloween and they had a good laugh.”

But lo and behold, when Fels returned later, the grad students were enthralled because the planchette was moving on its own. Or so it appeared. The mechanism at work was actually something known as the ideomotor effect, which refers to the influence of the unconscious mind on muscle movements. (Dr. William Benjamin Carpenter first identified the ideomotor effect in 1852, decades before Sigmund Freud’s theory of the unconscious mind, while investigating the unconscious mind’s ability to direct motor activity. Shortly thereafter, other researchers began linking that discovery to—you guessed it—spiritual phenomena.)

Days later, still fascinated by the students’ experience, Fels shared the story with colleague Ron Rensink, a psychology and computer science professor, and that got the ball rolling about whether the board could serve as a tool to look at unconscious knowledge.

“We didn’t know if we’d find anything, but when we did, the results really surprised us,” Fels says. When study participants were asked to answer or guess at a set of challenging questions, they were correct about 50 percent of the time. But when responding while using the board—which participants believed had the ability to “receive” correct answers from another person teleconferencing via a robot Ouija partner—they scored correctly upwards of 65 percent of the time.

In actuality, the robot was a ruse; it was not responding to the video-conferencing player, but subtly amplifying the study participants’ tiny, unconscious movements. “It was significant how much better they did on these questions,” Rensink says. “If you don’t think so, consider the difference playing roulette when the odds are 50-50 versus 65-35.”

The implication is that one’s unconscious is much smarter than anyone knew, capable of pulling up bits of stored information not accessible to the conscious mind.

A follow-up study replicated the findings, which the researchers reported in the academic journal Consciousness and Cognition.

Rensink believes the results open greater possibilities for further study. For example, is unconscious memory affected by Alzheimer’s and other neurodegenerative diseases in the same way as conscious memory?

It’s work that William Fuld—the guy who fell from the factory roof and is considered the “father” of the Ouija (he was also a state delegate and philanthropist)—would probably appreciate. When asked directly by a reporter if he believed in the Ouija’s mystical powers, he replied: “I should say not. I’m no spiritualist. I’m a Presbyterian.”

The discovery of the Ouija’s ability to tap into unconscious knowledge is not the only development in the talking board’s 125-year-old story, however. The reconciliation of William Fuld’s family with his brother Isaac’s clan after nearly a century of silence is the other compelling occurrence.

The two sides had long lost contact until Murch began posting his research on the web nearly two decades ago. That’s when Stuart Fuld, the then-sixtysomething grandson of Isaac Fuld, and Kathy Fuld, the granddaughter of William Fuld, separately reached out to Murch, in hopes of learning more about their ancestors. “I was talking to each one individually at first without the other one knowing it,” Murch recalls. “I was aware of the feud and didn’t want to upset either one, but then Kathy called one night and asked for Stuart’s phone number.”

“It turned out we were living five miles apart while growing up and didn’t know it,” says Kathleen Fuld, Stuart’s wife. (Stuart Fuld passed last year.)

The two sides of the family, which now include great-grandchildren and great-great grandchildren of the brothers, have been getting together regularly ever since.

While some of the descendants did hold on to Ouija and other talking-board memorabilia—Isaac later attempted to launch a talking board competitor named the “Oriole” board—no one, apparently, ever took a serious spiritual interest in Ouija. Not even when they were kids.

“Not me,” says Kathleen Fuld, chuckling. “I was a good Irish Catholic girl. I had eight cousins who were nuns.”

She adds, however, that Stuart did take a great deal of interest in learning about his grandfather and ancestors, as well as the history of the former family business—if not the surrounding mysticism—especially as he got older.

“I’ll tell you a funny story,” she says. “We went up to the Poconos for a golfing trip one year and there was a conference of priests taking place at the hotel where we stayed. I don’t remember why or how it came up, but Stuart ends up telling a group of priests we’re talking with that his family once made the Ouija board.

“All the priests immediately started making little crosses with their fingers,” Fuld continues. “They started asking Stuart all kinds of questions. They wanted to know the whole story and got the biggest kick out of that.”

Even better, the priests invited the couple to take advantage of the conference’s complimentary evening cocktail parties for the weekend—which they did.

“But it didn’t matter,” she adds. “Every time we saw those priests, in the elevator, or wherever, they’d start making those crosses with their fingers.”


These Two Small Letters Heralded the Beginning of Online Communication

An uncountable number of messages have been sent from one person to another via the internet in the years since 1969: on ARPANET message boards, the recently deceased AOL Instant Messenger, and the currently in-vogue Slack, to name a few platforms. Hard to believe, but this communication revolution started with just two letters.

Late at night on October 29, 1969, now celebrated as International Internet Day, the first message was sent over the Internet. Two groups of researchers in two separate facilities sat before rudimentary computer terminals, on the phone with each other, making yet another attempt at getting their machines to talk. Their planned first transmission wasn’t anything too fancy, Len Kleinrock, who headed the UCLA lab engaged in the research, told Guy Raz for NPR. But it turned out to be amazing anyway.

The UCLA researchers were trying to transmit the message “login,” as in a login command, to the computer at Stanford. Charley Kline, who sent the initial transmission from UCLA, said they’d tried this before with no success. This time, however, something happened. “The first thing I typed was an L,” he told NPR. Stanford computer scientist Bill Duvall said over the phone that he’d received it. He typed the O: it also went through. Then came the G: “And then he had a bug and it crashed.”

Later that night, after some more tinkering, they successfully transmitted the whole word. Then they went home to get some sleep, having no way of knowing what would ensue because of this development.

“We should have prepared a wonderful message,” Kleinrock told Raz. It would have placed them in the tradition of discoverers with pithy statements: “What hath God wrought,” “one giant leap for mankind,” etcetera. Samuel Morse, Neil Armstrong and the others “were smart. They understood public relations. They had quotes ready for history.”

But “lo,” the accidentally abbreviated first transmission, would have to do, and in fact it works quite well. Merriam-Webster defines the word as an exclamation “used to call attention or to express wonder or surprise” that has a history of use going back as far as the 12th century. Its predecessor, the Middle English “la,” goes back even farther. According to the Oxford English Dictionary, “la” can be found in Beowulf and the Ormulum, amongst other works. Its more modern incarnation is found in the King James Bible, in the first scene of Hamlet and in Tennessee Williams’s A Streetcar Named Desire, to name a few examples.

What the teams at UCLA and Stanford had pioneered was the ARPANET, the predecessor to the internet, which has come to contain all of the above texts as well as many, many more pedestrian statements. By the spring of 1971, it could be found at 19 research institutions, writes Leo Beranek for the Massachusetts Historical Review, and it has only spread from there.