Why it’s time to lay the stereotype of the ‘teen brain’ to rest

A deficit in the development of the teenage brain has been blamed for teens’ behavior in recent years, but it may be time to lay the stereotype of the wild teenage brain to rest. Brain deficits don’t make teens do risky things; lack of experience and a drive to explore the world are the real factors.

As director of research at a public policy center that studies adolescent risk-taking, I study teenage brains and teenage behavior. Recently, my colleagues and I reviewed years of scientific literature about adolescent brain development and risky behavior.

We found that much of the risk behavior attributed to adolescents is not the result of an out-of-control brain. As it turns out, the evidence supports an alternative interpretation: Risky behavior is a normal part of development and reflects a biologically driven need for exploration – a process aimed at acquiring experience and preparing teens for the complex decisions they will need to make as adults.

We often characterize adolescents as impulsive, reckless and emotionally unstable. We used to attribute this behavior to “raging hormones.” More recently, it’s been popular in some scientific circles to explain adolescent behavior as the result of an imbalance in the development of the brain.

According to this theory, the prefrontal cortex, the center of the brain’s cognitive-control system, matures more slowly than the limbic system, which governs desires and appetites including drives for food and sex. This creates an imbalance in the adolescent brain that leads to even more impulsive and risky behavior than seen in children – or so the theory goes.

This idea has gained currency to the point where it’s become common to refer to the “teenage brain” as the source of the injuries and other maladies that arise during adolescence.

In my view, the most striking failure of the teen brain hypothesis is that it conflates different kinds of risky behavior, only a fraction of which support the notion of the impulsive, unbridled adolescent.

Adolescents as explorers

What clearly peaks in adolescence is an interest in exploration and novelty seeking. Adolescents are by necessity engaged in exploring essential questions about themselves – who they are, what skills they have and who among their peers is worth socializing with.

But these explorations are not necessarily conducted impulsively. Rising levels of dopamine in the brain during adolescence appear to drive an increased attraction to novel and exciting experiences. Yet this “sensation seeking” behavior is also accompanied by increasing levels of cognitive control that peak at the same age as adolescents’ drive for exploration. This ability to exert cognitive control peaks well before structural brain maturation, which peaks at about age 25.

Researchers who attribute this exploratory behavior to recklessness are more likely falling prey to stereotypes about adolescents than assessing what actually motivates their behavior.

If adolescents were truly reckless, they should show a tendency toward risk-taking even when the risks of bad outcomes are known. But they don’t. In experiments where the probabilities of their risks are known, adolescents take fewer risks than children.

In experiments that mimic the well-known marshmallow test, in which waiting for a bigger reward is a sign of self-control, adolescents are less impulsive than children and only slightly more so than adults. While these forms of decision-making may place adolescents at a somewhat greater risk of adverse outcomes than adults, the change in this form of self-control from mid-adolescence to adulthood is rather small, and individual differences are great.

There is a specific kind of risk-taking that resembles the imbalance that the brain-development theory points to. It is a form of impulsivity that is insensitive to risk due to acting without thinking. In this form of impulsivity, the excitement of impulsive urges overshadows the potential to learn from bad experience. For example, persons with this form of impulsivity have trouble controlling their use of drugs, something that others learn to do when they have unpleasant experiences after using a drug. Youth with this characteristic often display this tendency early in childhood, and it can become heightened during adolescence. These teens do in fact run a much greater risk of injury and other adverse outcomes.

But it is important to realize that this is characteristic of only a subset of youth with weak ability to control their behavior. Although the rise in injurious and other risky behavior among teens is cause for concern, this represents much more of a rise in the incidence of this behavior than of its prevalence. In other words, while this risky behavior occurs more frequently among teens than children, it is by no means common. The majority of adolescents do not die in car crashes, become victims of homicide or suicide, experience major depression, become addicted to drugs or contract sexually transmitted infections.

Furthermore, the risks of these outcomes among a small segment of adolescents are often evident much earlier, as children, when impulse control problems start to appear.

The importance of wisdom

Considerable research suggests that adolescence and young adulthood is a heightened period of learning that enables a young person to gain the experience needed to cope with life’s challenges. This learning, colloquially known as wisdom, continues to grow well into adulthood. The irony is that most late adolescents and young adults are more able to control their behavior than many older adults, resulting in what some have called the wisdom paradox. Older adults must rely on the store of wisdom they have built to cope with life challenges because their cognitive skills begin to decline as early as the third decade of life.

A dispassionate review of existing research suggests that what adolescents lack is not so much the ability to control their behavior, but the wisdom that adults gain through experience. This takes time and, without it, adolescents and young adults who are still exploring will make mistakes. But these are honest mistakes, so to speak, because for most teens, they do not result from a lack of control.

This realization is not so new, but it serves to place the recent neuroscience of brain development in perspective. It is adolescents’ inexperience that makes them vulnerable to mishaps. And for those with weak cognitive control, the risks are even greater. But we should not let stereotypes of this immaturity color our interpretation of what they are doing. Teenagers are just learning to be adults, and this inevitably involves a certain degree of risk.



Astronomers Are ‘Racing Against Time’ as Humanity Clogs the Air With Radio Signals

In a remote valley in the British Columbia interior, a massive telescope called the Canadian Hydrogen Intensity Mapping Experiment (CHIME) is scouring the skies for traces of dark energy, a mysterious force that drives the expansion of the universe but has never been directly detected. As if hunting for dark energy isn’t challenging enough, radio astronomers fear it might not be long before proliferating tech like smartphones and space satellites make these kinds of studies—and even the ongoing search for aliens—impossible, due to radio interference.

According to Mark Halpern, principal investigator at CHIME and astronomy professor at the University of British Columbia, the growing number of communications satellites in space as well as technologies on the ground that emit radio waves are interfering with CHIME’s data-collecting, and could potentially do more damage in the future. If radio astronomers aren’t able to do their research, it could prevent us from making future discoveries about our universe.

“I feel like we’re racing against time to get CHIME done while we still can,” Halpern said.


Radio frequencies are everywhere. They’re used to transport information for radio broadcasts, television, and your cell phone. As anybody who’s ever awkwardly spoken to someone on a walkie-talkie that accidentally hooked up to the wrong frequency knows, it’s really easy to interrupt those channels when one signal bleeds into the other.

It might not be such a big deal when your television gets a little bit of static. But interference can cause radio astronomers to lose their research data. Radio astronomy has led to the discovery of quasars, the imaging of asteroids, and showed us the cosmic microwave background, which is leftover radiation from the Big Bang. Just this week, scientists discovered a new source of gravitational waves: the violent merger of two neutron stars 130 million light years away. Astronomers will study the resulting radio waves to learn more about the energy of a neutron star collision, and how much mass is ejected.

The International Telecommunication Union (ITU), the United Nations’ agency for policing frequencies, provides recommendations for how radio frequencies should be distributed. The agency sets aside a band of radio waves specifically for radio astronomy projects.

But the nature of CHIME’s experiment makes it so that the telescope has to access a broader range of frequencies outside of that spectrum to map out more parts of the universe at once.

Halpern said the wide range of frequencies they were accessing wasn’t a problem when they initially started the project. The telescope sits in a radio-quiet zone in a valley near Penticton, BC, where government-approved signage tells drivers to turn off all electronic devices. But he said that around three years ago, a series of television stations started opening up near Penticton, bleeding into their signal.

Although the valley protects CHIME against local radio waves, the television satellites locked in orbit still cause interference. And since the scientists don’t own the frequencies they use—and will likely never be able to afford to buy the spectrum needed to perform their experiments, as frequencies sell for billions—they can’t do much about it. Halpern expects the rest of their radio waves will be auctioned off for television eventually.

“It’s completely not in the cards that CHIME could use any part of its budget to buy its own frequency,” Halpern said. In terms of funding and priority, he said, communications services dwarf the resources of radio astronomers.


Satellites aren’t the only thing bleeding into other frequencies. According to Ken Tapping, an astronomer at the National Research Council’s Dominion Radio Astrophysical Observatory in Penticton, where CHIME is also based, everyday tech like smartphones and those new wireless car keys emit accidental radio waves, called “unwanted emissions,” that interfere with other frequencies.

“These things splatter across all paths of the radio spectrum. They’re produced inconsequentially,” Tapping told me in a phone interview.

The ITU recommends that radio astronomy studies expect a maximum of five percent of their total data lost due to interference. It might not seem like a lot, but for experiments that depend on tracking short radio wave bursts, it could mean losing information that’s crucial to the experiment. Tapping said that in the future, even if everybody sticks to their allotted radio emissions, the amount of interference will increase due to the sheer amount of technology.

“They’re dirt cheap, they’re imported from abroad, and they’re being deployed all over the place in a currently uncontrolled fashion,” Tapping said, referring to the proliferation of cheap electronic devices. “If these reach a certain density of use, then radio astronomy could become rather difficult.”

Tapping hopes that the ITU will be able to cap its losses at five percent. If a radio astronomy study is losing more of its data than that, he said, it might cause problems with funding, since funding sources could feel they’re not getting an adequate return on their investment.

Growing radio interference will also make it harder for scientists to receive signals from extraterrestrial life. At the Search for Extraterrestrial Intelligence (SETI) Institute in Mountain View, California, radio astronomers are constantly looking for any sort of message that didn’t come from humans. But, according to SETI senior astronomer Seth Shostak, the signals they look for are the same ones humans produce every day.

“The question is ‘Is this ET on the line, or is it another telecommunications satellite passing overhead?'” Shostak told me. When SETI detects a promising radio signal, its astronomers check whether it moves with the rotation of the Earth, and whether their other receivers also pick it up, since a signal detected everywhere indicates either a human-made satellite in orbit or a ground-based radio system.

The clear solution for radio astronomers is to move their instruments to remote areas with little human presence. This can occasionally result in rules that are somewhat dystopian: China banned any electronic devices and created a resident-free zone around its massive new radio telescope to ensure there would be no interference, relocating 9,000 residents in the process.

Halpern said his team did field measurements in remote places like the Sahara Desert when trying to figure out where to put CHIME. But remote locations come with their own challenges, like ensuring safety and access for scientists, and the extra cost of building in unpopulated areas.

Another option, Shostak suggests, is to move radio astronomy projects to the far side of the Moon, which is shielded from radio frequencies from Earth. The obvious problem, he said, is that these projects don’t have the astronomical amount of funding necessary for a lunar mission, so they’re Earthbound for now.

To prevent the death of radio astronomy, Tapping said that astronomers have to work more closely with communications companies for solutions. The introduction of more low-power transmitters for smartphones and other technologies could reduce the amount of unwanted emissions, and increase the battery life of those products, too. But Tapping pointed out that would affect these companies’ bottom line, so the risk of increased interference lives on.

“There’s a Darwinian struggle going on,” Tapping said. “But I’ll be honest, there always has been.”


Why is our universe three dimensional? Cosmic knots could untangle the mystery

Next time you’re untangling your earbuds in frustration, here’s an idea to help put it in perspective: knots may have played a crucial part in kickstarting our universe, and without them we wouldn’t live in three dimensions. That’s the strange story pitched by a team of physicists in a new paper, and the idea actually helps plug a few plot holes in the origin story of the universe.

Our universe has three spatial dimensions. That’s such a basic fact of reality that most people don’t ever stop to question why it’s the case. But in theory, three dimensions seems like a somewhat arbitrary number. Why doesn’t our universe have four, or five, or 11 dimensions? The question has plagued physicists, but trying to answer it has all-too-often been relegated to the “too-hard” basket.

After five years of tackling the problem, an international team of physicists has developed a theory that not only explains how the universe arose in its three dimensional state, but also solves several other mysteries of its birth and growth. The key is a fairly common element of the Standard Model of particle physics called a flux tube.

Flux tubes are flexible strands of energy that bind elementary particles together – linking quarks and antiquarks with the help of gluons. But as the particles drift apart, they can eventually break the flux tube between them. That gives off a burst of energy that creates a new quark-antiquark pair, which bind to the existing particles to form two complete pairs.

Flux tubes are a well known phenomenon, but for the new theory the physicists found that by kicking those up to a higher energy level, they can solve some mysteries about why the universe happens to be exactly the way we see it.

In the early days of everything, the universe was just a hot, thick primordial soup called quark-gluon plasma. With so many elementary particles in close proximity, they would have created a whole mess of flux tubes. Most of these tubes would have quickly been destroyed though, since matter and antimatter annihilate each other when they meet, taking the flux tubes with them.

But there are times when flux tubes can survive longer than the particles that they link. If those particles move in just the right way, they can twist their flux tubes into knots, which are stable enough to exist on their own. And if several of these flux tubes intertwine, they can form an even more stable network of knots, which would have quickly filled the early universe.

The team soon realized this idea explained two long-standing issues with the currently-accepted idea of how the universe came to be. The story goes that in its first few moments, the universe underwent a period of extremely rapid expansion – from the size of a single proton to a grapefruit in less than a trillionth of a second. After that, expansion happened much more slowly, although it is currently accelerating.

But two questions about that story have never been properly answered: what triggered that sudden burst of expansion, and then why did it slow back down again? When the team calculated how much energy would be tied up in their knotty network, they realized it gave a convenient explanation for both of those.

“Not only does our flux tube network provide the energy needed to drive inflation, it also explains why it stopped so abruptly,” says Thomas Kephart, co-author of the study. “As the universe began expanding, the flux-tube network began decaying and eventually broke apart, eliminating the energy source that was powering the expansion.”

The story neatly fits in with existing ideas of the origins of everything. After the flux tube network breaks down, it releases particles and radiation into the universe, which then continues to expand and evolve the way other theories have explained.

That also brings us back to the question of why the universe is three dimensional. According to knot theory, knots can only exist in three dimensions: as soon as you add a fourth, they quickly unravel. That means that during the early period, the knotted flux tubes would have only caused rapid expansion in the three spatial dimensions. By the time the flux tube network broke down, the groundwork had already been laid for a 3D universe to evolve, and any higher dimensions that exist would remain tiny and essentially undetectable.

While the theory is certainly intriguing, it’s still a work in progress. Before the idea can be formally put forward, the researchers say they need to develop it further so that it makes testable predictions about the nature of the universe.

And in the end, maybe tangled earbuds are a small price to pay, considering we might not exist without them.


National Aquarium | Cephalopods: Arms or Tentacles?

Cephalopods are a class of marine mollusks including the octopus, cuttlefish and squid. The name cephalopod means “head-foot” because they have limbs attached to their head, and these mollusks are well-known for their arms and tentacles. And while all cephalopods have arms, not all cephalopods have tentacles.

Tentacles are long, flexible organs found on invertebrate animals. They are important for feeding, sensing and grasping. Tentacles are longer than arms, are retractable and have a flattened tip that is covered in suckers.

Arms are similar to tentacles, but still distinctly different. Arms are covered with suckers that help with grasping food items. In addition, these arms are useful to attach to surfaces while resting.

The names may seem interchangeable, but when it comes to cephalopods, there’s a difference between arms and tentacles. An easy way to spot the difference is that arms have suckers along their entire length, while tentacles only have suckers at the tip.

This means that octopuses have eight arms and no tentacles, while other cephalopods—such as cuttlefish and squids—have eight arms and two tentacles.


Why Bacteria in Space Are Surprisingly Tough to Kill | Smart News | Smithsonian

Bacteria in space may sound like the title of a bad science fiction movie, but it’s actually a new experiment that tests how the weightlessness of space can change microbes’ antibiotic resistance.

While the vacuum of space may be a sterile environment, the ships (and eventually habitats) humans travel and live in are rife with microbial life. And keeping these microbes in check will be vital for the health of the crew and even the equipment, reports George Dvorsky for Gizmodo.

Past research has shown that bacteria that would normally collapse in the face of standard antibiotics on Earth seem to resist those same drugs much more effectively in the microgravity of space, and even appear more virulent than normal. To figure out how weightlessness gives bacteria a defensive boost, samples of E. coli took a trip to the International Space Station in 2014 so astronauts could experiment with antibiotics.

Now, in a new study published this week in the journal Frontiers in Microbiology, researchers demonstrate that microgravity gives bacteria some nifty tricks that make them a lot less susceptible to antibiotics. Their main defense: getting smaller.

The E. coli in space showed a 73 percent reduction in their volume, giving the bacteria much less surface area that can be exposed to antibiotic molecules, Dvorsky reports. Along with this shrinkage, the cell membranes of the E. coli grew at least 25 percent thicker, making it even harder for any antibiotic molecules to pass through them. And the defense mechanisms weren’t only at the individual level—the E. coli also showed a greater propensity for growing together in clumps, leaving the bacteria on the edges open to danger but insulating those within from exposure to the antibiotics.
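To get a feel for how much protection that shrinkage buys, here is a back-of-the-envelope calculation. It assumes, purely for illustration, a roughly spherical cell (E. coli are actually rod-shaped, so the real numbers differ), in which case surface area scales with volume to the two-thirds power:

```python
# Rough estimate: how much surface area does a 73% volume reduction remove?
# Assumes a spherical cell (an idealization; E. coli are rod-shaped).
volume_fraction = 1 - 0.73          # 27% of the original volume remains

# For a sphere, radius ~ V^(1/3), so surface area ~ V^(2/3)
surface_fraction = volume_fraction ** (2 / 3)

print(f"Remaining surface area: {surface_fraction:.0%}")  # roughly 42%
print(f"Surface area lost:      {1 - surface_fraction:.0%}")
```

Under this idealization, shrinking by 73 percent in volume cuts the membrane area exposed to antibiotics by more than half.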

All of these differences allowed the E. coli on the International Space Station to grow to 13 times the population of the same bacteria grown on Earth under the same conditions, according to the study. And understanding why and how these defense mechanisms form could help doctors better prevent the scourge of antibiotic resistance here on Earth.

Perhaps even more terrifying, compared to the bacteria grown in the same conditions on Earth, the space-bound E. coli developed fluid-filled sacs called vesicles on their cell membranes, giving them tools that can make them even better at infecting other cells. This means that astro-bacteria could make people ill more easily, creating an infection that is harder to treat.

As people head farther out into space, many still worry about what will happen when we meet alien bacterial life. But travelers into the great beyond may also need to keep a close eye on the bacteria we already thought we knew.


Brewing a great cup of coffee depends on chemistry and physics

Coffee is unique among artisanal beverages in that the brewer plays a significant role in its quality at the point of consumption. In contrast, drinkers buy draft beer and wine as finished products; the only consumer-controlled variable is the temperature at which they’re drunk.

Why is it that coffee produced by a barista at a cafe always tastes different than the same beans brewed at home?

It may be down to their years of training, but more likely it’s their ability to harness the principles of chemistry and physics. I am a materials chemist by day, and many of the physical considerations I apply to other solids apply here. The variables of temperature, water chemistry, particle size distribution, ratio of water to coffee, time and, perhaps most importantly, the quality of the green coffee all play crucial roles in producing a tasty cup. It’s how we control these variables that allows for that cup to be reproducible.

How strong a cup of joe?

Besides the psychological and environmental contributions to why a barista-prepared cup of coffee tastes so good in the cafe, we need to consider the brew method itself.

We humans seem to like drinks that contain coffee constituents (organic acids, Maillard products, esters and heterocycles, to name a few) at 1.2 to 1.5 percent by mass (as in filter coffee), and also favor drinks containing 8 to 10 percent by mass (as in espresso). Concentrations outside of these ranges are challenging to execute. There are a limited number of technologies that achieve 8 to 10 percent concentrations, the espresso machine being the most familiar.

There are many ways, though, to achieve a drink containing 1.2 to 1.5 percent coffee. A pour-over, Turkish, Arabic, Aeropress, French press, siphon or batch brew (that is, regular drip) apparatus – each produces coffee that tastes good around these concentrations. These brew methods also boast an advantage over their espresso counterpart: They are cheap. An espresso machine can produce a beverage of this concentration: the Americano, which is just an espresso shot diluted with water to the concentration of filter coffee.
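The Americano dilution follows directly from conservation of dissolved coffee mass. A short sketch, using illustrative numbers within the ranges quoted above (the shot mass and exact concentrations are assumptions, since real shots vary):

```python
# Diluting an espresso shot down to filter-coffee strength.
# Illustrative numbers; a real shot's mass and concentration vary.
espresso_conc = 0.09    # ~9% dissolved coffee by mass
target_conc = 0.0135    # ~1.35%, middle of the filter range (1.2-1.5%)
shot_mass = 30.0        # grams of espresso in the cup (assumed)

# Dissolved coffee is conserved:
#   shot_mass * espresso_conc = (shot_mass + water) * target_conc
water = shot_mass * (espresso_conc / target_conc - 1)
print(f"Add ~{water:.0f} g of water")  # ~170 g
```

So a 30-gram shot at 9 percent needs roughly 170 grams of water to land in the filter-coffee range, which matches the familiar proportions of an Americano.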

All of these methods result in roughly the same amount of coffee in the cup. So why can they taste so different?

When coffee meets water

There are two families of brewing devices within the low-concentration methods – those that fully immerse the coffee in the brew water and those that flow the water through the coffee bed.

From a physical perspective, the major difference is that the temperature of the coffee particulates is higher in the full immersion system. The slowest part of coffee extraction is not the rate at which compounds dissolve from the particulate surface. Rather, it’s the speed at which coffee flavor moves through the solid particle to the water-coffee interface, and this speed is increased with temperature.

A higher particulate temperature means that more of the tasty compounds trapped within the coffee particulates will be extracted. But higher temperature also lets more of the unwanted compounds dissolve in the water, too. The Specialty Coffee Association presents a flavor wheel to help us talk about these flavors – from green/vegetative or papery/musty through to brown sugar or dried fruit.

Pour-overs and other flow-through systems are more complex. Unlike full immersion methods where time is controlled, flow-through brew times depend on the grind size since the grounds control the flow rate.

The water-to-coffee ratio matters, too, in the brew time. Simply grinding more finely to increase extraction invariably changes the brew time, as the water seeps more slowly through finer grounds. One can increase the water-to-coffee ratio by using less coffee, but as the mass of coffee is reduced, the brew time also decreases. Optimizing filter coffee brewing is hence multidimensional and trickier than full immersion methods.

Other variables to try to control

Even if you can optimize your brew method and apparatus to precisely mimic your favorite barista, there is still a near-certain chance that your home brew will taste different from the cafe’s. There are three subtleties that have tremendous impact on the coffee quality: water chemistry, particle size distribution produced by the grinder and coffee freshness.

First, water chemistry: Given coffee is an acidic beverage, the acidity of your brew water can have a big effect. Brew water containing low levels of both calcium ions and bicarbonate (HCO₃⁻) – that is, soft water – will result in a highly acidic cup, sometimes described as sour. Brew water containing high levels of HCO₃⁻ – typically, hard water – will produce a chalky cup, as the bicarbonate has neutralized most of the flavorsome acids in the coffee.

Ideally we want to brew coffee with water containing chemistry somewhere in the middle. But there’s a good chance you don’t know the bicarbonate concentration in your own tap water, and a small change makes a big difference. To taste the impact, try brewing coffee with Evian – one of the highest bicarbonate concentration bottled waters, at 360 mg/L.
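The neutralizing effect of bicarbonate can be put in rough numbers with simple stoichiometry: each bicarbonate ion can neutralize one proton from a monoprotic acid. A quick sketch using the Evian figure quoted above:

```python
# Acid-neutralizing capacity of Evian-strength brew water.
# Simple stoichiometry: one HCO3- ion neutralizes one proton.
hco3_mg_per_l = 360.0      # bicarbonate concentration quoted above, mg/L
hco3_molar_mass = 61.02    # g/mol for HCO3-

mmol_per_l = hco3_mg_per_l / hco3_molar_mass
print(f"~{mmol_per_l:.1f} mmol/L of acid-neutralizing capacity")  # ~5.9
```

Nearly 6 millimoles per liter is enough to blunt much of the acidity a brew extracts, which is why such water tends toward a flat, chalky cup.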

The particle size distribution your grinder produces is critical, too.

Every coffee enthusiast will rightly tell you that blade grinders are disfavored because they produce a seemingly random particle size distribution; there can be both powder and essentially whole coffee beans coexisting. The alternative, a burr grinder, features two pieces of metal with teeth that cut the coffee into progressively smaller pieces. They allow ground particulates through an aperture only once they are small enough.

There is contention over how to optimize grind settings when using a burr grinder, though. One school of thought supports grinding the coffee as fine as possible to maximize the surface area, which lets you extract the most delicious flavors in higher concentrations. The rival school advocates grinding as coarse as possible to minimize the production of fine particles that impart negative flavors. Perhaps the most useful advice here is to determine what you like best based on your taste preference.

Finally, the freshness of the coffee itself is crucial. Roasted coffee contains a significant amount of CO₂ and other volatiles trapped within the solid coffee matrix: Over time these gaseous organic molecules will escape the bean. Fewer volatiles means a less flavorful cup of coffee. Most cafes will not serve coffee more than four weeks out from the roast date, emphasizing the importance of using freshly roasted beans.

One can mitigate the rate of staling by cooling the coffee (as described by the Arrhenius equation). While you shouldn’t chill your coffee in an open vessel (unless you want fish finger brews), storing coffee in an airtight container in the freezer will significantly prolong freshness.
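The Arrhenius equation makes the benefit of freezing concrete. The sketch below uses an assumed, illustrative activation energy; real staling involves many reactions with different activation energies, so treat the result as order-of-magnitude only:

```python
import math

# Arrhenius sketch: how much slower does staling proceed in a freezer?
# The activation energy is an assumed, illustrative value.
R = 8.314          # gas constant, J/(mol*K)
Ea = 50_000.0      # assumed activation energy, J/mol (hypothetical)
T_room = 295.0     # ~22 C, in kelvin
T_freezer = 255.0  # ~-18 C, in kelvin

# k = A * exp(-Ea / (R*T)); the prefactor A cancels in the ratio
slowdown = math.exp(Ea / R * (1 / T_freezer - 1 / T_room))
print(f"Staling roughly {slowdown:.0f}x slower in the freezer")
```

With these assumed numbers the staling reactions run a couple of dozen times slower at freezer temperature, which is why airtight freezer storage buys so much shelf life.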

So don’t feel bad that your carefully brewed cup of coffee at home never stacks up to what you buy at the café. There are a lot of variables – scientific and otherwise – that must be wrangled to produce a single superlative cup. Take comfort that most of these variables are not optimized by some mathematical algorithm, but rather by somebody’s tongue. What’s most important is that your coffee tastes good to you… brew after brew.


The Cadillac of First Aid Kits Could Turn Civilians into Life-Savers | WIRED

EARLIER THIS YEAR, Collin Smith came into possession of an “intelligent” first aid kit. The first thing he did was try to outsmart it.

The kit in question was the Comprehensive Rescue System, a sturdy, gray, 17-pound case of supplies custom-built by emergency management startup Mobilize Rescue Systems. It contains gauzes, bandages, and ointments like any first-aid kit, but also carries tourniquets, chest seals, and QuikClot—the kind of stuff you hope you’ll never have to use, but that can keep someone with severe injuries alive while they’re waiting on an ambulance.

But a first aid kit is only as effective as the person using it, which is why Smith wasn’t interested in the supplies so much as in the iPad embedded in the kit’s lid, which came installed with an interactive app that distills some 1,600 pages of triage and emergency-response decision trees drawn up by Mobilize Rescue’s team of SWAT and military medics, emergency medicine physicians, and EMS providers.

Smith, who oversees the Colorado School of Mines’ Energy, Mining, and Construction Industry Safety Program, has worked as a mine rescue trainer or team member for close to a decade. Having dealt firsthand with everything from heart attacks to crushed limbs, he immediately recognized the kit’s potential when he saw it at an industry conference in February. (The kit launched in December 2016.) The information in the app is presented in a series of simple, on-screen prompts designed to identify and treat the most serious injuries first. The goal: Make it as easy as possible for bystanders to provide lifesaving care to trauma victims.

“On remote job sites, a paramedic is almost always more than 20 minutes away,” Smith says. “And depending on the injury, you may not have 20 minutes.” Sure, you can train employees in first aid for severe trauma. But training wears off, and emergencies are stressful. “In a high-pressure scenario, you might not remember what you were taught six months ago, so it helps to be guided through it.”

Traumatic injuries have killed more than 2 million US civilians since 2001 and are the leading cause of death among Americans below the age of 47. Roughly half those deaths occur at the place of injury or on the way to a hospital. The National Academies of Sciences, Engineering, and Medicine estimates that, of the 147,790 deaths from trauma in 2014, as many as 30,000 could have been prevented with better, faster medical care. Mobilize Rescue designed the Comprehensive Rescue System with these statistics in mind.

To see if the kit functioned as advertised, Smith and his three-person team of emergency medical specialists quizzed it with scenarios involving severe breaks, burns, bleeds, and traumas that are difficult to diagnose, or whose treatment protocols had recently changed. Injuries like a severe lower-leg break with an arterial bleed. Smith says the protocol used to be to apply direct pressure to the wound, then to a major artery in the groin. “But even experts have a hard time finding that pressure point. You’re sitting there digging into the victim’s groin while they lie there screaming and bleeding out,” Smith says. “The new protocol says: If direct pressure doesn’t work, move straight to a tourniquet.” Which is exactly what the kit prescribed.

Smith and his team spent three weeks conducting this initial round of tests. They ran hypothetical, pen-and-paper scenarios with firefighters, and brought in a training dummy with moulage to test laypersons with zero medical experience. “The kit handled everything we threw at it,” Smith says.

Smith was so impressed, he’s incorporated the kit into his program’s Mine Safety and Health Administration training courses, and advocates for its adoption throughout the mining industry. “Something like this, especially for remote locations, could be as valuable as fire extinguishers or sprinkler systems, when it comes to increasing your probability of survival,” he says. “It’s a big deal. I would not be surprised if you saw something like this become standard at job sites in the future.”

What sets Mobilize Rescue’s kit apart from other heavy-duty kits is the relationship between its contents and the interactive app. The screen displays color-coded illustrations, animations, and planograms to help users locate supplies in the kit’s bottom half and provide step-by-step instructions for their use. (With the exception of a metronome that plays during the instructions for CPR, the kit provides no audio cues.) Those supplies are arranged according to whatever will kill you fastest, in an inverted-horseshoe shape.

“You can bleed to death in three minutes—that’s way faster than you’ll choke to death, so tourniquets, labeled red, go in the bottom left,” says Chris Strattner, Mobilize Rescue’s head of product development. Above those, marked in yellow, are pressure dressings and packets of QuikClot. Chest seals, labeled green, go above that. Moving across the top of the kit, you’ll find things like glucose, a CPR mouth-shield, and burn dressings, labeled pink, blue, and gray, respectively. Matching the kit’s contents to the on-screen instructions is like the easiest game of Apples-to-Apples you’ve ever played—only, you know, with higher stakes.

The kit’s got supplies for less critical situations, too. Splints, cold compresses, that kind of stuff. “And then, on the bottom right, there’s a little pouch in there with a bunch of bandaids. Because if you don’t have enough bandaids in your workplace, the OSHA guys will come and put you in OSHA jail,” says Strattner.

The fact that the Comprehensive Rescue System is approved by OSHA and the American National Standards Institute means that Mobilize Rescue can sell these kits not just to industry types, but schools, offices, airports, stadiums, and malls—which are also some of the only places that will be able to pay for them. The kit’s biggest drawback is its price: $2,250 for the hard-case model, $1,750 for the more portable soft-case. (The company does offer a smaller, sparser, $180 kit, the app for which runs on the user’s cell phone—but it’s a less impressive product, holistically.)

“It’s the Cadillac of first aid kits, but not everybody can afford a Cadillac, and affordability is everything when it comes to consumer-level first aid kits,” says Dave Hammond, who has been designing color-coded, audio-guidance first aid kits for more than 20 years. “It’s an exceptional product, just very expensive. I mean, it’s comparable in price to an automated external defibrillator.”

Which might actually be a fitting comparison. The automated external defibrillator’s design made it possible for someone with no prior training to shock and resuscitate a cardiac arrest victim. “It was the single greatest intervention for decreasing out-of-hospital deaths from cardiac arrest,” says Eric Goralnick, the medical director of emergency preparedness at Brigham and Women’s Hospital. “Today, we need to find the equivalent to the AED for hemorrhage control.”

On this front, Goralnick says Mobilize Rescue’s kit looks very promising. So promising, he’s currently designing a study to test the kits under tightly controlled conditions—similar to what Smith did, only more rigorous. “When you first pop this thing open and see the way it’s designed, the way the iPad walks you through all these steps—it’s impressive. It’s simple, clean, with clear descriptions. Comprehensive, too. It can do more than just hemorrhage control. It looks wonderful, very innovative, and I think solutions like this are certainly the future of first aid.”

“It’s exciting,” he says. “But now we’ve got to do our due diligence and test it.”


Galactic Map of Every Human Radio Broadcast – How Far Have Our Signals Traveled Into Space?


Carl Sagan’s famous line from his 1990 speech about the Pale Blue Dot image—”Our planet is a lonely speck in the great enveloping cosmic dark”—is an understatement. We might consider our Milky Way, with its estimated 100 to 400 billion stars, a significant fixture in the cosmos. But there are some 100 billion galaxies just like it in the observable universe. It’s a daunting reality to consider when we’re thinking about the possibility of making contact with any intelligence that might be out there.

This map designed by Adam Grossman of The Dark Sky Company puts into perspective the enormity of these scales. The Milky Way stretches between 100,000 and 180,000 light-years across, depending on where you measure, which means a signal broadcast from one side of the galaxy would take 100,000 years or more to reach the other side. Now consider that our species started broadcasting radio signals into space only about a century ago. That’s represented by a small blue bubble measuring 200 light-years in diameter surrounding the position of the Earth. For any alien civilizations to have heard us, they must be within the bubble.

The very first experimentation with electromagnetic radiation was conducted some 200 years ago, when Danish physicist and chemist Hans Christian Ørsted discovered that electric currents create magnetic fields. This research was expanded by scientists including Michael Faraday, and it eventually resulted in James Clerk Maxwell’s theory of electromagnetism outlined in 1865 and demonstrated by German physicist Heinrich Hertz’s experiments more than two decades later. Even then, it wasn’t until Italian inventor and electrical engineer Guglielmo Marconi developed long-range radio transmission technologies around the turn of the 20th century that our species really started broadcasting its existence out into the void.

If we are optimistic, and we assume an advanced extraterrestrial species has the technological capabilities to detect humanity’s very first radio waves (and distinguish them from the general background noise of the universe), we can estimate our farthest signals are a little more than 100 light-years away. If you threw a dart at the map of the Milky Way, and wherever that dart landed is where an advanced alien species resides, there would be a cosmically small probability that they live close enough to be aware of our existence. Even if you threw 100 darts, it’s a near certainty that none would land in the little blue bubble of our radio waves.
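The dart-throwing odds above can be checked with round numbers. The sketch below uses an assumed, simplified model: it treats the galactic disk and our radio bubble as flat circles (a rough 2-D approximation of the dart analogy, ignoring the disk’s thickness and the uneven distribution of stars).

```python
# Assumed, simplified numbers for the dart analogy.
GALAXY_RADIUS_LY = 50_000   # half of a ~100,000 light-year diameter
BUBBLE_RADIUS_LY = 100      # roughly a century of radio broadcasts

def hit_probability(darts):
    """Chance that at least one of `darts` uniformly random throws
    at the galactic disk lands inside our radio bubble."""
    p_single = (BUBBLE_RADIUS_LY / GALAXY_RADIUS_LY) ** 2  # ratio of circle areas
    return 1.0 - (1.0 - p_single) ** darts

print(f"one dart:  {hit_probability(1):.2e}")    # about 4 in a million
print(f"100 darts: {hit_probability(100):.2e}")  # still about 4 in 10,000
```

Under these assumptions a single dart has about a four-in-a-million chance of landing in the bubble, and even 100 darts leave better than 99.9 percent odds of missing entirely, which is the article’s point.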

The Search for Extraterrestrial Intelligence (SETI) Institute is constantly listening with our most capable radio telescopes, and they are broadcasting messages from us as well. But given the sheer size of the galaxy, SETI will likely have to listen and transmit for tens of thousands of years at least to have a chance of making contact with another intelligent species—and even that might not be long enough. Perhaps, in the meantime, we should contemplate Carl Sagan’s next line in his Pale Blue Dot speech:

“In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.”

[Given what we have been broadcasting this is a good thing.]


99 percent of microbes in your body are completely unknown to science


Whenever you feel lonely, just remember: you’re always carrying several hundred trillion friends with you. A dizzying number of microbes call the human body home, and it turns out that science knows very little about most of them. In fact, a new Stanford survey of the foreign DNA fragments circulating in the human body has found that 99 percent of microbes inside us are completely unknown to science.

The discovery was initially made by accident, as a team investigated less invasive ways to predict whether a patient’s body would reject a transplanted organ. Rather than the wholly unpleasant experience of having a tissue biopsy taken, the researchers were studying whether a simple blood sample would suffice. Essentially, the idea was that if they found fragments of the organ donor’s DNA circulating in a patient’s blood, it was a good indication that the body was rejecting the transplant.

Along with the patient’s DNA and potentially that of the organ donor, the technique gives an insight into that person’s microbiome – the trillions of bacteria, viruses and other microbes that live throughout the body. Of all the non-human DNA floating around in there, the team found that a staggering 99 percent didn’t match anything in existing genetic databases.

“We found the gamut,” says Stephen Quake, senior author of the study. “We found things that are related to things people have seen before, we found things that are divergent, and we found things that are completely novel. I’d say it’s not that baffling in some respects because the lens that people examined the microbial universe was one that was very biased.”

The team then set about categorizing that pile of unknown DNA, and found that most of it belonged to a general group known as proteobacteria, which counts E. coli and Salmonella among its ranks, along with many, many others. On the virus side of things, the team found a huge amount of previously unknown members of the torque teno family, including an entirely new group that doesn’t quite fit current descriptions.

“We’ve doubled the number of known viruses in that family through this work,” says Quake. “We’ve now found a whole new class of human-infecting ones that are closer to the animal class than to the previously known human ones, so quite divergent on the evolutionary scale.”

With so many microbes living in the human body, it’s hardly surprising that science hasn’t gotten around to identifying them all, and the researchers say that attention is largely focused on a few particularly interesting species. The next step, the team says, is to apply the technique to the microbiomes of other animals in order to identify viruses that could potentially jump to humans and trigger pandemics, like avian and swine flu.


The Upside of Neuroticism – Pacific Standard


Neurotic people, by definition, spend much of their lives in a dark mood. Given that positive emotions are associated with good health, it’s reasonable to assume that all that guilt, anger, and anxiety will eventually lead to an early grave.

Well, surprise: A sizable new study from Great Britain reports that, for many neurotics, the opposite is true.

Among two large subsets of participants, “higher neuroticism was associated with reduced mortality from all causes,” writes a research team led by Catharine Gale of the University of Edinburgh.

This welcome effect apparently depends upon how one’s neuroticism manifests, and what actions it propels one to take.

The study, published in the journal Psychological Science, featured 321,456 people who were registered in the U.K. Biobank, a health-related resource designed to determine the causes of disease in middle-aged and older people. All were between the ages of 37 and 73 when they enrolled between the years 2006 and 2010.

Participants filled out a standard questionnaire identifying neuroticism. They also rated their health on a scale from “excellent” to “poor,” and reported whether they engaged in various health-related behaviors, including smoking, drinking, and exercise.

By the end of the study period, in June of 2015, 4,497 of them had died. Using official death certificates, the researchers noted the cause of their demise.

They report that “higher neuroticism was associated with lower mortality,” both in general and due to cancer, cardiovascular disease, and respiratory disease. But this was true “only in those people with fair or poor self-rated health.”

In other words, if you have pre-existing health issues—or at least think of yourself as an unhealthy person—neurotic tendencies seem to have a protective effect against premature death.

This was “not explained by the health behaviors we assessed (smoking, exercise, fruit and vegetable intake, and alcohol consumption),” Gale and her colleagues write. So what does explain it?

“It may be that individuals with higher neuroticism are more vigilant about their health if they perceive it to be less than excellent,” they write.

The researchers also delineated between two types of neuroticism. People who gave strongly affirmative answers to such questions as “Would you call yourself a nervous person?” and “Would you call yourself tense or highly strung?” were labeled “anxious-tense.”

Those with high scores on another group of questions, including “Are your feelings easily hurt?” and “Are you ever troubled by feelings of guilt?” were classified as “worried-vulnerable.”

No matter their self-reported health, “Higher scores on the worried-vulnerable facet were associated with a reduced risk of death from all causes,” they write. However, this was not true among those in the “anxious-tense” category; their form of neuroticism was not related to mortality either way.

If you assume that worried people make more visits to the doctor, this finding adds weight to the aforementioned heightened-vigilance thesis. “The propensity to seek medical help in response to worries about health could plausibly result in earlier identification of cancer, and greater likelihood of survival,” the researchers note.

So if you’re fretting about that darkened patch of skin on your arm, it might drive you crazy—but it could also propel you to the dermatologist, who can remove it if your worst fears turn out to be true.

Of course, that depends on whether you have access to health care, which is a different worry altogether.