Simple blood test might predict onset of Alzheimer’s – Futurity

A set of biomarkers found in blood samples seems to predict with about 85 percent accuracy whether a person will develop Alzheimer’s disease.

The findings, published in the Journal of Alzheimer’s Disease, are based on a study of 292 people with early signs of memory problems.

“Our research proves that it is possible to predict whether or not an individual with mild memory problems is likely to develop Alzheimer’s disease over the next few years,” says Paul Morgan, a professor and director of Cardiff University’s Systems Immunity Research Institute. “We hope to build on this in order to develop a simple blood test that can predict the likelihood of developing Alzheimer’s disease in older people with mild, and possibly innocent, memory impairment.”

Researchers took blood samples from people presenting with common symptoms of memory impairment and measured a large number of proteins from a part of the immune system that is known to drive inflammation and has previously been implicated in brain diseases.

When the individuals were re-assessed a year later, about a quarter had progressed to Alzheimer’s disease and three of the proteins measured in their blood showed significant differences from the blood of participants who did not go on to develop the disease.

“Alzheimer’s disease affects around 520,000 people in the UK and this number is continually growing as the population ages,” says Morgan. “As such it is important that we find new ways to diagnose the disease early, giving us a chance to investigate and instigate new treatments before irreversible damage is done.”

These findings have laid the groundwork for a much larger, ongoing study, funded by the Wellcome Trust and involving several UK universities and pharmaceutical companies, that aims to replicate the findings and refine the test.


Will Pedestrians Be Able to Tell What a Driverless Car Is About to Do? – The Atlantic

A fully autonomous self-driving car doesn’t really need a steering wheel, or a rearview mirror, or even windows to get where it’s going. But the first models are still likely to have them. (And not just because such features could be legally required.)

In the coming years and decades, as the public decides how to feel about autonomous cars, the way that self-driving vehicles appear will be arguably as important as how they function. And people, Americans in particular, have clearly defined expectations about what cars ought to look like.

“When we’re looking at new devices, you could make them anything, right? Any shape, any form,” said Robert Brunner, the industrial designer who worked for many years at Apple and now runs his own design studio. “But we’re also trying to get people to relate to and understand the technology.” Self-driving vehicles, he says, should feel inviting and friendly, and should inspire confidence. The way to do this might be to follow Google’s lead, and make driverless cars cute. At the very least, Brunner told me, the ideal self-driving car probably shouldn’t be a “black menacing thing with lots of red lights.”

Engineers and designers will also have to take into account some of the new challenges that accompany driverlessness. For instance: How will self-driving vehicles communicate with human drivers, pedestrians, and bicyclists? The use of blinkers, brake lights, and hazard lights can be automated, surely, but there are many human gestures and cues that are a crucial part of how people navigate the roads—eye contact, the waving of a hand at an intersection—which a machine can’t precisely emulate.

“There are ways of drastically reducing the level of complexity of these systems and making them logical and understandable and reliable,” said Sam Arbesman, the author of Overcomplicated: Technology at the Limits of Comprehension. “The problem is—because of the fact we build the new on top of the old—we aim for these really, really pristine constructions that are built with all these design practices and principles, but things cannot be perfect.”

A more slippery existential problem is that new ways for driverless cars to communicate with pedestrians will only work if people respond to them. But getting people to respond to a new kind of design signal—just getting them to understand it in the first place—is iffy at best.

“People hate ambiguity and unpredictability,” said Chris Rockwell, the CEO and founder of Lextant, a design consulting firm. “I don’t care if it’s your toaster or your car; if you’re confused, you’re not having a great experience. And if it acts in strange or unpredictable ways, it’s not acceptable.”

The trouble is, people are unpredictable. So designing new ways for machines to communicate with them isn’t exactly straightforward. Many ideas for new communications systems have been proposed—driverless cars might feature audible chimes, voice instructions, or text displays to communicate their next moves—but few if any such systems have been tested. “The ideas aren’t the problem; it’s raining ideas,” Rockwell said. “The challenge is really understanding what problem we’re solving. These are human systems, ultimately.”

“From our standpoint, autonomous vehicles and self-driving systems will happen,” he added. “It’s kind of an inevitability. But the challenge won’t be around the technology as much as it will be around the psychology. It’s going to be critical to gain trust—and that trust can be designed into these systems. Trust not only with the passengers, but also the pedestrians outside.”

In an attempt to better understand how pedestrians might respond to self-driving vehicles, roboticists at Duke recently carried out an experiment comparing the effectiveness of several different prototypes for vehicle-to-pedestrian communication. (They detailed their findings in a paper that’s now under review for presentation at the Transportation Research Board’s annual meeting.)

The researchers used a van meant to look like a driverless vehicle, and outfitted it with a large display that could feature “walk” and “don’t walk” signals, as well as a numeric display of the speed at which the vehicle was traveling. “The idea was that the participants would use the speedometer to determine whether it was safe to cross,” said Michael Clamann, a roboticist at Duke and one of the lead authors of the paper. “Reading ‘0’ would be the safest, but the objective was to provide a display that would indicate the vehicle was decelerating.”

As it turned out, most pedestrians ignored the newfangled display, whichever iteration was used. Pedestrians were more likely to rely on “legacy behaviors”—like eyeballing an approaching car’s speed and inferring how quickly to dart across the street—than on external displays.

“As we start the transition to driverless vehicles, designers need to be aware that people will rely on old habits when interacting with the new technologies,” Clamann said. Part of the problem is that for a display to be useful, the text has to be huge. And even when it’s big enough, a lot of people seemed to ignore it anyway. To be visible from a distance of 100 feet, a single letter would need to be six inches tall and nearly four inches wide. “So a screen designed to display a simple message like ‘safe to cross’ without scrolling horizontally would require a screen at least 47 inches wide,” the researchers wrote. From 200 feet away, the same message would have to be over 100 inches wide, wider than most cars.
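The screen-size figures the researchers cite scale linearly with viewing distance, so the arithmetic is easy to sketch. The following is an illustrative back-of-the-envelope calculation, not code from the study; the 6-inches-per-100-feet legibility rule and the roughly 2:3 letter width-to-height ratio are assumptions taken from the numbers quoted above.

```python
def letter_height_in(distance_ft):
    """Letter height (inches) needed for legibility, assuming the
    rule of thumb above: ~6 inches tall to be readable from 100 feet,
    scaling linearly with distance."""
    return 6.0 * distance_ft / 100.0

def screen_width_in(message, distance_ft):
    """Minimum display width (inches) for a non-scrolling message,
    assuming each character is ~2/3 as wide as it is tall (6 in tall,
    ~4 in wide at 100 ft, per the article)."""
    height = letter_height_in(distance_ft)
    return len(message) * height * (2.0 / 3.0)
```

Plugging in “safe to cross” gives a display on the order of 50 inches at 100 feet and over 100 inches at 200 feet, consistent with the researchers’ point that a legible message quickly becomes wider than the car carrying it.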

“Right now, a pedestrian communicates with a driver. In the future, this communication will be between a human and a machine, which is an area that requires exploration and careful design decisions,” Clamann told me. “We learn from birth how to communicate with other people, but communicating with machines is a very different skill. We need to make sure the displays and signals work as intended before we release them.”

The team at Duke found no significant differences between any of the 35 different displays they tested, meaning each was “as effective as the current status quo of having no display at all.”

Nearly 5,000 pedestrians were killed by cars in 2014 in the United States alone, according to data from the National Highway Traffic Safety Administration. Driverless cars—with their famously sterling safety records—may be able to reduce those numbers significantly. Still, about half of the pedestrian deaths in the 10-year period ending with 2014 occurred because the pedestrian ran into the road, failed to yield to a vehicle with the right of way, or otherwise crossed the street improperly, the Duke researchers said. Even the best-programmed autonomous cars will be unable to prevent every pedestrian death unless those vehicles can find a way to prompt safer pedestrian behaviors. In other words, with self-driving cars facing a critical test period for the public’s trust, the status quo isn’t going to be good enough.

Robin Mills: Region quenches oil demand with end to subsidies | The National

Which region has contributed the most to growth in oil demand so far this century? China, of course, which gained 7.3 million barrels per day of consumption between 2000 and last year. Which is second? Maybe India, or Latin America? No, it was the heart of global oil exports, the Middle East, whose demand for its own crude swelled by 4.4 million barrels per day.

Swelling economies under the stimulus of high prices drove the Middle East’s oil thirst, as new gas guzzlers cruised freshly built highways. Lavish consumption was encouraged by subsidies, which kept Middle Eastern fuel prices the lowest in the world. Those same subsidies discouraged gas development, so some countries turned to burning oil for electricity. So 240 million people in the Middle East managed to increase their demand by more than did 3 billion Asians outside China.

Now all those factors have gone into reverse. The slowing of the region’s oil demand is a further factor prolonging the current oil price slump.

While prices stay low, regional growth will be moderate at best. This year, it is forecast at 2.3 per cent, respectable by global standards but the lowest since the 2009 financial crisis. Next year may be a little stronger, with 3 per cent expansion.

With an end to the boom, attempts to cut oil, gas and electricity subsidies have accelerated. Iran moved first in December 2010, with a wide-ranging reform of all energy prices coupled with cash compensation to families. The reform was initially successful, but high inflation, currency depreciation and deep recession undermined its effects and led to further rounds of price adjustment.

Oil-importing nations in the Middle East and North Africa were badly hit by soaring prices, and in response Morocco, Jordan and Tunisia have essentially moved to market-based energy prices.

Arab oil exporters were the last to move, but the UAE decided to set its fuel prices at international levels from August last year, Saudi Arabia raised prices in December although they remain heavily subsidised, and Bahrain, Oman and Qatar largely eliminated petrol and diesel subsidies in January. Algeria too has raised prices in this year’s budget.

Amongst the hold-outs, Kuwait tried to cut subsidies last year but reversed the decision in the face of parliamentary opposition. The country raised diesel prices in January and will try again with petrol from next month. Fiscal pressures and IMF loan conditions for Iraq are likely to encourage subsidy reform there too.

What effect has this all had? Analysts trying to understand the twin impacts of slowing economies and subsidy reform have to read the tea-leaves – firm data is patchy and late. What the available numbers do show is that, in the months following their reforms, petrol consumption in Bahrain and Qatar fell slightly; in Oman, sharply. Saudi overall oil demand is down by 2 per cent this year, partly replaced by new gas supplies to power plants.

Diesel use in Egypt fell by 23 per cent in the month after its July 2014 reform, such a steep drop that it suggests fuel was probably being smuggled out of the country before. Only this summer did demand again reach June 2014 levels.

If Middle Eastern economic growth rebounds, does this mean that oil demand will soar again? Probably not. Subsidy reform has become entrenched policy in most of the major consuming countries, which have budgets to defend and repair. New gas supplies and alternative energy should slow the use of oil for electricity in Saudi Arabia and Iraq, and largely eliminate it elsewhere.

Reducing the region’s unsustainable surge in oil demand has been financially and environmentally essential for years. The full impact of an end to subsidies will ripple through the economy in the years to come. But as oil not used at home goes for export, the Middle East is doing its part in prolonging the global glut.

Hackers Stole Account Details for Over 60 Million Dropbox Users | Motherboard

Hackers have stolen over 60 million account details for online cloud storage platform Dropbox. Although the accounts were stolen during a previously disclosed breach, and Dropbox says it has already forced password resets, it was not known how many users had been affected, and only now is the true extent of the hack coming to light.

Motherboard obtained a selection of files containing email addresses and hashed passwords for the Dropbox users through sources in the database trading community. In all, the four files total around 5GB and contain details on 68,680,741 accounts. The data is legitimate, according to a senior Dropbox employee who was not authorized to speak on the record.

Earlier this week, Dropbox announced it was forcing password resets for a number of users after discovering a set of account details linked to a 2012 breach. The company did not publish an exact figure on the number of resets, and said it had taken the move proactively.

“Our security teams are always watching out for new threats to our users. As part of these ongoing efforts, we learned about an old set of Dropbox user credentials (email addresses plus hashed and salted passwords) that we believe were obtained in 2012. Our analysis suggests that the credentials relate to an incident we disclosed around that time,” the company wrote.

The accounts relate to that same data breach. Motherboard was provided the full set by breach notification service Leakbase, and found many real users in the dataset who had signed up for Dropbox around 2012 or earlier.

“We’ve confirmed that the proactive password reset we completed last week covered all potentially impacted users,” said Patrick Heim, Head of Trust and Security for Dropbox. “We initiated this reset as a precautionary measure, so that the old passwords from prior to mid-2012 can’t be used to improperly access Dropbox accounts. We still encourage users to reset passwords on other services if they suspect they may have reused their Dropbox password.”

Nearly 32 million of the passwords are secured with the strong hashing function bcrypt, meaning it is unlikely that hackers will be able to obtain many of the users’ actual passwords. The rest of the passwords are hashed with what appears to be SHA-1, an older, weaker algorithm. These hashes also seem to have used a salt; that is, a random string added to the password hashing process to strengthen them.
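To make the distinction concrete, here is a minimal sketch of a salted-SHA-1 scheme like the one apparently protecting the older portion of the dump. The function names are illustrative, not Dropbox’s actual code, and the bcrypt calls shown in the trailing comment use the third-party `bcrypt` package.

```python
import hashlib
import os

def hash_password_sha1(password, salt=None):
    """Salted SHA-1, roughly the weaker of the two schemes described.
    A per-user random salt defeats precomputed rainbow tables, but
    SHA-1 itself is very fast, so large-scale brute-force guessing
    against stolen hashes remains practical on modern hardware."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.sha1(salt + password).hexdigest()
    return salt, digest

def verify_sha1(password, salt, digest):
    """Recompute the salted hash and compare against the stored digest."""
    return hashlib.sha1(salt + password).hexdigest() == digest

# bcrypt, covering the other ~32 million hashes, is deliberately slow
# and tunable via a work factor, which is why those passwords are far
# harder to crack. With the third-party `bcrypt` package:
#   import bcrypt
#   hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
#   ok = bcrypt.checkpw(password, hashed)
```

The design difference matters more than the salt: salting only forces attackers to crack each account separately, while bcrypt’s configurable cost makes every single guess expensive.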

Dropbox has changed its password hashing practices several times since 2012, in order to keep passwords secure.

The Dropbox dump does not appear to be listed on any of the major dark web marketplaces where such data is often sold: the value of data dumps typically diminishes when passwords have been adequately secured. One hacker, however, told Motherboard that he or she was already in possession of the data.

This is just the latest so-called “mega-breach” to be revealed. This summer, hundreds of millions of records from years-old data breaches of sites such as LinkedIn, MySpace, and Tumblr were sold and traded amongst hackers.

Not using smartphones can improve productivity by 26%, says study | Business Standard News

Smartphones might be helping employees keep in touch with colleagues and do urgent tasks on the move, but using these devices at the workplace actually makes people less productive, says a new study by the Universities of Würzburg and Nottingham-Trent.

The study, commissioned by Kaspersky Lab, showed that employees’ performance improved 26 per cent when their smartphones were taken away. The experiment tested the behaviour of 95 people aged 19 to 56 in laboratories at the universities of Würzburg and Nottingham-Trent.

The experiment unearthed a correlation between productivity levels and the distance between participants and their smartphones. “Instead of expecting permanent access to their smartphones, employee productivity might be boosted if they have dedicated ‘smartphone-free’ time. One way of doing this is to enforce rules such as no phones in the normal work environment,” says Altaf Halde, managing director – South Asia at Kaspersky Lab.

Contrary to expectations, the absence of smartphones didn’t make participants nervous. Anxiety levels were consistent across all experiments. However, in general, women were more anxious than their male counterparts, leading researchers to conclude that anxiety levels at the workplace are not affected by smartphones (or their absence), but can be impacted by gender.

“Previous studies have shown that separation from one’s smartphone has negative emotional effects such as increased anxiety, but studies have also demonstrated that one’s smartphone might act as a distractor. In other words, both the absence and presence of a smartphone could impair concentration,” said Jens Binder from the University of Nottingham-Trent.

“Our findings from this study indicate that it is the absence, rather than the presence, of a smartphone that improves concentration,” says Astrid Carolus from the University of Würzburg.

The results of the experiment correlate with the findings of an earlier survey named ‘Digital Amnesia at Work’. In this survey, Kaspersky Lab demonstrated that digital devices can have a negative impact on concentration levels. It showed, for example, that typing notes into digital devices during meetings lowers the level of understanding of what is actually happening in the meeting.

This food bank doesn’t want your junk food. Good. – Vox

Nancy Roman, president and CEO of the Capital Area Food Bank, has been overwhelmed by cake. Week after week, dozens of frosted, layered confections arrive at her warehouse. A 5-year-old might think this is a dream come true. But for Roman, it’s a nightmare.

The cakes are donations meant to be passed along to the food bank’s end users — more than half a million residents of Washington, DC, and its Maryland and Virginia suburbs who don’t have enough to eat. Like other low-income Americans, many of these folks struggle with obesity, diabetes, heart disease, and high blood pressure. And lately, Roman has worried that sending them highly processed, sugary foods — which are energy-dense and nutrition-poor — isn’t going to help matters.

On September 1, the Capital Area Food Bank is going to do something to address its “incredible exploding warehouse of sheet cakes”: reject junk food donations — including the cake and other baked sweets, caloric sodas, and candy. It’s part of an effort to clean up its food supply, said Roman. “We have a moral obligation to not just get food to people — but the right food.”

With the move, DC’s largest food bank is joining a handful of others that are now turning away donations for the needy — from cookies to Kraft macaroni and cheese — on the basis of quality. These food banks are saying no to the salty, sugary, and fatty foods that will create a double tragedy for the hungry: driving up chronic disease in people who can’t afford healthy food.

It’s also the latest sign that more nutrition leaders — inside and outside of the health community — are beginning to take measures to address two seemingly unlikely bedfellows: hunger and obesity.

Why obesity so often stems from food insecurity

We often think about obesity as being related to having too much to eat. (The word comes from the Latin obesus — which means “having eaten until fat.”)

But there’s a growing body of evidence linking food insecurity (having too little or uncertain access to food) to obesity — particularly among women. (The data on whether food insecurity is a risk factor for obesity in men and adolescents is more mixed.)

The idea isn’t new. In 1995, a case report titled “Does Hunger Cause Obesity?” tracked an obese 7-year-old African-American girl who lived in a household that relied on food stamps. “At least two possibilities could explain the association of hunger and obesity in the same patient,” the author wrote. “In this family, the increased fat content of food eaten to prevent hunger at times when the family lacked the money to buy food represents the most likely reason for the association of obesity and hunger. An alternative possibility is that obesity may represent an adaptive response to episodic food insufficiency.”

Since then, researchers have come to accept this counterintuitive association — once called the “food insecurity obesity paradox” — and are finding that it has to do with a lot more than stocking up on fat.

Fruits and vegetables are more expensive than junk foods

There are several explanations for why food scarcity and obesity can coexist, and the first one to understand has to do with basic food prices.

“Historically, when we talked about the obesity-hunger paradox, it was described in terms of cost,” said Hilary Seligman, an associate professor at the UCSF School of Medicine. “Healthy food, calorie for calorie, costs more.”

As you can see in the chart below, when it comes to how many calories you get per dollar, sugar, vegetable oils, and refined grains deliver a higher bang for your buck than fruits and vegetables.

If your household income is low or you’re food-insecure, you’re probably going for the cheapest, highest-calorie options, which also happen to be designed to encourage overeating. But in the long run, it’s the nutrients in food (like fiber, vitamins, and minerals) that matter more for health than calories alone.

“To maintain adequate energy intake, many families with limited resources select lower-quality diets, including high-calorie, energy-dense foods,” Angela Odoms-Young, an assistant professor of nutrition at the University of Illinois at Chicago, explains. Fruit and vegetable consumption also goes down significantly as food-insecurity status worsens.

So at home, poor people aren’t getting enough quality food. Yet researchers have found that food donations too often fail to meet basic nutritional requirements. So when the needy turn to charities for food, their options can be similarly limited.

A lack of access to healthy food promotes binge eating

There’s a second proposed reason for why hunger leads to obesity, and it’s also pretty easy to understand: A lack of access to food may cause people to binge eat when they’re worried about where their next meal is coming from.

“Those who are eating less or skipping meals to stretch food budgets may overeat when food does become available, resulting in chronic ups and downs in food intake that can contribute to weight gain,” reads this 2015 review of the problem from the Food Research and Action Center. “Cycles of food restriction or deprivation also can lead to disordered eating behaviors, an unhealthy preoccupation with food, and metabolic changes that promote fat storage — all the worse when combined with overeating.”

So bingeing is a coping mechanism that works in the short term — to create buffers between food abundance and food shortages — but may increase the risk of obesity in the long run.

Food insecurity is linked to stress — which is an obesity driver

Living in a food-insecure household is extremely stressful, and obesity is strongly linked to stress.

“Lots of studies done in animal models find that if you stress an animal with a food insecurity–like state, they do release a bunch of different stress hormones,” explained Seligman. “Over time they also develop obesity and insulin resistance.”

That stress can also prime someone to want sugary, fatty, and other energy-dense foods. Here’s a nice summary of the science, according to a 2013 paper on food insecurity and chronic disease:

Evidence from animal models subjected to food scarcity as a stressor suggests that food intake is altered and a preference for high-fat, high-sugar foods is activated under stress conditions. Exposure to household food insecurity is associated with stress and depression, episodic food availability, household food shortages, and reliance on high energy-dense foods. A “new” model of comfort foods suggests that stress activates the hypothalamic-pituitary-adrenal axis, releasing cortisol which can alter metabolic processes. In addition to the stress pathway being activated, 2 other systems are activated: the hedonic (reward) pathway and memory. This comfort food model is based on observations that within a very short time period, animals learn that the high-fat, high-sugar foods are rewards that dampen the stress response. As a result, the animal seeks the same food the next time stress is introduced, even with much lower stress stimuli.

So those calorific foods mess with the metabolic response and reward system in the body, raising the risk of chronic diseases. Those foods are also just the kinds that happen to be cheap and readily available, especially at food banks that haven’t yet cleaned up their food supply.

How food banks can make a difference

Food banks are only one small part of the food system, but they interact with a group that is most vulnerable to food-related chronic diseases like obesity. So it should be clearer why some food banks are making an effort to offer quality calories, instead of just any calories that come their way through donations.

As Seligman said, “Everyone deserves access to healthy food, and to the extent that we can make that food available and easily accessible in every community in the US — and not just our wealthier communities — the better we will do at improving people’s quality of life, helping children develop palates for healthier foods, and preventing obesity and diabetes down the line.”

Making these changes won’t be easy. When Roman first floated the idea of the new policy, it wasn’t welcomed by all. “In the beginning, it took a lot of courage,” she said. “There are a lot of people who feel you can’t offend the donors.”

A big step involved getting retailers like Giant (which supply the food bank) on board. “They basically [do] the work of sorting out the bakery goods and not giving them as a contribution to the food bank,” she said, adding that many retailers have been really amenable.

Today, fully one-third of the food the Capital Area Food Bank gives to the poor is fruits and vegetables — a ratio that will now hopefully improve with the new policy. “We don’t want to be the food police,” Roman added. “We believe people are entitled to eat cakes. But we want [ours] to be a balanced offering.” If only other food banks would follow suit.

Finding a Better Way to Value Companies in the Digital World – Knowledge@Wharton

In a world rapidly switching to digital business models, the old ways of classifying industries and measuring business performances do not suffice anymore. So argue Barry Libert, CEO of OpenMatters, Megan Beck, the chief insights officer, and Wharton marketing professor Jerry (Yoram) Wind. In this opinion piece, they say it’s time to upgrade Standard & Poor’s Global Industry Classification Standard (GICS) and GAAP (Generally Accepted Accounting Principles) to more fully reflect the value of intangible assets as digital companies take the lead in this economy. The top market cap rankings in the S&P 500 already show this shift, with tech titans coming out on top and supplanting industrial firms.

These days, every company either is or must become a digital organization if it wants to survive and grow in the age of platforms and networks. But getting there is no cakewalk. The journey requires leaving behind old mental models of industry and value.

A decade ago, the five most valuable companies on the Standard & Poor’s 500 Index were Exxon, GE, Microsoft, Gazprom and Citigroup. Today, the ranking has radically changed. The index’s top five most valuable companies are in tech: Apple, Alphabet (parent company of Google), Amazon, Microsoft, and Facebook.

What is so amazing is not just the fact that these five companies have risen to such heights in market cap, but also the following:

  1. They are all American companies.
  2. They rose relatively quickly to reach top market valuations.
  3. They supplanted asset-based organizations, which experienced an astonishing decline.

Among the five, Apple, Alphabet, Facebook and Microsoft are designated by S&P as information technology companies. Only Amazon is considered a consumer discretionary business. But it is arguable whether Amazon is truly a consumer discretionary company, like Wal-Mart. Indeed, Amazon is the only consumer discretionary company left in the S&P’s top 10 most valuable companies. But is Amazon’s classification correct?

And how about the other four — are they all the same type of high-tech company? To understand the context of this question — whether today’s industry classifications make sense in a flat world — one need only look back at the history of industries and the way value has been measured.

Industry Silos Are Out, Business Models Are In

In 1957, oil had an outsized impact on the U.S. economy. Thus, the biggest energy company was also the most valuable company: Standard Oil of New Jersey — an ancestor of Exxon Mobil. No pure tech company ranked in the top 10 back then, not even IBM, which soon became the dominant tech company — a position it held until Microsoft surpassed it in the mid-1990s. Indeed, half a century ago, it was all about physical assets, manufacturing and distribution of material goods — such as GE appliances and locomotives.

But this is a new time and place — one dominated increasingly by horizontal digital platforms, virtual networks and Big Data that span traditional industry verticals. And new companies such as Facebook, LinkedIn, Uber and Airbnb have sprung up to take advantage of today’s new opportunity — to build horizontal platforms that leverage the assets of you and me and what we have (cars and homes), do (drive) and know (friends).

The 10 main industry allocations by S&P — or Global Industry Classification Standard (GICS) — have become antiquated in a world that is flattening, a phenomenon first depicted by New York Times columnist Thomas Friedman in his bestselling book, The World Is Flat. S&P’s classifications are based on an old-style vertical or silo approach to corporate designations. These outdated groupings also affect where investors put their money.

However, the five tech leaders are pan-company, pan-industry and even pan-country, and as a result, they dominate the physical and material world and supplant the old way of thinking as evidenced by the GICS. For example, materials and energy companies, which together constituted almost half of the total value of the S&P 500 in 1957, now account for less than 10%. Even that level is not sustainable if they want to compete for investor dollars and employee talent in an increasingly digitally dominated and flat world. But that’s only half the story.

Time for a New Mental Model

Looking back, it wasn’t long ago that huge companies (at least by the traditional measures of sales and assets) such as GE, GM and Exxon Mobil were the ones the market considered most valuable. Subsequently, those asset behemoths were replaced by financial-services giants (Citigroup and JPMorgan Chase), which had fewer physical assets. In time, services firms fell from the top spots as investors came to value intellectual and network capital more than physical capital and services.

To put this massive value shift in context, not only is the current classification system outdated, so is the measurement system that supports it: GAAP, or Generally Accepted Accounting Principles. In many ways, there is a massive gap — pun intended — in GAAP. According to financial consultancy Ocean Tomo, in 1975, 83% of the market value of the S&P 500 came from companies whose businesses were based on physical, or tangible, assets. By early 2015, that figure had fallen to 16%.

To help companies pivot their business models to compete with the tech giants, it’s time to update the 50-year-old GICS as well as GAAP. We all need to recognize that Facebook’s 1.7 billion monthly active users are as valuable an asset as GM’s or GE’s physical assets — if we want to help investors, customers and employees navigate the new tech landscape with better insights.

Further, every firm will in time need to become a digital firm that leverages intangibles, virtual networks and technology platforms if it wants to compete in a virtual, non-material world characterized by virtual and augmented reality. It’s time to fix the underlying systems that guide our every action: our industry classification and accounting systems. Together, these two systems miss the point when they categorize today’s most valuable companies without prioritizing intangible assets. Moreover, they put companies like Amazon in the wrong bucket.

What the tech giants can do is bring the world of accountants and regulators into the 21st century of horizontal business models and intangible assets. If the standard setters don’t adapt, investors will circumvent them, and newer, more modern standards will emerge to disrupt them, just as disruptors are emerging in other industries.

Tolerance Used to Predict Infection’s Death Toll | Quanta Magazine

When we get the flu, we feel miserable. We swallow pain relievers, drink lots of tea, slurp down chicken soup. None of these treatments actually eradicates the flu virus itself; our immune system eventually takes care of that. Instead, these remedies make us feel better by alleviating the symptoms: inflammation, dehydration and congestion. “Most of what makes us sick is actually inflammation — the immune response — not the pathogen itself,” said Ruslan Medzhitov, an immunologist at Yale University.

Yet while scientists have carefully chronicled the damage that the immune system can wreak on the body, they have paid much less attention to the mechanisms in place to repair it. “We spend a lot of our time figuring out how to stop the disease, but the real problem is how to get better, how to recover,” said David Schneider, an immunologist at Stanford University. “It’s possible that getting better is a different thing, not just the reverse of getting sick.”

Schneider and others have begun to study the recovery process on its own, arguing that it is just as essential a component of the immune system as the body’s attempts to eradicate foreign pathogens. They have divided the immune response into two basic categories: the traditional part, dubbed resistance, which fights the pathogen itself; and the less-studied part, called tolerance, which aims to curb or repair the damage inflicted by the pathogen or by resistance mechanisms. The research that they have published in the last few years hints that tolerance may be a crucial factor in whether individuals will survive infections such as malaria, cholera and sepsis.

In a paper published in April, Schneider and collaborators used physiological measurements of tolerance to predict whether malaria-infected mice would live or die. Schneider hopes that a similar approach will one day help predict whether a patient infected with the malaria parasite or another microbe will get better with moderate treatment or get worse and need more aggressive treatment. “It’s a completely different way of perceiving how a pathogen causes disease,” said Miguel Soares, an immunologist at the Gulbenkian Institute of Science in Portugal. “I think it will have huge implications for how we treat disease.”

Blood-Borne Mystery

When a malaria-infected mosquito bites someone, the parasite enters the bloodstream and infects the liver and eventually red blood cells. Hidden from the immune system within its host cell, the parasite multiplies, ultimately rupturing its cellular refuge. That releases a toxic molecule, damaging surrounding tissue. To recover from malaria, the immune system has to both kill the parasite and repair the damage caused by the parasite and the immune system’s attack on the parasite. In severe cases, people suffer kidney failure, anemia and brain injury.

A 2007 study examining how mice respond to malaria infection was among the first to explore the role of tolerance in this disease. Andrew Read, a biologist at Pennsylvania State University, and collaborators infected various genetic strains of mice with malaria. They found that the same level of infection — the same type and number of parasites — could cause very different effects in different mouse strains. One might look healthy while another looked very sick. “It was a tipping point for people like me,” said Medzhitov, who thought that tolerance “is a fundamental and overlooked concept.”

In 2011, Soares and collaborators showed that mice with a sickle-cell mutation, a gene variant that creates oddly shaped blood cells, had increased tolerance to malaria. The work helps to explain why people living in regions of Africa with high malaria rates are more likely to carry the sickle-cell trait as well. For a long time, scientists assumed that the trait enhanced resistance — that people with the variant were better at fighting infection by the malaria parasite. But people and animals with the sickle-cell trait have to break down and detoxify misshapen red blood cells their entire lives. So when they’re infected with malaria, the molecular machinery for cleaning up the mess is already in place.

If the sickle-cell trait is so helpful, why doesn’t everyone have it? Having one copy of the protective gene boosts tolerance. But possessing two copies is harmful, leading to sickle-cell anemia, a life-threatening condition. The fact that the gene variant persists despite the potential danger suggests that, when it comes to malaria, tolerance is an important means of protection.

The Recovery Loop

Genetics isn’t the only factor that influences tolerance to malaria. In the lab, about 20 percent of a genetically identical group of mice will die if infected with malaria. The rest recover, a phenomenon that has long puzzled scientists. In their April paper, Schneider and collaborators took a first step toward solving this puzzle: they showed that, as soon as the animals were infected, the researchers could predict which ones would die.

The researchers envisioned the journey from infection to illness to recovery as a loop. Some infected animals appear outwardly healthy, develop signs of the illness, then recover to their initial healthy state. In others, a failure to complete the loop results in death. To create a malaria-specific loop, the researchers tracked the health of mice during the course of a malaria infection, measuring several factors, including the number of parasites and different immune cells. They then tried plotting many of these variables against one another, to see which pair of them best resembled the recovery loop the researchers had envisioned.

When they plotted the number of red blood cells against the number of immature red blood cells, they found that animals had a loop that took them from health to illness and back to health. Animals that recovered quickly had tight loops within this plot; those that did poorly had wide loops. The animals whose loops veered off course died. The plots of the doomed mice, for instance, looked different right from the beginning, with a unique ratio of red blood cells to immature red blood cells. Schneider said they don’t yet know why this ratio varies from animal to animal, but he speculates it was in place before the infection — the mouse equivalent of a pre-existing condition.
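For readers who think in code, the trajectory idea can be sketched in a few lines. This is only an illustration of the shape of the analysis: the measurements, sampling points and numbers below are invented, not data from the study.

```python
# A rough sketch of the "recovery loop" analysis: pair two blood
# measurements taken over the course of an infection and examine the
# path they trace, with time left implicit. All numbers are invented
# for illustration; they are not data from the study.

def trajectory(rbc, immature_rbc):
    """Pair red-blood-cell and immature-red-blood-cell counts into a path."""
    return list(zip(rbc, immature_rbc))

def starting_ratio(rbc, immature_rbc):
    """The pre-illness ratio that, per the study, differed in mice that later died."""
    return rbc[0] / immature_rbc[0]

# Two imaginary mice sampled at five points during infection (arbitrary units).
recovering_rbc, recovering_imm = [9.0, 6.0, 4.0, 5.5, 8.5], [0.5, 0.8, 1.6, 1.0, 0.6]
doomed_rbc, doomed_imm = [9.0, 5.0, 2.5, 1.5, 0.8], [1.5, 1.6, 1.7, 1.6, 1.5]

print(trajectory(recovering_rbc, recovering_imm))
print(starting_ratio(recovering_rbc, recovering_imm))  # 18.0
print(starting_ratio(doomed_rbc, doomed_imm))          # 6.0
```

In this two-variable space, the recovering mouse traces a closed loop back toward its starting point, while the doomed mouse’s path drifts away; and, as in the study, the starting ratio alone already distinguishes the two animals before illness sets in.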

“It illustrates the power of analyzing the entire trajectory of disease going from the normal healthy state through recovery and back to health,” said Medzhitov, who was not involved in the study. “It’s a very original and valuable way to look at it.”

The loop approach is quite different from standard immunology studies, which track single variables such as the number of parasites over time. The most powerful aspect of the loop model is that it captures the full course of an infection even though scientists don’t explicitly include time in the analysis. That’s especially important for translating the research into clinical practice. People suffering from malaria probably won’t know exactly when they were bitten by an infected mosquito. For physicians to predict whether a patient will likely recover with standard treatment or need a more aggressive approach, they need a predictive measure that is independent of time, such as the blood-cell ratio.

Schneider’s team validated the model in humans by analyzing published data from children infected with malaria. Children with the sickle-cell trait, who are more resilient to the infection, have red-blood-cell ratios that mimic those of the resilient mice.

In June, the researchers launched a new study to map how genetically diverse strains of mice respond to malaria. The goal of the three-year project, funded by the Defense Advanced Research Projects Agency, or DARPA, is to identify additional genetic factors that drive tolerance.

Schneider and collaborators are also trying to better understand the cycle of infection. They want to figure out if there is a certain spot in the recovery loop that represents a point of no return, a threshold where treatment given beforehand prevents illness but is useless if given afterward.

In one preliminary experiment, scientists treated mice with drugs to kill parasites. If given early in an infection, the animals never got sick. But if the researchers held off treatment until a certain point, all the animals fell ill. “What happens to make that change?” Schneider asked.

Life in Balance

The sickle-cell trait boosts tolerance to malaria by enhancing the body’s existing tools for dealing with stress. Though the details likely vary from infection to infection, scientists theorize that tolerance mechanisms tend to fall under this general umbrella. They co-opt the repair machinery that evolved to deal with other insults.

Soares and others hope that by better understanding these tolerance mechanisms, we can figure out new ways to enhance them. Indeed, the researchers have found that studying tolerance can point to quite unexpected avenues for drug development.

Sepsis, for example, is a potentially deadly complication from infection. It develops when the immune system spirals out of control and releases a flood of inflammatory cytokine molecules. That flood damages blood vessels and other tissues, which can lead to organ failure. “The immune reaction is so strong that people die,” Soares said. The fatality rate for severe sepsis, in fact, is a startling 28 to 50 percent. “It seems to us and others to be a clear case of disrupted tolerance that can’t be solved with antibiotics.”

In 2013, Luis Moita, an immunologist at the Gulbenkian Institute of Science, along with Soares and collaborators, showed how drugs might be used to boost tolerance rather than resistance. The researchers searched for drugs that could stem the flood of cytokines that the body releases in response to infection. They then induced sepsis in mice and treated them with different compounds. Drugs known as anthracyclines, which trigger DNA damage and are sometimes used in chemotherapy for cancer, were found to prevent the sepsis from becoming severe. “This was totally unexpected,” said Dominique Ferrandon, a geneticist at CNRS, the French national center for scientific research.

Scientists theorize that the drugs work by triggering a mild stress that protects against a later, potentially lethal stress, a concept called hormesis. “Our interpretation is that by using a low dose of a DNA-damaging drug, we trigger a DNA-damage response, which might, in addition to repairing DNA, protect the organism against the tissue damage,” Moita said.

The mice still had the same number of pathogens, suggesting that the drugs boost tolerance rather than resistance. “Perhaps we can now screen for drugs that specifically target these kinds of mechanisms,” Soares said.

Malaria treatment faces a similar challenge. Pharmaceutical companies are looking for drugs that kill the malaria pathogen. “But thousands of people die despite the fact that they can kill the pathogen,” Soares said. A complementary approach might be to look for drugs that help infected people survive. Soares has identified the chemical pathway that helps people with the sickle-cell trait tolerate malaria. Drugs exist that target molecules in this pathway, but Soares said he’s unaware of clinical trials testing them for malaria.

CNS – California Judges & Court Workers Blast Pay Proposal

Judges and court officials throughout California have united against a proposal to standardize salaries for the state’s trial court employees, saying it’s likely to be incredibly costly and won’t do much to improve service.

The idea to study uniform classification and compensation levels was suggested by the Commission on the Future of the California Court System, set up last year by Chief Justice Tani Cantil-Sakauye to plot the course of California’s trial courts for the next 10 years.

Michael Roddy, the head clerk at San Diego Superior Court, told the commission at a public comment session in Los Angeles on Monday that though the Lockyer-Isenberg Trial Court Funding Act of 1997 established statewide funding for the courts, there is no corresponding statewide employment structure for court staff.

“While much has been accomplished to achieve the goals of the Trial Court Funding Act, there is little consistency and predictability and equity relating to areas of employment,” Roddy said.

Roddy is a member of the commission’s Fiscal/Court Administration Working Group, where the proposal originated. He told the full commission there are “significant differences” in pay for court employees who do similar jobs throughout the state.

But the idea of homogenizing the way employees are classified and paid doesn’t sit well with California’s judges and court administrators.

Sherri Carter, head clerk at Los Angeles Superior Court, said she opposes the idea for several reasons, including the cost of simply conducting a study to create uniform classification and salary structures statewide.

“A class and compensation study of this magnitude, that would include 58 counties, would be incredibly expensive,” Carter said, noting that the proposal does not identify a source of funding for the study.

Carter said the proposal would cripple innovation and stifle the ability of courts to solve their own fiscal problems creatively.

“During the Great Recession, Los Angeles had to reduce its workforce by 25 percent. Today we are re-engineering our entire court, in every litigation area, in an effort to become more efficient. We are integrating automation, we are streamlining processes, and it will touch every position in the court. The court could not have done either of these big changes if there would have been a statewide class and comp structure in place,” Carter said.

“I believe one major strength of the California trial court system is our ability to promptly, effectively, efficiently and creatively serve our local needs. The governor has challenged us to be innovative. I believe that [this] concept will cripple the branch’s ability to meet this challenge.”

Presiding Judge Elizabeth Johnson, of Trinity County, said her small court thrives on the flexibility of its staff.

“We have a group of clerks who are wonderful generalists. They do everything from probate to traffic and they do it well. And they serve the counter, they work in the courtroom, they take collections, and they do a little bit of self-help.

“With a uniform class and compensation system, this flexibility to adapt would be severely compromised,” Johnson said.

Carter also said the proposal is inconsistent with the California Constitution, an opinion shared by the reformist Alliance of California Judges, which also weighed in.

In a letter to Futures Commission Chairwoman Supreme Court Justice Carol Corrigan, Alliance President Judge Steve White cited Government Code § 77001, a statutory provision granting local courts the authority to manage their own operations and personnel.

“We are baffled by the rationale offered for this concept,” White wrote. “The concept’s authors assert, with no evidentiary support, that local variation in employee pay and classification somehow affects ‘how courts’ users are served.’ We fail to see how. There is no valid reason why a court clerk in Imperial County should be given the same title and pay as a clerk in San Francisco. Their duties and working conditions may vary widely. Their costs of living are vastly different.

“Any attempt to impose uniformity on employee pay and classification would render meaningless the mandate contained in Government Code, § 77001, of ‘a decentralized system of trial court management’ with ‘local authority and responsibility of trial courts to manage day-to-day operations’ and ‘countywide administration of the trial courts.’ If local courts can’t set pay grades and salaries and negotiate with organized labor, they aren’t really administering much of anything.”

At Monday’s comment session, Presiding Judge Risë Jones Pichon, of Santa Clara County, said a uniform statewide employment structure would have done little to help end a recent eight-day strike of 400 court employees, who walked out in early August in a wage dispute.

“One would think that I would be here to say that we applaud a uniform classification and compensation study and system, but no, we don’t,” Pichon said. “There was not anything in the proposal that would have helped us with an impending strike.”

Pichon’s court is one of 54 that signed a letter opposing the concept.

“I think that is very unusual, but speaks to the fact that we all agree that this isn’t a way to improve access to justice or to improve service levels,” Pichon told the commission.

Eleven labor unions representing California court employees also blasted the idea, saying a proposal that assumes uniformity to be better than the localized system demonstrates a lack of understanding of how the courts work.

“The trial courts are varied and the needs of local communities are varied,” the unions wrote in a letter to Corrigan. “The very nature of local employment allows trial courts to tailor their employment needs to the demands of the court and their communities, and establish salaries based on the local labor market and local costs of living. This concept alleges variance in wages and classification titles somehow impacts services. However, there is no evidence whatsoever to support this allegation.”

The unions, which include the Service Employees International Union, the American Federation of State, County and Municipal Employees, and employee associations in Orange, San Diego and San Luis Obispo counties, called the idea a power grab by the centralized court bureaucracy, now called the Judicial Council staff.

“We view this concept as nothing more than a backdoor effort to undo current law and give a centralized bureaucracy with a questionable performance and transparency history the ability to attain more power over the administration of trial courts and trial court employees,” their letter said.

At the close of Monday’s hearing, Corrigan said none of the concepts discussed — which included giving judges the discretion to reduce fines and fees, reducing “non-serious” misdemeanors to infractions, and using digital recording rather than court reporters in some hearings — are firmly established, and the commission’s job is only to make recommendations to the chief justice and note concerns and opposition to the ideas presented in its final report.

“There is nothing etched in stone as of yet,” Corrigan said. “We are in the process of trying to figure out what might work and what wouldn’t work, and what we are not anticipating, as other aspects of these questions come forward.”

Trigger Warnings and Safe Spaces on College Campuses Can Silence Religious Students – The Atlantic

Last week, the University of Chicago’s dean of students sent a welcome letter to freshmen decrying trigger warnings and safe spaces—ways for students to be warned about and opt out of exposure to potentially challenging material. While some supported the school’s actions, arguing that these practices threaten free speech and the purpose of higher education, the note also led to widespread outrage, and understandably so. Considered in isolation, trigger warnings may seem straightforwardly good. Basic human decency means professors like myself should be aware of students’ traumatic experiences, and give them a heads up about course content—photographs of dead bodies, extended accounts of abuse, disordered eating, self-harm—that might trigger an anxiety attack and foreclose intellectual engagement. Similarly, it may seem silly to object to the creation of safe spaces on campus, where members of marginalized groups can count on meeting supportive conversation partners who empathize with their life experiences, and where they feel free to be themselves without the threat of judgment or censure.

In response to the letter, some have argued that the dean willfully ignored or misunderstood these intended purposes to play up a caricature of today’s college students as coddled and entitled. Safe spaces and trigger warnings pose no real threat to free speech, these critics say—that idea is just a specter conjured up by crotchety elites who fear empowered students.

Perhaps. But as a professor of religious studies, I know firsthand how debates about trigger warnings and safe spaces can have a chilling effect on classroom discussions. It’s not my free speech I’m worried about; professors generally feel confident presenting difficult or controversial material, although some may fear for their jobs after seeing other faculty members subjected to intense and public criticism. Students, on the other hand, do not have that assurance. Their ability to speak freely in the classroom is currently endangered—but not in the way some of their peers might think. Although trigger warnings and safe spaces claim to create an environment where everyone is free to speak their minds, the spirit of tolerance and respect that inspires these policies can also stifle dialogue about controversial topics, particularly race, gender, and, in my experience, religious beliefs.

Students should be free to argue their beliefs without fear of being labeled intolerant or disrespectful, whether they think certain sexual orientations are forbidden by God, life begins at the moment of conception, or Islam is the exclusive path to salvation; and conversely, the same freedom should apply to those who believe God doesn’t care about who we have sex with, abortion is a fundamental right, or Islam is based on nothing more than superstitious nonsense. As it stands, that freedom does not exist in most academic settings, except when students’ opinions line up with what can be broadly understood as progressive political values.

Trigger warnings and safe spaces are terms that reflect the values of the communities in which they’re used. The loudest, most prominent advocates of these practices are often the people most likely to condemn Western yoga as “cultural appropriation,” to view arguments about the inherent danger of Islam as hate speech, or to label arguments against affirmative action as impermissible microaggressions. These advocates routinely use the word “ally” to describe those who support their positions on race, gender, and religion, implying that anyone who disagrees is an “enemy.”

Understood in this broader context, trigger warnings and safe spaces are not merely about allowing traumatized students access to education. Whatever their original purpose may have been, trigger warnings are now used to mark discussions of racism, sexism, and U.S. imperialism. The logic of this more expansive use is straightforward: Any threat to one’s core identity, especially if that identity is marginalized, is a potential trigger that creates an unsafe space.

But what about situations in which students encounter this kind of discussion from fellow students? Would a University of Chicago freshman want to express an opinion that might make her someone’s enemy? Would she want to be responsible for intolerant, disrespectful hate speech that creates an unsafe space? Best, instead, to remain silent.

This attitude is a disaster in the religious-studies classroom. As the Boston University professor Stephen Prothero put it in his book God Is Not One, “Students are good with ‘respectful,’ but they are allergic to ‘argument.’” Religion can be an immensely important part of one’s identity—for many, more important than race or sexual orientation. To assert that a classmate’s most deeply held beliefs are false or evil is to attack his or her identity, arguably similar to the way in which asserting that a transgender person is mistaken about their gender is an attack on their identity.

Objections to “anti-Muslim” campus speakers as promoting “hate speech” and creating a “hostile learning environment” vividly illustrate the connection between contentious assertions about religion, trigger warnings, and safe spaces. The claim that Islam—or, by implication, any religious faith—is false or dangerous is indistinguishable from hostile hate speech. To make such a claim in class is to be a potential enemy of fellow students, to marginalize them, disrespect them, and make them feel unsafe. If respect requires refraining from attacking people’s identity, then the only respectful discussion of religion is one in which everyone affirms everyone else’s beliefs, describes those beliefs without passing judgment, or simply remains silent.

As Prothero notes, that’s usually what ends up happening. According to anonymous in-class surveys, about one-third of my students believe in the exclusive salvific truth of Christianity. But rarely do these students defend their beliefs in class. In private, they have told me that they believe doing so could be construed as hateful, hostile, intolerant, and disrespectful; after all, they’re saying that if others don’t believe what they do, they’ll go to hell. Then there are my students, about one-fourth of them, who think no religion is true. They probably agree with Thomas Jefferson that the final book of the New Testament is “merely the ravings of a maniac, no more worthy, nor capable of explanation, than the incoherences of our own nightly dreams.” But they’d never say so in class. This kind of comment would likely seem even worse when directed at religious minorities, including those who practice Judaism, Islam, or Buddhism.

One could make the case that students who refrain from religious debate are making a mistake by confusing religious identity, which is fair game for criticism, with racial and gender identity, which are not. Racial and gender identity deserve special consideration because they are unchosen aspects of one’s biological and historical self, while religious identity is a set of propositions about reality that can be accepted or rejected on the basis of evidence and argument. But this argument is itself controversial. Religion is a part of one’s historical self, and to reject religious beliefs often means rejecting family and friends. (Nor, as Jews can attest, are the categories of religion and race separable.) Religion also has a great deal to say about sex and gender, and may shape people’s perceptions of their own sexuality or gender identity.

The unpleasant truth is that historically marginalized groups, including racial minorities and members of the LGBT community, are not the only people whose beliefs and identities are marginalized on many college campuses. Those who believe in the exclusive truth of a single revealed religion or those who believe that all religions are nonsensical are silenced by the culture of trigger warnings and safe spaces. I know this is true because I know these students are in my classroom, but I rarely hear their opinions expressed in class.

There is no doubt that in America, the perspective of white, heterosexual Christian males has enjoyed disproportionate emphasis, particularly in higher education. Trigger warnings, safe spaces, diversity initiatives, and attention to social justice: all of these are essential for pushing back against this lopsided power dynamic. But there is a very real danger that these efforts will become overzealous and render opposing opinions taboo. Instead of dialogues in which everyone is fairly represented, campus conversations about race, gender, and religion will devolve into monologues about the virtues of tolerance and diversity. I have seen it happen, not only at the University of Chicago, my alma mater, but also at the school where I currently teach, James Madison University, where the majority of students are white and Christian. The problem, I’d wager, is fairly widespread, at least at secular universities.

Silencing these voices is not a good thing for anyone, especially the advocates of marginalized groups who hope to sway public opinion. Take, for example, the idea that God opposes homosexuality, a belief that some students still hold. On an ideal campus, these students would feel free to voice their belief. They would then be confronted by opposing arguments, spoken, perhaps, by the very people whose sexual orientation they have asserted is sinful. At least in this kind of environment, these students would have an opportunity to see the weaknesses in their position and potentially change their minds. But if students do not feel free to voice their opinions, they will remain silent, retreating from the classroom to discuss their position on homosexuality with family, friends, and other like-minded individuals. They will believe, correctly in some cases, that advocates of gay rights see them as hateful, intolerant bigots who deserve to be silenced, which may persuade them to cling with even greater intensity to their convictions.

A more charitable interpretation of the University of Chicago letter is that it is meant to inoculate students against allergy to argument. Modern, secular, liberal education is supposed to combine a Socratic ideal of the examined life with a Millian marketplace of ideas. It is boot camp, not a hotel. In theory, this will produce individuals who have cultivated their intellect and embraced new ideas via communal debate—the kind of individuals who make good neighbors and citizens.

The communal aspect of the debate is important. It demands patience, open-mindedness, empathy, the courage to question others and be questioned, and, above all, a willingness to see things as others do. But even though academic debate takes place in a community, it is also combat. Combat can hurt. It is literally offensive. Without offense there is no antagonistic dialogue, no competitive marketplace, and no chance to change your mind. Impious, disrespectful Socrates was executed in Athens for having the temerity to challenge people’s most deeply held beliefs. It would be a shame to execute him again.