The Great Transmogrification of Atoms to Bits – IEEE Spectrum

As I mentioned in my February 2013 column, “Balancing Act,” the belief that our life offline is separate from our life online has been denounced as digital dualism. But there’s less of a debate when it comes to differentiating between analog objects and digital data. Yes, the print and electronic copies of the same book contain the same words, but it’s obvious to most people (and, increasingly, to researchers) that the two reading experiences are quite different.

We need to understand such differences because the world is going to see a lot more digital data in the near future. This includes born-digital data, which is originally created in an electronic format, as well as born-analog data, which starts life as a physical object and then is reborn digital. A great example of this digitization came earlier this year when the New York Public Library announced that it was making more than 180,000 digitized items available to anyone with an Internet connection, no questions asked.

That librarians would turn themselves into digital curators is no surprise, since as analog curators for the past few centuries they have been constantly bumping into the physical constraints of storage space and material decay. One approach is to get rid of stuff, and librarians and archivists employ a pleasing variety of terms related to the removal of unwanted or duplicate material from their collections: Weeding and culling generally refer to the removal of individual items, while purging, screening, and stripping are most often used for the removal of multiple related items. But the main problem with physical materials is that they possess what archivists call, poetically, inherent vice: the tendency for something to deteriorate over time because of some fault in the material itself (for example, the presence of lignin in cheap paper, which causes the paper to yellow) or the way the material reacts with its surroundings (for instance, the fact that bugs eat some books because they’re attracted to the mold that grows in damp paper).

The digitization of analog materials can solve these problems, and engineers are constantly trying to find faster ways to turn atoms into bits. For now, though, we mostly have to rely on the skills of scanops (scanner operators) to generate those bits, although on their less skilled days those operators end up scanning their own body parts, such as fingers and hands, a phenomenon known as Google hands. Some companies are applying the principles of crowdsourcing and gamification to the digitizing realm, creating leisure activities that let users contribute to the process. (I would be remiss if I didn’t mention the opposite process: turning digital Web documents and data into books and zines, a genre called the printed Web.)

Ideally, digitized data is online (readily available), but it might end up either offline (not available) or nearline (only indirectly available). It can also end up in dark archives (which are inaccessible to the public), dim archives (which are usually inaccessible but can be made accessible), or light archives (another term for those that are fully accessible).
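
For readers building repository software, that vocabulary maps naturally onto a pair of enumerations. The sketch below is illustrative only; the names aren’t drawn from any archival metadata standard:

```python
from enum import Enum

# Illustrative only: these names follow the article's vocabulary,
# not any archival metadata standard.
class Availability(Enum):
    ONLINE = "readily available"
    NEARLINE = "only indirectly available"
    OFFLINE = "not available"

class ArchiveAccess(Enum):
    DARK = "inaccessible to the public"
    DIM = "usually inaccessible, but can be made accessible"
    LIGHT = "fully accessible"
```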

Having digitized some data, the archivist now faces a new problem: the eventual obsolescence of the data structures or media used to store the data, necessitating a format migration (or a media migration) to something newer. Copying the data without changing the format or media type is called refreshing.
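
The distinction is easy to express in code. Here is a minimal sketch of a refresh, assuming plain files on disk: the bits are copied verbatim to new media and then verified, with no decoding or re-encoding of the format (which is what a migration would do):

```python
import hashlib
import shutil
from pathlib import Path

def refresh(src: Path, dst: Path) -> None:
    """Copy an archival file to new media byte for byte (a 'refresh'),
    then verify that nothing changed in transit. A format migration,
    by contrast, would decode and re-encode the content itself."""
    shutil.copy2(src, dst)                 # copy bits and timestamps
    if _sha256(src) != _sha256(dst):
        raise IOError(f"refresh of {src} failed integrity check")

def _sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```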

http://spectrum.ieee.org/computing/it/the-great-transmogrification-of-atoms-to-bits

Machines Just Got Better at Lip Reading – IEEE Spectrum

Soccer aficionados will never forget the headbutt by French soccer great Zinedine Zidane during the 2006 World Cup final. Caught on video camera, Zidane’s attack on Italian player Marco Materazzi after a verbal exchange got him a red card. He left the field, making it easier for Italy to become world champions. The world found out later about Materazzi’s abusive words about Zidane’s female relatives.

“If we had good lip-reading technology Zidane’s reaction could have been explained or they would’ve both gotten sent out,” says Helen Bear, a computer scientist at the University of East Anglia in Norwich, UK. “Maybe the match outcome would be different.”

Bear and her colleague Richard Harvey have come up with a new lip-reading algorithm that improves a computer’s ability to differentiate between sounds—such as ‘p,’ ‘b,’ and ‘m’—that all look similar on the lips. The researchers presented their work at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) in Shanghai.

A machine that reliably reads lips would have uses beyond sport rulings, of course. It could be used to solve crimes or analyze car and airplane accidents based on recorded footage, Bear says. It could help people who go deaf later in life, for whom lip-reading doesn’t come as naturally as it does to those born with the impairment. It could also be used for better movie dubbing.

Lip-reading, or visual speech recognition, involves identifying shapes that the mouth makes and then mapping those to words. It is more challenging than the audio speech recognition systems that are common today. That’s because the mouth assumes only between 10 and 14 shapes, called visemes, while speech has 50 different sounds, called phonemes. So the same viseme can correspond to multiple phonemes.
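
A toy lookup table makes the ambiguity concrete. These groupings are simplified examples for illustration, not the exact viseme classes the researchers used:

```python
# Illustrative groupings only, not the exact classes the researchers
# used; viseme inventories differ from study to study.
VISEME_TO_PHONEMES = {
    "bilabial": ["p", "b", "m"],     # lips pressed together
    "labiodental": ["f", "v"],       # lower lip against upper teeth
    "rounded": ["w", "uw", "ow"],    # rounded lips
}

def candidate_phonemes(viseme: str) -> list[str]:
    """One mouth shape maps to several possible sounds, which is
    exactly what makes lip reading ambiguous."""
    return VISEME_TO_PHONEMES.get(viseme, [])

print(candidate_phonemes("bilabial"))   # ['p', 'b', 'm']
```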

Bear and Harvey have developed a new machine-learning algorithm that more precisely maps a viseme to one particular phoneme. The algorithm involves two training steps. In the first, the computer learns to map a viseme to the multiple phonemes it can represent. In the second, the viseme is duplicated—say, three times if it can look like ‘p,’ ‘b,’ or ‘m’—and each copy is trained on just one of those sounds.
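
In rough pseudocode, the two passes might look like this; the classifier interface and sample format here are hypothetical stand-ins, not the authors’ implementation:

```python
# Rough pseudocode of the two-pass scheme as described above. The
# classifier interface and sample format are hypothetical stand-ins,
# not the authors' implementation.
def train_two_pass(samples, viseme_to_phonemes, train_classifier):
    """samples: list of (features, viseme, phoneme) triples."""
    # Pass 1: one class per viseme, so the model first learns
    # which coarse mouth shape it is seeing.
    coarse = train_classifier([(f, v) for f, v, _ in samples])

    # Pass 2: duplicate each viseme class, one copy per phoneme it
    # can represent, and train each copy on only that sound.
    fine = {}
    for viseme, phonemes in viseme_to_phonemes.items():
        for ph in phonemes:
            subset = [(f, ph) for f, v, p in samples
                      if v == viseme and p == ph]
            fine[(viseme, ph)] = train_classifier(subset, init=coarse)
    return coarse, fine
```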

The data to train the algorithm came from audio and video recordings of 12 speakers (7 men and 5 women) speaking 200 sentences. Bear used a known computer-vision algorithm to extract the shapes of the speakers’ mouths. She then labeled the extracted video data with the appropriate visemes and the audio data with phonemes, and fed both to her training algorithm.

The algorithm identifies sounds correctly 25% of the time, an improvement over past methods, Bear says. And it recognizes words 5% more accurately on average across all speakers—which, Bear says, is a significant increase given the low accuracy of the lip-reading systems developed so far.

http://spectrum.ieee.org/tech-talk/computing/software/machines-just-got-better-at-lip-reading

The Scary Efficiency of Autonomous Intersections – IEEE Spectrum

As you know if you’ve driven anywhere ever, traffic lights are part of a vast conspiracy designed to make it as difficult and time-consuming as possible for you to get where you want to go. Lights change from green to red because someone might be coming from another direction, which isn’t a very efficient way to run things, since you spend so much of your travel time either slowing down, speeding up, or stopped uselessly.

The only reason that we have to suffer through red lights is that humans in general aren’t aware enough, quick enough, or kind enough to safely and reliably take turns through intersections at speed. Autonomous cars, which are far better at both driving and cooperating, don’t need these restrictions. So, with a little bit of communication and coordination, they’ll be able to blast through even the most complex intersections while barely slowing down.

These autonomous intersections are “slot-based,” which means that they operate similarly to the way that air traffic control systems at airports coordinate landing aircraft. Air traffic controllers communicate with all incoming aircraft, and assign each one of them a specific place in the landing pattern. The individual aircraft speed up or slow down on their approach to the pattern, such that they enter it at the right time, in the right slot, and the overall pattern flows steadily. This is important, since fixed-wing aircraft tend to have trouble coming to a stop before landing.
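
A toy version of slot assignment shows the idea: give each arriving car the earliest slot it can reach, then tell it the constant speed that gets it there exactly on time. This is a sketch of the general concept, not the MIT system:

```python
# Toy slot assignment, not the MIT system: a car 'distance_m' meters
# out, traveling at 'speed_mps', is booked into the earliest slot at
# or after its unimpeded arrival time.
def assign_slot(next_free_time: float, distance_m: float,
                speed_mps: float, slot_length_s: float = 2.0):
    eta = distance_m / speed_mps        # arrival with no adjustment
    slot = max(eta, next_free_time)     # earliest slot the car can make
    speed = distance_m / slot           # constant speed that hits it exactly
    return slot, slot + slot_length_s, speed

# A car 200 m out at 15 m/s, intersection busy until t = 14 s:
# it is told to slow to ~14.3 m/s and cross at t = 14 s.
slot_start, next_free, new_speed = assign_slot(14.0, 200.0, 15.0)
```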

The reason that we can’t implement this system in cars is twofold: we don’t have a centralized intersection control system to coordinate between vehicles, and vehicles (driven by humans) don’t communicate their intentions in a reliable manner. But with autonomous cars, we could make this happen for real, and if we do, the advantages would be significant. Using a centralized intersection management and vehicle communication system, slot-based intersections like the one in the video above could significantly boost intersection efficiency. We’ve known this anecdotally for a while, but a new paper from researchers at MIT gives our hunches about the increase in efficiency empirical heft. The MIT team also suggests ways in which traffic flow through intersections like these could be optimized.

Rather than designing traffic management systems so that they prioritize vehicle arrival times on a first come, first served basis, the researchers suggest sending vehicles through in batches—especially as traffic gets heavier. This would involve a slight delay for individual vehicles (since they may have to coordinate with other vehicles to form a batch), but it’s more efficient overall, since batches of cars can trade intersection time better than single vehicles can. The video above shows the batch method, while the video below (from 2012 research at UT Austin) shows a highly complex intersection with coordination of single cars rather than batches.

Simulations suggest that a slot-based system sending through groups of cars could double the capacity of an intersection, while also significantly reducing wait times. In the simplest case (an intersection of two single-lane roads), cars arriving at a rate of one every 3 seconds would experience an average delay of about 5 seconds if they had to wait for a traffic light to turn green. An autonomously controlled intersection would drop that wait time to less than a second. Not bad. But the control system really starts to show its worth as traffic increases. If a car arrives every 2.5 seconds, the average car will be delayed about 10 seconds by a traffic light, whereas the slot-based intersection would hold it up for a second and a half. And as the arriving cars start to overload the capacity of our little intersection at 1 car every 2 seconds, you’d be stuck there for 99 seconds (!) if there’s a light, but delayed only 2.5 seconds under autonomous slot-based control.
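
A deliberately crude queueing model reproduces that pattern, even if its absolute numbers won’t match the paper’s: under a fixed-cycle light, average delay climbs and then blows up once arrivals exceed the light’s capacity, while a shared slot queue barely notices. A minimal sketch, with assumed parameters (30-second phases, queued cars discharging from a stop more slowly than reserved cars crossing at speed):

```python
# A deliberately crude model of a two-road intersection, not the MIT
# simulation. Assumptions: 30 s green / 30 s red per road; queued cars
# discharge from a stop at 1.2 s per car; slot-reserved cars cross at
# speed, one per second, sharing a single reservation queue.
def light_delay(headway, n=2000, green=30.0, red=30.0, discharge=1.2):
    cycle = green + red
    def next_green(t):                    # earliest green instant >= t
        return t if t % cycle < green else t + cycle - t % cycle
    free, total = 0.0, 0.0
    for i in range(n):                    # one road; the other is symmetric
        t = i * headway                   # arrival time
        go = next_green(max(t, free))     # wait for the queue, then for green
        free, total = go + discharge, total + (go - t)
    return total / n                      # grows without bound once overloaded

def slot_delay(headway, n=2000, slot=1.0):
    arrivals = sorted([i * headway for i in range(n)] * 2)   # both roads
    free, total = 0.0, 0.0
    for t in arrivals:
        go = max(t, free)                 # take the earliest free slot
        free, total = go + slot, total + (go - t)
    return total / len(arrivals)

for h in (3.0, 2.5, 2.0):                 # one car every h seconds per road
    print(f"headway {h}: light {light_delay(h):7.1f} s, slots {slot_delay(h):.2f} s")
```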

We should point out that this only works if all of the cars traveling through the intersection are autonomous. One human trying to get through this delicately choreographed scrum would probably cause an enormous number of accidents due to both lack of coordination and unpredictability. For this reason, it seems likely that traffic lights aren’t going to disappear until humans finally give up driving. An interim step, though, might be traffic lights that stay green (in all directions) as long as only autonomous cars are passing through them, reverting to a traditional (that is, frustrating and inefficient) state of operation when a human approaches.

http://spectrum.ieee.org/cars-that-think/transportation/self-driving/the-scary-efficiency-of-autonomous-intersections

Linux at 25: Why It Flourished While Others Fizzled – IEEE Spectrum

Revolution was brewing on the European periphery in the summer of 1991. Years of struggle by outspoken rebels against the status quo were coming to a boil, and a way of life that once seemed unassailable neared its collapse.

Ask most historians to tell you about that revolution and they’ll describe the events that preceded the dissolution of the Soviet Union. As an attempted coup d’état by reactionary hard-liners failed and Boris Yeltsin outlawed the Communist Party, it became clear to the world that the radical fervor that began sweeping across Eastern Europe in the late 1980s would soon undo the once-mighty Soviet empire.

Yet, in another corner of Europe, a revolution of a different sort was stirring. No one—not even its chief instigator—recognized its significance. Nonetheless, the code that an irreverent Finnish college student named Linus Torvalds quietly unveiled in August 1991 has ended up touching at least as many lives as did the political upheavals of the late 20th century. I’m talking, of course, about Linux.

Of course, in 1991 it would have been ludicrous to suggest that Linux would end up as anything notable. Torvalds had less than two years of college-level work to his name when he started writing the Linux kernel—the code that manages an operating system’s core functions—early that year. He worked out of a sparse apartment in Helsinki, where his only computer was an off-the-shelf PC with an Intel 80386 processor, known colloquially as a “386.”

Meanwhile, two major teams of professional programmers at some of the world’s most elite computer-science research labs were developing other free-software kernels. (The term open-source software, more commonly used to describe Linux and similar efforts today, did not come into vogue until 1998.) These teams had each spent years laboring to create a kernel that could do what Linux alone ultimately did.

One team was working on an operating system known as Berkeley Software Distribution, or BSD, which had been in development since the late 1970s at the University of California, Berkeley. BSD was initially conceived as an enhanced version of the Unix operating system, whose code was owned by AT&T. But the BSD project evolved into an endeavor to create a Unix clone that was completely free of AT&T code.

“Free software” is in the eye of the beholder, however, and not everyone was satisfied with the code from Berkeley. On the other side of the United States, in Cambridge, Mass., another team of programmers led by Richard Stallman was building its own freely redistributable Unix clone, called GNU, a recursive acronym for “GNU’s Not Unix!”

Stallman, who viewed the BSD license as problematic because it did not require the source code of derivative works to remain available, started the GNU project in January 1984. Like the BSD developers, Stallman and his team were professional computer scientists. They had access to computer labs at MIT, where Stallman—who in 1990 received a MacArthur “genius” grant for his work on GNU—had been employed prior to launching the GNU project.

GNU’s kernel project, known as the Hurd, was well publicized, as was the release of BSD’s operating system. Even in far-off Helsinki, Torvalds was well aware of both. And yet he wrote the Linux kernel anyway. Why?

The answer has several parts. The simplest is that Torvalds did it “just for fun,” as he explained in his autobiography. To him, Linux was a learning exercise. He was especially keen on using it to familiarize himself with the peculiarities of the 386 PC he had recently acquired.

Torvalds’s second motivation stemmed from the deficiencies that he found in the operating system he was running at the time—Minix. He used it because BSD’s Net/2 did not run on PC hardware and because GNU was not yet complete. Minix was yet another Unix-like operating system, released in 1987 by Andrew Tanenbaum, a professor of computer science at VU Amsterdam.

It would’ve made more sense if the Hurd, BSD, or even Minix had caught on. But Linux proved far more enduring. Why?

Most observers of the open-source software movement have attributed Linux’s success to fortunate timing. Lars Wirzenius, a Finnish programmer who shared an office with Torvalds at the University of Helsinki when they were students, noted at the 1998 Linux Expo that “if the Hurd had been finished a few years ago, Linux probably wouldn’t exist today. Or the BSD systems might have taken over the free operating system marketplace.”

Tanenbaum made a similar point when he wrote in the early 2000s that legal troubles surrounding BSD in the 1990s “gave Linux the breathing space it needed to catch on.” Those troubles began with a lawsuit in 1992 by Unix System Laboratories (USL), which at the time owned the Unix trademark. The company alleged that distributors of BSD-based software had improperly incorporated Unix code into their products. The case was settled out of court the next year, but then the Regents of the University of California countersued, claiming that USL had not properly credited the university for BSD code that was present in Unix.

The momentum that Linux established in the early 1990s on the foundation of its fortunate timing, copyleft licensing, and lack of commercial ambitions among its core developers has sustained the kernel for 25 years. As of June 2015, Linux totaled 19.5 million lines of code—up considerably from just over 10,000 in 1991 and about 250,000 at the start of 1995. It is the work of more than 12,000 individual authors, some of whom have contributed just a few lines of code, others vast amounts. On average, 7.71 updates make their way into the kernel code every hour.

Linux is now used not just in many of the machines you’d recognize as a computer but also for embedded applications, meaning you’ll find Linux in things like your wireless router, your e-reader, and your smart thermostat. A 2008 study estimated the kernel’s total worth in monetary terms—difficult as that is to quantify for something often available for free—to be $1.4 billion. By now it must be several times this figure.

http://spectrum.ieee.org/computing/software/linux-at-25-why-it-flourished-while-others-fizzled

Meet Remaiten – a Linux bot on steroids targeting routers and potentially other IoT devices

ESET researchers are actively monitoring malware that targets embedded systems such as routers, gateways and wireless access points. Recently, we discovered a bot that combines the capabilities of Tsunami (also known as Kaiten) and Gafgyt. It also provides some improvements as well as a couple of new features. We call this new threat Linux/Remaiten. So far, we have seen three versions of Linux/Remaiten that identify themselves as versions 2.0, 2.1 and 2.2. Based on artifacts found in the code, the authors call this new malware “KTN-Remastered” or “KTN-RM”.

A prominent feature of Linux/Gafgyt is telnet scanning. When instructed to perform telnet scanning, it tries to connect to random IP addresses reachable from the Internet on port 23. If the connection succeeds, it will try to guess the login credentials from an embedded list of username/password combinations. If it successfully logs in, it issues a shell command to download bot executables for multiple architectures and tries to run them. This is a simple albeit noisy way of infecting new victims, as it is likely one of the binaries will execute on the running architecture.

Linux/Remaiten improves upon this spreading mechanism by carrying downloader executables for CPU architectures that are commonly used in embedded Linux devices such as ARM and MIPS. After logging on via the telnet prompt of the victim device, it tries to determine the new victim device’s platform and transfer only the appropriate downloader. This downloader’s job is to request the architecture-appropriate Linux/Remaiten bot binary from the bot’s C&C server. This binary is then executed on the new victim device, creating another bot for the malicious operators to use.

http://www.welivesecurity.com/2016/03/30/meet-remaiten-a-linux-bot-on-steroids-targeting-routers-and-potentially-other-iot-devices/

Watch out, Waze: INRIX’s new Traffic app is coming for you | Ars Technica

You may not have heard of INRIX, a traffic data company based in Kirkland, Washington. But if your car’s navigation system has real-time traffic data, there’s a good chance you’ve been using its services. For example, the Audi A4 and Tesla Model X we drove earlier this month get real-time traffic data from INRIX. In the BMW i3 and i8, INRIX provides the range finder service that lets you know how far you can go before needing to recharge (and where you can do that).

Today, the company is taking aim at the mighty Waze with a new smartphone app that leverages its vast crowdsourced traffic database.

The addition of live traffic updates may have been almost as significant as satellite navigation itself. Suddenly, a navigation system wasn’t just useful for finding your way on unfamiliar roads. Even on a regular commute—one you might drive five days a week, year after year—a navigation system could alert you to jams, roadwork, or hazards.

INRIX has been in on this action, offering iOS and Android traffic apps for a while now, but this latest version looks like a big upgrade in functionality. For one thing, it adds turn-by-turn navigation. “So what,” you’re probably thinking, given that the default map apps for both platforms already have this capacity. In this case, however, the navigation is informed by INRIX’s real-time traffic database, which is constantly updated via the 275 million cars and devices (worldwide, not just in the US) that are already using INRIX’s services.

Perhaps the more interesting aspect is the use of machine learning. Rather than having to manually enter favorite or frequently visited places, Traffic will keep tabs on your routine and work out for itself your frequently traveled routes, including the times of day you make those trips. Like Waze, the app will interface with your calendar and alert you to the best time to get on the road.

And you can also plan trips based on when you need to arrive at a given destination (a feature Waze recently added for iOS, but which has yet to make it to Android). A neat touch here is a little more customization of alerts: you can choose how much advance notice you’re given when it’s time to get on the road, and INRIX (using its cloud servers) will keep tabs on conditions and adjust the notification based on the predicted congestion.
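
The alert logic described here presumably reduces to something like the following hypothetical sketch, with the travel-time prediction recomputed as congestion forecasts change:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the alert logic: fire the notification
# 'notice' minutes before the latest departure time, recomputing the
# travel-time prediction as congestion forecasts change.
def alert_time(arrival_deadline: datetime,
               predicted_travel: timedelta,
               notice: timedelta = timedelta(minutes=10)) -> datetime:
    leave_by = arrival_deadline - predicted_travel
    return leave_by - notice

meeting = datetime(2016, 4, 1, 9, 0)
print(alert_time(meeting, predicted_travel=timedelta(minutes=35)))
# 2016-04-01 08:15 -- leave by 08:25, alerted ten minutes earlier
```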

It also takes a slightly different philosophical approach to finding you the quickest route. As anyone who uses Waze knows, these apps are always on the lookout to shave time off your route. This often means sending you through sleepy residential streets replete with speed bumps or telling you to make left turns across busy roads. INRIX takes a different approach, according to Joel Karp, INRIX’s director of product management.

“A great example of this: I just moved up from Los Angeles to Seattle (where INRIX is headquartered), and whenever I needed a route, it always took me on the major freeways, which I never drove. Because we have machine learning and we’re learning your preferred route, and because we think humans are creatures of habit who are unlikely to deviate off that route unless there’s a significant time delta—not 10 seconds, not 45 seconds, not two minutes, we think it’s six-plus minutes—we’re monitoring that route to make sure it’s the right one for you to take that day and if not, whether you know what the better route is, as opposed to just serving up any old route,” he told Ars.
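
Paraphrased in code, Karp’s heuristic might look like this; the six-minute threshold is the only number taken from his quote, and everything else is assumed:

```python
# Paraphrase of the heuristic Karp describes: keep the user's habitual
# route unless an alternative beats it by a large margin. Only the
# six-minute figure comes from his quote; the rest is assumed.
REROUTE_THRESHOLD_MIN = 6.0

def choose_route(habitual_eta_min: float, best_alt_eta_min: float) -> str:
    saving = habitual_eta_min - best_alt_eta_min
    return "alternative" if saving > REROUTE_THRESHOLD_MIN else "habitual"

print(choose_route(34.0, 30.0))   # "habitual": 4 minutes isn't enough
print(choose_route(34.0, 26.0))   # "alternative": saves 8 minutes
```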

http://arstechnica.com/cars/2016/03/watch-out-waze-inrixs-new-traffic-app-is-coming-for-you/

MIT Media Lab Changes Software Default to FLOSS* — MIT MEDIA LAB — Medium

The MIT Media Lab is part of an academic ecosystem committed to liberal sharing of knowledge. In that spirit, I’m proud to announce that we are changing our internal procedures to encourage more free and open-source software.

Previously, software releases using free and open source licenses were approved by an internal committee. But since we’ve always allowed our developers to open-source their work, we’re eliminating that unnecessary hurdle: from now on, open source will be treated as the default, and any open-source request will be automatically approved. We respect the autonomy of our community members and will continue to let them choose whether to release their software as proprietary or open. But removing the open-source approval step will level the playing field.

This change is a reflection of preferences within our community, as well as an acknowledgement of our position in an increasingly interconnected world. Encouraging free and open source software realigns our policies with our mission. As an academic institution, we believe that in many cases we can achieve greater impact by sharing our work.

https://medium.com/mit-media-lab/mit-media-lab-changes-software-default-to-floss-4305e478e40

Microsoft and Canonical partner to bring Ubuntu to Windows 10 | ZDNet

According to sources at Canonical, Ubuntu Linux’s parent company, and Microsoft, you’ll soon be able to run Ubuntu on Windows 10.

This will be more than just running the Bash shell on Windows 10. After all, thanks to programs such as Cygwin or MSYS utilities, hardcore Unix users have long been able to run the popular Bash command line interface (CLI) on Windows.

With this new addition, Ubuntu users will be able to run Ubuntu simultaneously with Windows. This will not be in a virtual machine, but as an integrated part of Windows 10.

The details won’t be revealed until tomorrow morning’s keynote speech at Microsoft Build. It is believed that Ubuntu will run on top of Windows 10’s recently and quietly introduced Linux subsystem in a new Windows 10 Redstone build.

Microsoft and Canonical will not, however, sources say, be integrating Linux per se into Windows. Instead, Ubuntu will primarily run on a foundation of native Windows libraries. This would indicate that while Microsoft is still hard at work on bringing containers to Windows 10 in project Barcelona, this isn’t the path Ubuntu has taken to Windows.

http://www.zdnet.com/article/microsoft-and-canonical-partner-to-bring-ubuntu-to-windows-10/

1,400+ vulnerabilities found in automated medical supply system

Security researchers have discovered 1,418 vulnerabilities in CareFusion’s Pyxis SupplyStation system – automated cabinets used to dispense medical supplies – that are still being used in the healthcare and public health sectors in the US and around the world.

The vulnerabilities can be exploited remotely by attackers with low skills, and exploits that target these vulnerabilities are publicly available, ICS-CERT has warned in an advisory.

The worst part of it is that the affected versions of the software are at end-of-life, and won’t be receiving a patch even though they are widely used.

The flaws are present in seven different third-party vendor software packages bundled in the vulnerable system, including MS Windows XP, Symantec Antivirus 9, and Symantec pcAnywhere 10.5.

https://www.helpnetsecurity.com/2016/03/30/1400-flaws-automated-medical-supply-system/

CNBC just collected your password and shared it with marketers | Computerworld

CNBC inadvertently exposed people’s passwords after it ran an article Tuesday that ironically was intended to promote secure password practices.

The story was removed from CNBC’s website shortly after it ran following a flurry of criticism from security experts. Vice’s Motherboard posted a link to the archived version.

Embedded within the story was a tool in which people could enter their passwords. The tool would then evaluate a password and estimate how long it would take to crack it.
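
Password-strength meters of this kind typically run a naive exhaustive-search estimate along these lines (a toy sketch, not CNBC’s actual code):

```python
import string

# Toy estimate, not CNBC's tool: assume an attacker must try every
# string over the password's character classes, at a fixed guess rate.
# Real attacks use wordlists and leak statistics, so treat this as a
# loose upper bound on cracking time.
def naive_crack_time_seconds(password: str,
                             guesses_per_second: float = 1e10) -> float:
    charset = 0
    if any(c in string.ascii_lowercase for c in password): charset += 26
    if any(c in string.ascii_uppercase for c in password): charset += 26
    if any(c in string.digits for c in password): charset += 10
    if any(c in string.punctuation for c in password): charset += 32
    return charset ** len(password) / guesses_per_second

print(naive_crack_time_seconds("hunter2"))   # seconds, worst case
```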

A note said the tool was for “entertainment and educational purposes” and would not store the passwords.

That turned out not to be accurate, and it wasn’t the tool’s only problem.

Adrienne Porter Felt, a software engineer with Google’s Chrome security team, spotted that the article wasn’t delivered using SSL/TLS (Secure Socket Layer/Transport Layer Security) encryption.

SSL/TLS encrypts the connection between a user and a website, scrambling the data that is sent back and forth. Without SSL/TLS, someone on the same network can see data in clear text and, in this case, any password sent to CNBC.
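
To see what “clear text” means in practice, here is roughly what a non-TLS form submission looks like on the wire (the endpoint is hypothetical). With HTTPS, everything below—path, headers, and password—would be encrypted:

```python
# What an eavesdropper on the network path sees when a form posts over
# plain HTTP: the password travels as readable text. With HTTPS, this
# entire block (path, headers, and body) would be encrypted.
request = (
    "POST /password-check HTTP/1.1\r\n"
    "Host: example.com\r\n"                      # hypothetical endpoint
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Content-Length: 16\r\n"
    "\r\n"
    "password=hunter2"
)
print(request)
```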

“Worried about security? Enter your password into this @CNBC website (over HTTP, natch). What could go wrong,” Felt wrote on Twitter. “Alternately, feel free to tweet your password @ me and have the whole security community inspect it for you.”

The form also sent passwords to advertising networks and other parties with trackers on CNBC’s page, according to Ashkan Soltani, a privacy and security researcher, who posted a screenshot.

The companies that received copies of the passwords included Google’s DoubleClick advertising service and Scorecard Research, an online marketing company that is part of comScore.

Despite saying the tool would not store passwords, traffic analysis showed it was actually storing them in a Google Docs spreadsheet, according to Kane York, who works on the Let’s Encrypt project.

“The ‘submit’ button loads your password into a @googledocs spreadsheet!” York wrote.

http://www.computerworld.com/article/3049414/security/cnbc-just-collected-your-password-and-shared-it-with-marketers.html