How Academia and Publishing are Destroying Scientific Innovation: A Conversation with Sydney Brenner

[Note:  This item comes from reader Randall Head.  DLH]

From: Randall Webmail <>
Subject: How Academia and Publishing are Destroying Scientific Innovation: A Conversation with Sydney Brenner
Date: February 28, 2014 at 18:44:51 PST
To: Dewayne Hendricks <>

“The most important thing today is for young people to take responsibility, to actually know how to formulate an idea and how to work on it. Not to buy into the so-called apprenticeship. I think you can only foster that by having sort of deviant studies. That is, you go on and do something really different. Then I think you will be able to foster it.

“But today there is no way to do this without money. That’s the difficulty. In order to do science you have to have it supported. The supporters now, the bureaucrats of science, do not wish to take any risks. So in order to get it supported, they want to know from the start that it will work. This means you have to have preliminary information, which means that you are bound to follow the straight and narrow. 

“There’s no exploration any more except in a very few places. You know like someone going off to study Neanderthal bones. Can you see this happening anywhere else? No, you see, because he would need to do something that’s important to advance the aims of the people who fund science.”


How Academia and Publishing are Destroying Scientific Innovation: A Conversation with Sydney Brenner
By Elizabeth Dzeng
Feb 24 2014

I recently had the privilege of speaking with Professor Sydney Brenner, a professor of genetic medicine at the University of Cambridge and Nobel Laureate in Physiology or Medicine in 2002. I had originally intended to ask him about Professor Frederick Sanger, the two-time Nobel Prize winner famous for his discovery of the structure of proteins and his development of DNA sequencing methods, who passed away in November. I wanted to do the classic tribute by exploring his scientific contributions and getting a first-hand account of what it was like to work with him at Cambridge’s Medical Research Council (MRC) Laboratory of Molecular Biology (LMB) and at King’s College, where they were both fellows. What transpired instead was a fascinating account of the LMB’s quest to unlock the genetic code and a critical commentary on why our current scientific research environment makes this kind of breakthrough unlikely today.

It is difficult to exaggerate the significance of Professor Brenner and his colleagues’ contributions to biology. Brenner won the Nobel Prize for establishing Caenorhabditis elegans, a type of roundworm, as the model organism for cellular and developmental biological research, which led to discoveries in organ development and programmed cell death. He made his breakthroughs at the LMB, where, beginning in the 1950s, an extraordinary series of innovations elucidated our understanding of the genetic code. This code governs the process by which cells in our body translate information stored in our DNA into proteins, molecules vital to the structure and functioning of cells. It was here that James Watson and Francis Crick discovered the double-helical structure of DNA. Brenner was one of the first scientists to see this ground-breaking model, driving from Oxford, where he was working at the time in the Department of Chemistry, to Cambridge to witness this breakthrough. This young group of scientists, considered renegades at the time, made a series of revolutionary discoveries that ultimately led to the creation of a new field called molecular biology.

To begin our interview, I asked Professor Brenner to speak about Professor Sanger and what led him to his Nobel Prize winning discoveries.

Sydney Brenner: Fred realized very early on that if we could sequence DNA, we would have direct contact with the genes. The problem was that you couldn’t get hold of genes in any way. You couldn’t purify what was a gene. That is why right from the start in 1954, we decided we would do this by using Fred’s method of sequencing proteins, which he had achieved [proteins are derived from the information held in DNA]. You have to realise it was only on a small scale. I think there were only forty-five amino acids [the building blocks of proteins] that were in insulin. We thought even scaling that up for proteins would be difficult. But finally DNA sequencing was invented. Then it became clear that we could directly approach the gene, and it produced a completely new period in science.

He was interested in the method and interested in getting the methods to work. I was really clear in my own mind that what he did in DNA sequencing, even at the time, would cause a revolution in the subject, which it did. And of course we immediately, as fast as possible, began to use these methods in our own research.

ED: This foundational research ushered in a new era of biological science. It has formed the basis of nearly all subsequent discoveries in the field, from understanding the mechanisms of diseases, to the development of new drugs for diseases such as cancer. Imagining the creative energy that drove these discoveries was truly inspirational, and so, I asked Professor Brenner what it felt like to be part of this scientific adventure.

SB: I think it’s really hard to communicate that because I lived through the entire period from its very beginning, and it took on different forms as matters progressed. So it was, of course, wonderful. That’s what I tell students. The way to succeed is to get born at the right time and in the right place. If you can do that then you are bound to succeed. You have to be receptive and have some talent as well.

ED: Today, the structure of DNA and how genetic information is translated into proteins are established scientific canon, but in the 1950s, the hypotheses generated at the LMB were dismissed as inconceivable nonsense.

SB: To have seen the development of a subject, which was looked upon with disdain by the establishment from the very start, actually become the basis of our whole approach to biology today. That is something that was worth living for.

I remember Francis Crick gave a lecture in 1958, in which he discussed the adapter hypothesis at the time. He proposed that there were twenty enzymes, which linked amino acids to twenty different molecules of RNA, which we call adapters. It was these adapters that lined up the amino acids. The adapter hypothesis was conceived I think as early as 1954 and of course it was to explain these two languages: DNA, the language of information, and proteins, the language of work.

Of course that was a paradox, because how did you get one without the other? That was solved by discovering that a molecule from RNA could actually have function. So this information on RNA, which happened much later really, solved that problem as far as origins were concerned.

ED: (Professor Brenner was far too modest here, as it was he who discovered RNA’s critical role in this translation from gene to protein.)

SB: So he [Crick] gave the lecture and biochemists stood up in the audience and said this is completely ridiculous, because if there were twenty enzymes, we biochemists would have already discovered them. To them, the fact that they still hadn’t went to show that this was nonsense. Little did the man know that at that very moment scientists were in the process of finding the very first of these enzymes, which today we know are the enzymes that combine amino acids with transfer RNA. And so you really had to say that the message kept its purity all the way through.

What people don’t realise is that at the beginning, it was just a handful of people who saw the light, if I can put it that way. So it was like belonging to an evangelical sect, because there were so few of us, and all the others sort of thought that there was something wrong with us.

They weren’t willing to believe. Of course they just said, well, what you’re trying to do is impossible. That’s what they said about crystallography of large molecules. They just said it’s hopeless. It’s a hopeless task. And so what we were trying to do with the chemistry of proteins and nucleic acids looked hopeless for a long time. Partly because they didn’t understand how they were built, which I think we molecular biologists had the first insight into, and partly because they just thought they were amorphous blobs and would never be able to be analysed.

I remember when going to London to talk at meetings, people used to ask me what am I going to do in London, and I used to tell them I’m going to preach to the heathens. We viewed most of everybody else as not doing the right science. Like one says, the young Turks will become old Greeks. That’s the trouble with life. I think molecular biology was marvellous because every time you thought it was over and it was just going to be boring, something new happened. It was happening every day.

So I don’t know if you can ride on the crest of a wave; you can ride on it, I believe, forever. I think that being in science is the most incredible experience to have, and I now spend quite a lot of my time trying to help the younger people in science to enjoy it and not to feel that they are part of some gigantic machine, which a lot of people feel today.

ED: I asked him what inspired them to maintain their faith and pursue these revolutionary ideas in the face of such doubt and opposition.

SB: Once you saw the light you were just certain that you had to be right, that it was the right way to do it and the right answer. And of course our faith, if you like, has been borne out. 

I think it would have been difficult to keep going without the strong support we had from the Medical Research Council. I think they took a big gamble when they founded that little unit in the Cavendish. I think all the early people they had were amazing. There were amazing personalities amongst them.

This was not your usual university department, but a rather flamboyant and very exceptional group that was meant to get together. An important thing for us was that with the changes in America then, from the late fifties almost to the present day, there was an enormous stream of talent and American postdoctoral fellows that came to our lab to work with us. But the important thing was that they went back. Many of them are now leaders of American molecular biology, who are alumni of the old MRC.

ED: The 1950s to 1960s at the LMB were a renaissance of biological discovery, when a group of young, intrepid scientists made fundamental advances that overturned conventional thinking. The atmosphere and camaraderie reminded me of another esteemed group of friends at King’s College – the Bloomsbury Group, whose members included Virginia Woolf, John Maynard Keynes, E. M. Forster, and many others. Coming from diverse intellectual backgrounds, these friends shared ideas and attitudes, which inspired their writing and research. Perhaps there was something about the nature of the Cambridge college system that allowed for such revolutionary creativity?

SB: In most places in the world, you live your social life and your ordinary life in the lab. You don’t know anybody else. Sometimes you don’t even know other people in the same building, these things become so large.

The wonderful thing about the college system is that it’s broken up again into a whole different unit. And in these, you can meet and talk to, and be influenced by and influence people, not only from other scientific disciplines, but from other disciplines. So for me, and I think for many others as well, that was a really important part of intellectual life. That’s why I think people in the college have to work to keep that going.

Cambridge is still unique in that you can get a PhD in a field in which you have no undergraduate training. So I think that structure in Cambridge really needs to be retained, although I see so often that rules are being invented all the time. In America you’ve got to have credits from a large number of courses before you can do a PhD. That’s very good for training a very good average scientific professional. But that training doesn’t allow people the kind of room to expand their own creativity. But expanding your own creativity doesn’t suit everybody. For the exceptional students, the ones who can and probably will make a mark, they will still need institutions free from regulation.

ED: I was excited to hear that we had a mutual appreciation of the college system, and its ability to inspire interdisciplinary work and research. Brenner himself was a biochemist also trained in medicine, and Sanger was a chemist who was more interested in chemistry than biology.

SB: I’m not sure whether Fred was really interested in the biological problems, but I think the methods he developed, he was interested in achieving the possibility of finding out the chemistry of all these important molecules from the very earliest.

ED: Professor Brenner noted that these scientific discoveries required a new way of approaching old problems, which resist traditional disciplinary thinking.

SB: The thing is to have no discipline at all. Biology got its main success by the importation of physicists that came into the field not knowing any biology and I think today that’s very important.

I strongly believe that the only way to encourage innovation is to give it to the young. The young have a great advantage in that they are ignorant.  Because I think ignorance in science is very important. If you’re like me and you know too much you can’t try new things. I always work in fields of which I’m totally ignorant.

ED: But he felt that young people today face immense challenges as well, which hinder their ability to creatively innovate.

SB: Today the Americans have developed a new culture in science based on the slavery of graduate students. Now the graduate student at an American institution is afraid. He just performs. He’s got to perform. The post-doc is an indentured labourer. We now have labs that don’t work in the same way as the early labs, where people were independent, where they could have their own ideas and could pursue them.

The most important thing today is for young people to take responsibility, to actually know how to formulate an idea and how to work on it. Not to buy into the so-called apprenticeship. I think you can only foster that by having sort of deviant studies. That is, you go on and do something really different. Then I think you will be able to foster it.

But today there is no way to do this without money. That’s the difficulty. In order to do science you have to have it supported. The supporters now, the bureaucrats of science, do not wish to take any risks. So in order to get it supported, they want to know from the start that it will work. This means you have to have preliminary information, which means that you are bound to follow the straight and narrow. 

There’s no exploration any more except in a very few places. You know like someone going off to study Neanderthal bones. Can you see this happening anywhere else? No, you see, because he would need to do something that’s important to advance the aims of the people who fund science.

I think I’ve often divided people into two classes: Catholics and Methodists. Catholics are people who sit on committees and devise huge schemes in order to try to change things, but nothing’s happened. Nothing happens because the committee is a regression to the mean, and the mean is mediocre. Now what you’ve got to do is good works in your own parish. That’s a Methodist. 


Losing a Generation of Scientists

[Note:  This item comes from reader Geoff Goodfellow.  DLH]

From: the keyboard of geoff goodfellow <>
Subject: Losing a Generation of Scientists
Date: February 28, 2014 at 14:52:46 PST
To: Dewayne Hendricks <>, Dave Farber <>, ip <>

Goodbye Academia
By Lenny Teytelman
Feb 14 2014

I have enjoyed research and teaching for the last twelve years. Yet a week ago I resigned from my postdoctoral position at MIT, giving up on the dream of an academic position. I feel liberated and happy, and this is a very bad sign for the future of life sciences in the United States…


How Ars readers cope with bad Netflix and YouTube performance
VPNs, DNS, and… DSL? If only we could just get some broadband competition.
By Jon Brodkin
Feb 28 2014

One of the most common complaints from Internet users is how slow streaming video services like YouTube and Netflix can be. There are various reasons for bad performance, ranging from technical glitches to business conflicts, but when low-quality video is the result, it’s frustrating and hard to avoid.

That doesn’t mean that users can’t try a variety of methods to speed up that video. This week, we asked Ars readers to share their streaming video strategies, and we got more than 100 responses. Here are some of the most interesting.

VPNs, DNS, and proxy servers

As we’ve written, a VPN (virtual private network) or third-party DNS (Domain Name System) service can improve streaming performance by routing traffic away from congested links. The tactic can also backfire, since it tends to force video traffic over a longer physical path. But numerous users said the strategy has indeed worked wonders for them.

“YouTube is absolutely terrible from 7pm to 11pm ET,” rodalpho, a Time Warner Cable customer, commented. Blocking the IP addresses of certain YouTube caches didn’t work, nor did switching DNS providers, but a VPN did the trick.

“It’s effectively unusable during those time periods,” rodalpho wrote. “I tried blocking their cache CDNs ( and, no dice. I tried switching to Google DNS and easyDNS, no improvement. Activating a VPN immediately solved my problem, but I don’t want to pay for a VPN just to make YouTube work. I want Time Warner to fix the problem. It’s been over a year, and they haven’t, and I don’t expect them to. So I just don’t use YouTube during prime time.”

Similarly, MatthiasF of Maryland routes traffic through a proxy service to improve YouTube quality. “The very few times YouTube messes up for me here in Maryland, I use a proxy located in Texas,” MatthiasF wrote. “It seems like the connections to the Northeast YouTube servers (South Carolina) get congested every so often, but the ones in Oklahoma are usually fine (probably because of the hour difference and far fewer people).”

Gordon942 of Berkeley, CA reports using a VPN to fix Netflix on Comcast. “Between about 4:30pm and midnight, Netflix on Comcast will only play extremely low quality SD and will buffer for a long time about every 30 seconds,” the commenter wrote. “To fix this, I signed up for a VPN from Private Internet Access. It’s about $7 a month, and it totally fixes the problem. With the VPN active, I get instantaneous HD with no buffering any time of the day.”

Netflix users on Comcast should start seeing better quality even without a VPN because of a new agreement between the companies to exchange traffic directly.

Ars commenters noted that connecting to a VPN on a computer is easy enough, but getting all video-capable devices onto a VPN may require reconfiguring a router and can thus be a hassle.

Eurynom0s, a Verizon FiOS user in Washington, DC, wrote that a VPN improves Netflix on a computer, but it’s still unwatchable on a TV not connected to a VPN. “Last night I had to use Netflix from my computer on my TV to get a watchable streaming quality despite having a smart TV with a Netflix app (since it’s a 60″, the shitty quality is ESPECIALLY noticeable, plus last night it kept pausing to rebuffer, too). When I plugged my computer in instead, I instantly got better quality,” the commenter wrote. “Now I’m going to have to dig out my old Linksys router with Tomato on it and try to figure out if I can set it up as a VPN tunnel for the TV. At least I already had the VPN and already own a router, but it’s still unbelievable that I have to waste my time setting all this up just to get what I’m already paying for.”

The extra effort is worth it, says Borzwazie, a Comcast user, who set up a wireless router to connect all devices on the local network to Private Internet Access. “The difference is night and day,” Borzwazie wrote. “I’m actually getting the bandwidth I pay for now with Comcast (25 down/5 up). Netflix and YouTube run great, and I can actually stream HD now. Web browsing is snappy. Steam downloads are fast. There’s occasional variability, but overall I am very satisfied. If this change isn’t evidence of the network shenanigans at Comcast, I don’t know what is.”

Getting more advanced

Other users got a little more complicated. DemBones79, a Verizon FiOS user in Maryland, “manually block[ed] the YouTube IP ranges specified in the Mitch Ribar post, though I did it in the router itself. This helped a lot for YouTube (though there’s still the occasional troublesome video).” DemBones79 also “manually specified the OpenDNS IP addresses for my DNS server in the router.” This has apparently helped speed up browsing on some websites, but Netflix is still troublesome.

“Between Verizon’s terrible Netflix performance and Netflix’s frankly embarrassing ‘curation’ of their video catalog, I’m beginning to wonder why I maintain the subscription,” the commenter wrote.

Geese, a commenter in Virginia on Verizon FiOS, wrote that “Netflix is damn near unusable during late primetime (8pm EST to 1am EST). I pay for a 75 down/35 up connection because I run some company testing/lab servers in a basement rack. I can download anything at 5MB/s whenever I want, but streaming from Netflix is like snorkeling with a coffee stirrer whenever it’s prime time.”


Tenn. State University Requires Students to Wear Trackable IDs

[Note:  This item comes from reader Geoff Goodfellow.  DLH]

From: the keyboard of geoff goodfellow <>
Subject: University Students Required to Wear Trackable IDs
Date: February 28, 2014 at 14:23:14 PST
To: Dewayne Hendricks <>, Dave Farber <>

Tenn. State University Requires Students to Wear Trackable IDs
By Alec Torres
Feb 28 2014

Beginning Saturday, March 1, students and staff at Tennessee State University will be required at any time to present identification badges that can also track their movements in and out of buildings, …


Cellular’s open source future is latched to tallest tree in the village
Open source tech that could reshape mobile connects a West Papua village to the world.
By Sean Gallagher
Feb 27 2014

Deep in the jungles of West Papua’s central highlands, there is a village with its own mobile telecommunications network. That network runs in a box latched to the top of a tree, providing the only reliable cell coverage anywhere within a four-hour drive. This small setup has created a booming local mobile economy—and it could be the harbinger of a whole new class of private and community mobile networks that change the shape of mobile for those who have been underserved or overcharged by traditional phone carriers.

The single “tower” cell network is the work of graduate students from the University of California at Berkeley’s Technology and Infrastructure for Emerging Regions (TIER) research group, under the direction of Professor Eric Brewer—the founder of the content delivery network Inktomi. The group built its mobile solution with software developed in San Francisco and some off-the-shelf hardware adapted for the task. Working with the Methodist church-owned school Misionaris Sekolahin and local merchants, a TIER team led by graduate students Kurtis Heimerl, Shaddi Hasan, and Kashif Ali gave this village of about 1,500 people its first local phone network—and a much-needed connection to the outside world.

And that network runs on open source. OpenBTS, an all-software cellular transceiver, is at the heart of the network running on that box attached to a treetop. Someday, if those working with the technology have their way, it could do for mobile networks what TCP/IP and open source did for the Internet. The dream is to help mobile break free from the confines of telephone providers’ locked-down spectrum, turning it into a platform for the development of a whole new range of applications that use spectrum “white space” to connect mobile devices of every kind. It could also democratize telecommunications around the world in unexpected ways. Startup Range Networks, the company that developed the open-source software powering the network, has much bigger plans for the technology. It wants to adapt the transceiver to use unlicensed spectrum for small-scale cellular networks all over the world without the need to depend on the generosity of incumbent telecom providers or government regulators.

The company’s new CEO sees white space spectrum as a huge opportunity to create private mobile networks in unlicensed spectrum. It could dramatically change the face of mobile communications for billions of people in developing countries around the world, and it could potentially have a similar impact here in the US—if it doesn’t get squashed by governments and phone companies in the process.

Opening the mobile stack

OpenBTS is a Unix-based software package that connects to a software-defined radio. On the radio side, it uses the GSM air interface used globally by 2G and 2.5G cellular networks, which makes it compatible with most 2G and 3G handsets. On the backend, it uses a Session Initiation Protocol (SIP) “soft-switch” or a software-based private branch exchange (PBX) server to route calls, so it can be integrated with VoIP phone systems.

OpenBTS is “not just open source mobile, but open mobile,” said new Range Networks CEO Edward Kozel. Kozel joined the company on January 28, and he knows a thing or two about networking. He was chief technology officer and a senior vice president of business development during his 12-year tenure at Cisco, and he arrived at Range after serving as Deutsche Telekom’s chief technology and innovation officer. Kozel sees OpenBTS as providing the same opportunity that TCP/IP created in networking.

“With TCP/IP, nobody owned the standard, so you had innovation up and down the stack.” With OpenBTS, he said, “you have the source code and complete access to the stack and can apply it in different ways. Before, you took what AT&T or Verizon had or you didn’t have mobile. Now you can roll your own applications.” The company’s current products are all 2G and 3G GSM, but a 4G LTE product is in the works as well. The airwaves are not quite as open a domain as the Internet—spectrum licensing and a global tangle of government regulations have seen to that. But the GSM standard itself is like TCP/IP—an open standard that can be used to do a lot more than operate large commercial mobile networks.

The fundamentals of GSM are “the underpinnings of an agile frequency management regime,” Kozel said. “In rural areas, a carrier could sublet some of their frequencies to a small local operator using a white space implementation.” One immediate advantage of GSM is simply the number of 2G and 3G handsets already out there that work with it worldwide. And some of that spectrum is opening up or has never been used in parts of the world. “They’ve deregulated GSM spectrum in some European countries—for example, there are some GSM channels in Sweden that don’t require a license,” Kozel said.

For rural environments, where the government allows it and telco providers don’t operate, small mobile providers could be set up using existing phones without much effort. And that’s just what TIER has done in West Papua. But the work done by OpenBTS developers, Heimerl, and his TIER colleagues has also laid the groundwork for a different sort of open mobile. They’re ready to exploit unlicensed spectrum. Kozel said Heimerl’s work is the foundation of a “GSM white space implementation,” which evolves the current GSM specification to allow for operation in frequency ranges not normally reserved for mobile phones.

The downside of operating a white space mobile network is that while software-defined radios like those used in Range’s systems allow for base stations to be set up in a wide range of frequencies, no existing handsets will work in the zone. “You have to have the wherewithal to do your own handsets,” Kozel said. “But it can be done. We’ve worked with customers who haven’t been able to get spectrum and went to China to find people who would make handsets in the industrial, scientific, and medical (ISM) band—unregistered spectrum.” The price for the custom chips required for such phones is now low enough that for “a modest markup, you can now get custom frequency handsets.”

Call of the jungle

There was no need for custom handsets in Papua. The TIER project is the first step in an attempt to create a small-scale sustainable community phone network in a place where there isn’t even reliable electricity. It’s already surpassed that goal. “This network is well beyond sustainable and into profitable,” Heimerl told Ars during a meeting at Range’s offices. Within days of being launched in February 2013, TIER had over 100 customers and was up over 500 within six months. “We sell a SIM card every one or two days in the village now,” he said.

The profit helps pay for the operation of the school, and it has created a new source of income for local merchants. The network has become a local currency itself thanks to a phone credit swap program developed by the TIER team. All this has happened without any help from Indonesia’s national telecommunications company. In fact, the phone numbers assigned to the phones in the village so that they can send SMS messages to the outside world are Swedish.

The TIER team was introduced to Range Networks’ technology about three years ago when Range founders David Burgess and Harvind Samra came to speak at UC-Berkeley about their technology. Burgess has since moved on to another company, called Legba, and he continues to build on top of the OpenBTS work in other ways. Legba is based in Romania, and it targets urban mobile networks that need to support 2.5G GSM and 4G LTE devices over the same infrastructure. The company is developing an OpenBTS “distro” called YateBTS that incorporates code from Null Team SRL’s YATE, an open-source SIP routing server used globally to plug in to VoIP network providers (including Google Voice). The system could help telcos deal with the dilemma of how to consolidate their networks while still supporting GSM and GPRS services.


Tor develops its own anonymous IM tool to hide chat from spying eyes
Service will force instant message traffic over Tor’s network to evade surveillance.
By Sean Gallagher
Feb 28 2014

The Tor Foundation is moving forward with a plan to provide its own instant messaging service. Called the Tor Instant Messaging Bundle, the tool will allow people to communicate in real time while preserving anonymity by using chat servers concealed within Tor’s hidden network.

In planning since last July—as news of the National Security Agency’s broad surveillance of instant messaging traffic emerged—the Tor Instant Messaging Bundle (TIMB) should be available in experimental builds by the end of March, based on a roadmap published in conjunction with the Tor Project’s Winter Dev meeting in Iceland.

TIMB will connect to instant messaging servers configured as Tor “hidden services” as well as to commercial IM services on the open Internet.

The effort, which is funded by an anonymous donor organization, was originally called Attentive Otter. To ensure the anonymity of the user, TIMB will force all instant messaging traffic through the Tor network, regardless of whether it’s aimed at a server on the Tor network or not. TIMB will be based on Instantbird, an open source instant messaging tool which is itself based on Mozilla’s XULRunner cross-platform runtime environment.
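Mechanically, “forcing all traffic through Tor” means the client never opens a direct socket: it speaks SOCKS5 (RFC 1928) to Tor’s local proxy, conventionally on port 9050, and hands over the destination hostname unresolved so that even DNS lookups happen inside Tor. A minimal sketch of the request bytes a client would send; the hostname and port here are invented placeholders.

```python
# Sketch of the client side of routing a connection through Tor:
# the SOCKS5 greeting and CONNECT request (RFC 1928). Using the
# domain-name address type (ATYP 0x03) instead of a pre-resolved IP
# keeps DNS resolution inside the Tor network as well.
import struct

def socks5_greeting() -> bytes:
    # VER=5, NMETHODS=1, METHODS=[0x00] (no authentication)
    return bytes([0x05, 0x01, 0x00])

def socks5_connect(host: str, port: int) -> bytes:
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name),
    # then length-prefixed hostname and big-endian port
    name = host.encode("ascii")
    return (bytes([0x05, 0x01, 0x00, 0x03, len(name)])
            + name + struct.pack(">H", port))

req = socks5_connect("example.onion", 5222)
print(req.hex())
```

A hidden-service address like the placeholder above is meaningless to ordinary DNS, which is exactly why hostname-based SOCKS addressing is the only leak-free way to reach it.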

Instantbird was chosen after the TIMB team decided against using Pidgin or libpurple, the GPL open-source instant messaging library used by Pidgin and Adium, mostly because of the amount of effort that would have been required to audit and maintain the library, and also because of some concerns about how seriously Pidgin’s developers took security issues. The TIMB project will remove libpurple from Instantbird, a task the Mozilla and Instantbird teams were already working toward as they move the software to a pure JavaScript implementation.

The first experimental release of TIMB won’t include Off-the-Record (OTR) messaging capability. OTR mode encrypts traffic further and uses an exchange of digital signatures to verify the identity of each party. But the signatures can’t be checked by anyone outside the instant messaging session and can’t be used to prove identity outside the session. The Tor team is hoping to develop OTR components for Instantbird and get them merged into future versions of the main Instantbird code line.
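The deniability property described above rests on authenticating messages with MACs keyed by a shared secret rather than with long-lived signatures: because both parties hold the same MAC key, a logged transcript proves nothing to an outsider, since either side could have forged any message in it. A minimal sketch of that property (the shared key and messages here are invented for illustration; real OTR derives short-lived per-message keys from a Diffie-Hellman exchange):

```python
import hashlib
import hmac

# In OTR-style messaging, both parties derive the SAME MAC key from their
# key exchange; neither holds an authentication key the other lacks.
shared_mac_key = b"demo-shared-key"  # illustration only

def tag(message: bytes) -> bytes:
    """Authenticate a message under the shared key."""
    return hmac.new(shared_mac_key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    """Check a message tag in constant time."""
    return hmac.compare_digest(tag(message), mac)

# Alice sends a message; Bob can verify it came from a holder of the key...
msg = b"meet at noon"
assert verify(msg, tag(msg))

# ...but Bob can mint an equally valid tag for ANY message himself, so a
# transcript cannot prove to a third party that Alice wrote anything.
forged = b"I never said that"
assert verify(forged, tag(forged))
```

This is the trade-off the article alludes to: authentication holds within the session, but the same mechanism makes the transcript worthless as evidence outside it.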

Peeping Webcam? With NSA Help, British Spy Agency Intercepted Millions of Yahoo Chat Images

Peeping Webcam? With NSA Help, British Spy Agency Intercepted Millions of Yahoo Chat Images
Feb 28 2014

The latest top-secret documents leaked by Edward Snowden reveal the National Security Agency and its British counterpart, the Government Communications Headquarters (GCHQ), may have peered into the lives of millions of Internet users who were not suspected of wrongdoing. The surveillance program codenamed “Optic Nerve” compiled still images of Yahoo webcam chats in bulk and stored them in the GCHQ’s databases with help from the NSA. In one six-month period in 2008 alone, the agency reportedly amassed webcam images from more than 1.8 million Yahoo user accounts worldwide. According to the documents, between 3 and 11 percent of the Yahoo webcam images contained what the GCHQ called “undesirable nudity.” The program was reportedly also used for experiments in “automated facial recognition” as well as to monitor terrorism suspects. We speak with James Ball, one of the reporters who broke the story. He is the special projects editor for Guardian US.


James Ball, the special projects editor for Guardian US. He recently co-wrote an article with Spencer Ackerman called “Yahoo Webcam Images from Millions of Users Intercepted by GCHQ.”

Congress to FCC: Investigate sneaky phone fees

Congress to FCC: Investigate sneaky phone fees
By Kate Tummarello
The Hill
Feb 27 2014

Members of Congress are pushing the Federal Communications Commission to look into phone companies that add unexpected fees to customers’ monthly bills.

In a letter to FCC Chairman Tom Wheeler, Rep. Anna Eshoo (D-Calif.), ranking member of the House Commerce subcommittee on communications, and Reps. Howard Coble (R-N.C.), Mike Doyle (D-Pa.) and Ben Ray Luján (D-N.M.) asked the agency to investigate unexpected “below-the-line” charges and taxes.


Apple continues to hide its rotten security from consumers

[Note:  This item comes from reader Geoff Goodfellow.  DLH]

From: the keyboard of geoff goodfellow <>
Subject: Apple continues to hide its rotten security from consumers
Date: February 27, 2014 at 18:49:21 PST
To: Dewayne Hendricks <>, Dave Farber <>

Apple continues to hide its rotten security from consumers
By Nathaniel Mott 
Feb 27 2014

Apple continues to obfuscate security concerns by releasing vague statements about critical updates to both its mobile and desktop operating systems.

The company issued an update to its mobile operating system on Friday to fix a problem that might have allowed attackers to intercept data sent from its smartphones and tablets to the Web. The vulnerability was caused by a faulty implementation of a decades-old security standard that went unfixed for roughly 18 months. Instead of explaining the issue in terms consumers might understand, Apple used arcane language sure to confound its customers.

Johns Hopkins University cryptography professor Matthew Green had no difficulty explaining the extent of the problem. “It’s as bad as you could imagine,” he told Reuters. “That’s all I can say.” That’s much clearer than Apple’s explanation, which said that “an attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS.”
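The flaw was widely reported to be a duplicated “goto fail;” statement in Apple’s SecureTransport code (CVE-2014-1266), which caused the final check of the server’s signature over the TLS key exchange to be skipped, so any key, including an attacker’s, “verified.” A toy sketch of why a skipped signature check enables exactly the interception Apple describes (the keys, messages, and MAC-based stand-in for a signature are invented for this demo; this is not Apple’s code):

```python
import hashlib
import hmac

# Stand-in for a server signature: a MAC under a key only the real server holds.
SERVER_SIGNING_KEY = b"real-server-only"  # invented for the demo

def sign(key_exchange: bytes) -> bytes:
    return hmac.new(SERVER_SIGNING_KEY, key_exchange, hashlib.sha256).digest()

def correct_client(key_exchange: bytes, signature: bytes) -> bool:
    # A careful client checks the signature before trusting the key exchange.
    return hmac.compare_digest(sign(key_exchange), signature)

def buggy_client(key_exchange: bytes, signature: bytes) -> bool:
    # Mimics the reported flaw's EFFECT: an unconditional jump meant the
    # final signature check never ran, so every key exchange "verified".
    return True

# A man-in-the-middle substitutes its own key exchange with a garbage signature.
mitm_key_exchange = b"attacker-dh-public-key"
mitm_signature = b"\x00" * 32

assert buggy_client(mitm_key_exchange, mitm_signature)        # accepted
assert not correct_client(mitm_key_exchange, mitm_signature)  # rejected
```

Once the attacker’s key exchange is accepted, the “protected” session is keyed to the attacker, which is why an attacker “with a privileged network position may capture or modify data” at will.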

It gets worse. The update to Apple’s desktop computers and laptops doesn’t mention the vulnerability at the top of its list of changes made to the operating system. Users must scroll past fixes to non-critical problems to learn that the update “provides a fix for SSL connection verification.” (Slate notes that most consumers probably won’t even bother to read the entire list of changes and simply install the latest update without question.)

Considering the extent to which this vulnerability has been identified as a prime concern for consumers, Apple’s unwillingness to clearly explain the issue to its customers is irresponsible. Its attempt to assuage concerns among its business customers by releasing a document describing its mobile operating system’s security measures, which must protect financial information, digital communications, and fingerprints from prying eyes, is laughable. It would be difficult to trust future claims that its products are safer than competitive offerings.

Reactions from around the Web

The Globe and Mail explains why consumers should be concerned about the security of essentially any information they’ve sent through Apple’s products, even though attacks based on vulnerabilities like this one are rare:


Having been burned before, Google won’t bring Fiber to San Francisco

[Note:  This item comes from friend Tim Pozar.  DLH]

From: Tim Pozar <>
Subject: Having been burned before, Google won’t bring Fiber to San Francisco
Date: February 27, 2014 at 12:58:03 PST
To: Dewayne Hendricks <>


The article doesn’t really go into detail on why the Google/Earthlink plan was a bad idea: things like old technology (802.11a/b), everything backhauled over 5.8 GHz, and not meeting the design goals.

The design they proposed for SF was deployed in Philly and look how that turned out.


Having been burned before, Google won’t bring Fiber to San Francisco
Feb 25 2014

When Google announced its list of 34 prospective new Google Fiber cities last Wednesday, some were baffled that the company had overlooked San Francisco.

This is, after all, supposed to be America’s technopolis: its capital of the future. And yet, in comparison to some other cities in the country — and the world — San Francisco’s Internet speeds remain embarrassingly slow. Residents of Kansas City, Mo., Chattanooga, Tenn., or Lafayette, La., all have access to much faster connections. As do those in Riga, Latvia, and Prague in the Czech Republic. More than 300 municipalities in the US are making headway toward faster community broadband, as is much of Europe and developed Asia. But not San Francisco.

The fact that four South Bay cities are among the 34 announced on Wednesday suggests that there’s something about San Francisco specifically, not California generally, that’s keeping Fiber away. And there is: Google knows San Francisco too well — and it’s been burned here before.

From 2004 to 2007, San Francisco jumped on board an ambitious proposal by Google and Earthlink to bring free WiFi to the entire city, at no cost to taxpayers. The plan was announced, and championed, by then-mayor Gavin Newsom, and gathered deep and broad support. In that pre-iPhone age, the move would have put San Francisco way ahead of the curve, building a mobile broadband network that would have been the first of its kind in a major US city.

And then the project fell apart.

“That was a long and drawn-out fight,” says Brian Purchia, a new media analyst who worked as a tech spokesman for Newsom at the time.

The proposal stumbled and then drowned in the city planning process. Chris Sacca, who led the project for Google, publicly vented his frustrations over working with San Francisco officials. What should have taken months took years as the Board of Supervisors under Aaron Peskin dragged its feet. NIMBYism reared its head, with some residents opposing the installation of boxes on San Francisco’s historic, pristine… sidewalks.

Denouement: Google went on to develop a network infrastructure project completely in-house, without the liability of public partnerships, and sought out eager, less entangling locales. Almost a decade after Newsom’s first announcement, San Francisco’s government finally offered free mobile WiFi in public parks (supported by a $600,000 gift from Google) and along Market St.

“The scars from that are deep in San Francisco,” says Craig Settles, an industry analyst and host of Gigabit Nation, a radio program devoted to covering broadband. “The city government and the people of San Francisco felt burned and betrayed.”

Today, San Francisco Mayor Ed Lee tends to deflect the fiber conversation by trumpeting the Market St. Wi-Fi rollout, which went live in December.  For all his eloquence about 21st-century civic leadership, the mayor is keen to gloss over discussions about infrastructure that matters. The trauma of the failed Google/Earthlink deal still haunts the city’s neighborhoods, and echoes through City Hall — and that’s before you factor in the continuing protests against Google buses. Just imagine what would happen if Google started digging up the streets.

Of course, it’s possible for fiber to be laid in US cities without help from a giant corporation like Google. But it takes a lot of public engagement, and a lot of political action. Settles says the energy required from politicians and voters to get community broadband going is like running an election campaign. “You need to generate a lot of noise, what the NSA calls chatter – public pronouncements of vision, neighborhood canvassing, calls, emails, tweets… and there’s an emotional aspect, you need to cast yourself as a liberator, saving people from the occupation by the incumbents,” says Settles.

While there are glimmers of hope for ultra high-speed internet coming to San Francisco, so far they are haphazard, unconnected and without the groundswell needed to really score big. Sonic is making some headway on a pilot project to connect homes in the Outer Sunset to fiber providing 1 Gbps service, and the company, which has a business fiber service underway in Sonoma County, represents a hungry, capable entrant into the ISP arena. Last month, Sonic CEO Dane Jasper posted enigmatically on the company’s San Francisco forum: “we expect a number of fiber updates in the first half of 2014.”

Purchia says that despite the radio silence from City Hall, there is a lot of talk within the mayor’s office, the city Department of Technology and the Board of Supervisors about tackling fiber.

“It’s been worked on and investigated for a number of months, by lots of people,” he says.

Board of Supervisors President David Chiu introduced a plan last year for the city to lay fiber optic cable over the course of any unrelated infrastructure projects that required streets to be torn up. That legislation is currently being revised and should be presented to the Board of Supervisors for a vote in coming months. But the “Dig Once” plan comes too late to capitalize on recent substantial digs, and is more a trial balloon than major mobilization.

Judson True, an aide for Mr. Chiu, says that the city is already taking steps towards a more comprehensive project. “Right now, it’s a hot topic, and there is increasing recognition in city government that affordable, high-capacity broadband is crucial for our economy. We’ve fallen behind and need to play catch-up as fast as we can,” True says.

He said that the subject has been discussed in preliminary talks over the city’s two-year budget, which will be finalized this summer, and that Mr. Chiu and others see expanding high-speed broadband as a great investment.

The city already operates 130 miles of “dark fiber” in San Francisco, which is currently used for municipal buildings, schools and all of San Francisco’s housing projects. The city also leases use of that network to hospitals and clinics. True says that there is real revenue potential in increasing the number of those leases.

That said, this has all been proceeding quietly and mostly behind closed doors so far. There has not been a request for a specific proposal, no public statement of purpose, no needs assessment process or town hall meetings – things that normally precede the big push needed to get a municipal broadband project going. San Francisco voters too have been strangely silent. This is in contrast to cities like Sacramento, where a group called SacHackerLab has pushed the conversation forward through grassroots advocacy. In San Leandro and Hayward, advocacy groups like the East Bay Broadband Consortium have been holding community meetings to highlight the need for access.  Despite San Francisco’s teeming ecology of coders, whose work is undermined and emotions destabilized daily by subpar service, there is little visible momentum here.