Building Better Bitcoins (Of course this author also wrote “Why negotiations with Iran are doomed to fail …”)

[Note:  This item comes from reader Randall Head.  DLH]

From: Randall Webmail <>
Subject: Building Better Bitcoins (Of course this author also wrote “Why negotiations with Iran are doomed to fail …”)
Date: November 30, 2013 at 6:43:03 PM PST
To: Dewayne Hendricks <>

Building Better Bitcoins
By Stephen L. Carter
Nov 29 2013

It has been a rough couple of months for bitcoins.

In October, the U.S. government shut down the Silk Road website, where bitcoins were being used to purchase illegal drugs. Then in early November a researcher at Cornell University published a paper asserting that the virtual currency is broken — that is, that the system of difficult algorithms that one must solve to obtain bitcoins might be successfully exploited by a group of sufficiently clever and selfish bitcoin miners.

Next came the U.S. Senate hearings aimed at discovering whether these so-called crypto-currencies are a tool for drug dealers and money launderers to do business beyond official scrutiny. And now, just in time for the holiday shopping season, comes the disclosure that the Bitcoin Internet Payment System — or BIPS — has been hacked. Customers lost the equivalent of about $1 million in bitcoins.

Hacking BIPS isn’t like stealing from a virtual bank run by an Australian teenager. According to its website, BIPS is “the premier bitcoin service provider,” helping to facilitate bitcoin use even in the brick-and-mortar world.

Bitcoins are mined by solving algorithms that were posted on the Internet five years ago by a programmer or group of programmers using the pseudonym Satoshi Nakamoto. Successful miners earn a certain number of bitcoins. The rate of bitcoin distribution automatically slows over time, so that the currency cannot be inflated. In fact, the total number of bitcoins is capped at 21 million.
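The "difficult algorithms" behind mining are, at their core, a hash-guessing race known as proof-of-work. Here is a minimal Python sketch of the idea; note this is a deliberate simplification, not the real Bitcoin protocol, which double-hashes an 80-byte block header with SHA-256 and encodes difficulty differently:

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(block_data + nonce) falls below a
    target value -- a toy version of Bitcoin's proof-of-work puzzle."""
    target = 1 << (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce  # this nonce "solves" the block
        nonce += 1

nonce = mine("example block", 16)  # ~65,000 guesses expected at 16 bits
```

Each additional bit of difficulty doubles the expected number of guesses, which is how the network can keep slowing issuance as more mining power joins.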

Valuable Asset

Bitcoins may or may not be money — economists differ on whether they are in sufficiently widespread use to constitute a medium of exchange — but they are plainly a valuable asset. Like other assets, they will have value as long as people want them.

And right now, bitcoins are hot. Very hot. The bitcoin market has shrugged off this bad news, recently hitting a high of more than $1,000 — a considerable feat given that less than a year ago, a single bitcoin was worth about $13. One of the reasons digital thieves risk breaking into virtual vaults is that the value of bitcoins keeps rising.

Certainly bitcoins have captured the imagination of the news media. Consider the excitement over the first bitcoin ATM, installed in a Vancouver coffee shop, or the front-page Wall Street Journal story about a couple who traveled the world using only bitcoins.

At the same time, there have been lurid stories about how easily drug dealers and money launderers can manipulate the anonymity of the bitcoin for their own purposes. Yet the U.S. government’s takedown of Silk Road suggests that hiding in the bitcoin system isn’t as easy as it looks. “It is not in fact anonymous,” says Assistant Attorney General Mythili Raman. “It is not immune from investigation.”

If bitcoins are to have a future, the selling point can’t be anonymity — a characteristic sure to draw increasing government scrutiny. The selling points will have to be security and convenience.

The concepts are not unrelated. A principal knock on bitcoins has been the claim that they are inherently insecure. The principal defense has been that they are as secure as “real” currency. Both can be lost or stolen. And, as bitcoin supporters like to point out, most of the cases of hacking involve individuals who have followed poor password security procedures — the sort of carelessness that would cause you equal trouble with your dollar-denominated online banking account.

The trouble for the crypto-currencies is that being as safe as other forms of currency isn’t enough. Precisely because bitcoins are new and feel peculiar to a world raised on the notion that governments control the supply of money, bitcoins will probably enter widespread use only when their advocates can credibly argue that the virtual currency has advantages other than anonymity over the real thing.


The Fluid Dynamics of Spitting: How Archerfish Use Physics To Hunt With Their Spit

The Fluid Dynamics of Spitting: How Archerfish Use Physics To Hunt With Their Spit

Archerfish are incredible creatures. They lurk under the surface of the water in rivers and seas, waiting for an insect to land on the plants above. Then, suddenly, and with unbelievable accuracy, they squirt out a stream of water that strikes down the insect. The insect falls, and by the time it hits the water, the archerfish is already waiting in place ready to swallow it up. You have to marvel at a creature that excels at what seems like such an improbable hunting strategy – death by water pistol squirt.

Here’s a video by BBC Wildlife that shows the archerfish in action (the first half of the video is about archerfish).

Technically, the term archerfish doesn’t refer to a single species of fish but to a family of seven freshwater fish species that fall under the genus Toxotes. They strike with remarkable accuracy, and just a tenth of a second after the prey is hit, they quickly move to the spot where it will hit the water. Unlike a baseball player, who has to keep his eyes on a fly ball to track it, the archerfish is in place and waiting for the insect to arrive in less than the blink of an eye.

If that isn’t impressive enough, consider this. When these archerfish squirt water, their eyes are underwater. If you’ve spent any time in a swimming pool, you’ll know that light bends when it enters water. A less astute fish might not correct for this bending of light, and would be tricked into thinking that the insect is somewhere it isn’t. But not the archerfish. This little aquatic physicist is able to seamlessly correct for the bending of light. And it isn’t a minor correction – when the perceived angle of the target is 45 degrees, its true angle is off by as much as 25 degrees.
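The correction the fish performs follows directly from Snell's law of refraction, n₁ sin θ₁ = n₂ sin θ₂. A quick back-of-the-envelope check in Python, assuming standard refractive indices of 1.00 for air and 1.33 for water, reproduces the 25-degree figure:

```python
import math

N_AIR, N_WATER = 1.00, 1.33  # standard refractive indices

def true_angle(perceived_deg: float) -> float:
    """Given the angle (from vertical) at which an underwater observer
    perceives a target above the surface, return the target's true angle,
    per Snell's law: N_AIR * sin(theta_air) = N_WATER * sin(theta_water)."""
    theta_water = math.radians(perceived_deg)
    theta_air = math.asin(N_WATER / N_AIR * math.sin(theta_water))
    return math.degrees(theta_air)

print(true_angle(45.0) - 45.0)  # roughly 25 degrees of correction
```

At a perceived 45 degrees the true angle works out to about 70 degrees, so the fish really is compensating for an error of roughly 25 degrees.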


The Vaccination Effect: 100 Million Cases of Contagious Disease Prevented

The Vaccination Effect: 100 Million Cases of Contagious Disease Prevented
Nov 27 2013

Vaccination programs for children have prevented more than 100 million cases of serious contagious disease in the United States since 1924, according to a new study published in The New England Journal of Medicine.

The research, led by scientists at the University of Pittsburgh’s graduate school of public health, analyzed public health reports going back to the 19th century. The reports covered 56 diseases, but the article in the journal focused on seven: polio, measles, rubella, mumps, hepatitis A, diphtheria and pertussis, or whooping cough.

Researchers analyzed disease reports before and after the times when vaccines became commercially available. Put simply, the estimates for prevented cases came from the falloff in disease reports after vaccines were licensed and widely available. The researchers projected the number of cases that would have occurred had the pre-vaccination patterns continued as the nation’s population increased.
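Put in code, the projection logic amounts to a simple counterfactual subtraction: project the pre-vaccine per-capita rate forward over the growing population, then subtract the cases actually reported. The numbers below are hypothetical, chosen only to illustrate the method the study describes, not taken from it:

```python
def cases_prevented(pre_vaccine_rate: float,
                    populations: list[float],
                    observed_cases: list[float]) -> float:
    """Estimate prevented cases as (projected - observed), where projected
    cases assume the pre-vaccine per-capita rate had simply continued
    as the population grew."""
    projected = [pre_vaccine_rate * pop for pop in populations]
    return sum(proj - obs for proj, obs in zip(projected, observed_cases))

# Hypothetical numbers: a disease at 0.002 cases per person pre-vaccine,
# tracked over three post-licensure years of a growing population.
populations = [180e6, 190e6, 200e6]
observed = [90_000, 30_000, 5_000]
print(cases_prevented(0.002, populations, observed))
```

With these made-up inputs the toy calculation yields about 1.015 million prevented cases; the study performed the same kind of subtraction, disease by disease, across decades of reports.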

The journal article is one example of the kind of analysis that can be done when enormous data sets are built and mined. The project, which started in 2009, required assembling 88 million reports of individual cases of disease, much of it from the weekly morbidity reports in the library of the Centers for Disease Control and Prevention. Then the reports had to be converted to digital formats.

Most of the data entry — 200 million keystrokes — was done by Digital Divide Data, a social enterprise that provides jobs and technology training to young people in Cambodia, Laos and Kenya.

Still, data entry was just a start. The information was put into spreadsheets for making tables, but was later sorted and standardized so it could be searched, manipulated and queried on the project’s website.

“Collecting all this data is one thing, but making the data computable is where the big payoff should be,” said Dr. Irene Eckstrand, a program director and science officer for the N.I.H.’s Models of Infectious Disease Agent Study.

The University of Pittsburgh researchers also looked at death rates, but decided against including an estimate in the journal article, largely because death certificate data became more reliable and consistent only in the 1960s, the researchers said.


For 20 Years the Nuclear Launch Code at US Minuteman Silos Was 00000000

[Note:  This item comes from friend Bob Frankston.  DLH]

For 20 Years the Nuclear Launch Code at US Minuteman Silos Was 00000000
By Karl Smallwood
Nov 29 2013

Today I found out that during the height of the Cold War, the US military put such an emphasis on rapid response to an attack on American soil that, to minimize any foreseeable delay in launching a nuclear missile, it intentionally set the launch code at every silo in the US to 8 zeroes for nearly two decades.

We guess the first thing we need to address is how this even came to be in the first place. Well, in 1962 JFK signed the National Security Action Memorandum 160, which was supposed to ensure that every nuclear weapon the US had be fitted with a Permissive Action Link (PAL), basically a small device that ensured that the missile could only be launched with the right code and with the right authority.

There was particularly a concern that the nuclear missiles the United States had stationed in other countries, some of which with somewhat unstable leadership, could potentially be seized by those governments and launched. With the PAL system, this became much less of a problem.

Beyond foreign seizure, there was also simply the problem that many U.S. commanders had the ability to launch nukes under their control at any time. Just one commanding officer who wasn’t quite right in the head and World War III begins. As U.S. General Horace M. Wade stated about General Thomas Power:

I used to worry about General Power. I used to worry that General Power was not stable. I used to worry about the fact that he had control over so many weapons and weapon systems and could, under certain conditions, launch the force. Back in the days before we had real positive control [i.e., PAL locks], SAC had the power to do a lot of things, and it was in his hands, and he knew it.

To give you an idea of how secure the PAL system was at this time, bypassing one was once described as being “about as complex as performing a tonsillectomy while entering the patient from the wrong end.” This system was supposed to be essentially hot-wire proof, making sure only people with the correct codes could activate the nuclear weapons and launch the missiles.

However, though the devices were supposed to be fitted on every nuclear missile after JFK issued his memorandum, the military continually dragged its heels on the matter. In fact, it was noted that a full 20 years after JFK had ordered PALs be fitted to every nuclear device, half of the missiles in Europe were still protected by simple mechanical locks. Most of those that did have the new system in place weren’t even activated until 1977.

Those in the U.S. that had been fitted with the devices, such as the ones in the Minuteman silos, were installed under the close scrutiny of Robert McNamara, JFK’s Secretary of Defense. However, the Strategic Air Command greatly resented McNamara’s presence, and almost as soon as he left, the code to launch the missiles, all 50 of them, was set to 00000000.

Oh, and in case you actually did forget the code, it was handily written down on a checklist handed out to the soldiers. As Dr. Bruce G. Blair, who was once a Minuteman launch officer, stated:

Our launch checklist in fact instructed us, the firing crew, to double-check the locking panel in our underground launch bunker to ensure that no digits other than zero had been inadvertently dialed into the panel.

This ensured that there was no need to wait for Presidential confirmation that would have just wasted valuable Russian nuking time. To be fair, there was also the possibility that command centers or communication lines could be wiped out, so having a bunch of nuclear missiles sitting around un-launchable because nobody had the code was seen as a greater risk by the military brass than a few soldiers simply deciding to launch the missiles without proper authorization.


How Steve Jobs Made the iPad Succeed When All Other Tablets Failed

How Steve Jobs Made the iPad Succeed When All Other Tablets Failed

Steve Jobs’s solution to Google’s Android-everywhere strategy was simple and audacious: he unveiled the iPad. Many knew Jobs was going to unveil a tablet despite what he had told Walt Mossberg of The Wall Street Journal seven years before. “It turns out people want keyboards . . . We look at the tablet and we think it is going to fail,” Jobs had said.

But he’d clearly reconsidered this. If Google was going to try to win the mobile-platform war on breadth, Jobs was going to win it on depth. All then-Android chief Andy Rubin had to do to expand Android was to get it on more and more machines; like Bill Gates with Windows, Rubin didn’t care which products were hits and which were not as long as in the aggregate the Android platform was growing. For Jobs to make Apple’s strategy work — to grow the iOS platform vertically — he needed to hit it out of the park every time.

When executives inside and outside Apple wondered if Jobs was making the same mistake against Android that he made against Microsoft — if he was keeping his platform too rigid — it seemed that, if anything, Jobs was increasing its rigidity. Starting in 2010, Jobs had more and more Apple products assembled with special screws to make it difficult for anyone with typical screwdriver heads to open the cases of his machines. (It seemed like a small thing, but to those inside Silicon Valley its symbolism was large: One of Android’s pitches to consumers was the flexibility of the software and the devices.)

Maybe more people in the world would own Android phones than iPhones. But the people who owned iPhones would also own iPads, iPod Touches, and a slew of other Apple products that all ran the same software, that all connected to the same online store, and that all generated much bigger profits for everyone involved. Only someone with the self-confidence of Jobs would have the guts to set such a high bar.

Is There Room for a Third Category?

Minutes after Jobs unveiled the iPad on January 27, 2010, it appeared as if he’d cleared the bar he’d set for Apple by a mile. He laid out his new invention for the world more slowly than usual, as if he were helping his audience complete a vast jigsaw puzzle. He put up a slide with a picture of an iPhone and a Macbook laptop, put a question mark between them, and asked a simple question: “Is there room for a third category of device in the middle?”

Jobs then raised what had become the usual answer to this question: “Some people have thought that’s a netbook. The problem is that a netbook isn’t better at anything. They’re slow. They have low-quality displays. And they run clunky, old PC software [Windows]. They’re not better than a laptop at anything. They’re just cheaper.”

The foundation of Jobs’s iPad pitch was counterintuitive. Most people don’t buy a laptop for the tasks they were originally designed for — heavy office work, such as writing, crafting presentations, or financial analysis with spreadsheets. They use it mostly to communicate via email, text, Twitter, LinkedIn, and Facebook; to browse the Internet; and to consume media such as books, movies, TV shows, music, photos, games, and videos. Jobs said that you could do all this on an iPhone, but the screen was too small to make it comfortable. You could also do it all on a laptop, but the keyboard and the trackpad made it too bulky, and the short battery life often left you tethered to a power outlet.

What the world needed was a device in the middle that combined the best of both — something that was “more intimate than a laptop, and so much more capable than a smartphone,” he said.

Only after more buildup did Jobs say what the world was waiting for: “We think we have the answer.” A picture of the iPad dropped nicely into place between the iPhone and the Macbook on the slide.

In a Long Line of Tablets, How Did the iPad Succeed Where Others Failed?

It wasn’t the iPad’s looks that had everyone rapt. Many wondered if they were watching the world’s greatest entrepreneur make a huge mistake.

The tablet computer was the most discredited category of consumer electronics in the world. Entrepreneurs had been trying to build tablet computers since before the invention of the PC. They had tried so many times that the conventional wisdom was that it couldn’t be done.

Alan Kay of Xerox PARC — who is to certain geeks what Neil Armstrong is to the space program — drew up plans for the Dynabook in 1968 and laid out those plans in a 1972 paper titled “A Personal Computer for Children of All Ages.” Apple prototyped something it called the Bashful in 1983 but never released it. The first tablet to get any consumer traction came from Jeff Hawkins, the entrepreneur who would later be behind the PalmPilot in the late 1990s: the GRiDPad from Tandy, released in 1989. GO Corp. took the next whack at tablet computing with the EO in 1993. (GO Corp.’s early employees included Omid Kordestani, Google’s first business executive, and Bill Campbell, Apple’s vice president of marketing in the 1980s.)

Apple unveiled the Newton in 1994. This groundbreaking PDA turned out to be Silicon Valley’s Edsel: a one-word explanation for why tablets could never sell. It also became emblematic of Apple’s Jobs-less era, when the company was run by a series of increasingly unsuccessful executives until it nearly went into bankruptcy; it was, fittingly, one of the first projects Jobs killed when he returned in 1997. By then, if you wanted computing power that was portable, you could buy a laptop. Everything else involved too much compromise.

Indeed, the PalmPilot and devices like it became so popular for the next half decade because they didn’t try to do too much.


How do dogs think?

How do dogs think?
A neuroscientist ponders the canine brain
Nov 1 2013

Excerpted from “How Dogs Love Us”

To truly know what a dog is thinking, you would have to be a dog.

The question of what a dog is thinking is actually an old metaphysical debate, which has its origins in Descartes’s famous saying cogito ergo sum—“I think, therefore I am.” Our entire human experience exists solely inside our heads. Photons may strike our retinas, but it is only through the activity of our brains that we have the subjective experience of seeing a rainbow or the sublime beauty of a sunset over the ocean. Does a dog see those things? Of course. Do they experience them the same way? Absolutely not.

When Lyra [the author’s dog] was jumping and barking at the woman wrapped in purple, with a red dot on her forehead, Lyra experienced the same things at a primitive level that I did. Purple. Red. Screaming. Those are the sensory primitives. They originate in photons bouncing off dyes, pressure waves in the air around the woman’s vocal cords. But my brain interprets those events one way and Lyra’s brain another.

Observing Lyra’s behavior doesn’t tell us what she was thinking. From past experience, I knew that Lyra barked and jumped in response to different things. She barks when we’re eating. In that context, a natural assumption would be that she wants food too. But she also barks after dropping a tennis ball at my feet. I had no comparable frame of reference for what had attracted her to the screaming woman that night at the party.

The question of what it is like to be a dog could be approached from two very different perspectives. The hard approach asks the question: What is it like for a dog to be a dog? If we could answer that, then all the questions about why a dog behaves the way it does would become clear. The problem with being a dog, though, is that we would have no language to describe what we felt. The best we can do is ask the related, but substantially easier question: What would it be like for us to be a dog?

By imagining ourselves in the skin of another animal, we can recast questions of behavior into their human equivalent. The question of why Lyra harassed the party guest becomes: If I were Lyra, why would I bark at that woman? Framed that way, we can form all sorts of speculations for dog behavior.

Many authors have written about the dog mind, and some have even attempted to answer the types of questions I have posed. I will not review this vast literature. I will, however, point out that much of it is based on two potentially flawed assumptions—both stemming from the paradox of getting into a dog’s mind without actually being a dog.

The first flaw comes from the human tendency to anthropomorphize, or project our own thoughts and feelings onto things that aren’t ourselves. We can’t help it. Our brains are hardwired to project our thoughts onto other people. This is called mentalizing, and it is critical for human social interactions. People are able to interact with each other only because they are constantly guessing what other people are thinking. The brevity of text messages, for example, and the fact that we are able to communicate with less than 140 characters at a time work because people maintain mental models of each other. The actual linguistic content of most text exchanges is minimal. And because humans have common elements of culture, we tend to react in fairly similar ways. For example, if I watch a movie that makes me sad, I can use my own reaction to intuit that the people sitting around me are feeling the same way. I could even start a conversation with a complete stranger based on our shared experience, using my own thoughts as a starting point. But dogs are not the same as humans, and they certainly don’t have a shared culture like we do. There is no avoiding the fact that when we observe dog behavior, we view it through the filter of the human mind. Unfortunately, much of dog literature says more about the human writer than the dog.

The second flaw is the reliance on wolf behavior to interpret dog behavior, termed lupomorphism. While it is true that dogs and wolves share a common ancestor, that does not mean that dogs are descended from wolves. This is an important distinction. The evolutionary trajectories of wolves and dogs diverged when some of the “wolf-dogs” started hanging out with proto-humans. Those that stuck around became dogs, and those that stayed away became modern wolves. Modern wolves behave differently from dogs, and they have very different social structures. Their brains are different too. Interpreting dog behavior through the lens of wolf behavior is even worse than anthropomorphizing: it’s a human anthropomorphizing wolf behavior and using that flawed impression as an analogy for dog behavior.


Silicon Valley Isn’t a Meritocracy. And It’s Dangerous to Hero-Worship Entrepreneurs

Silicon Valley Isn’t a Meritocracy. And It’s Dangerous to Hero-Worship Entrepreneurs

In a cultural context where idealists have linked social media to democracy, egalitarianism, and participation, the tech scene in Silicon Valley considers itself to be exceptional. Supporters speak glowingly of a singularly meritocratic environment where innovative entrepreneurs disrupt fusty old industries and facilitate sweeping social change.

But if the tech scene is really a meritocracy, why are so many of its key players, from Mark Zuckerberg to Steve Jobs, white men? If entrepreneurs are born, not made, why are there so many programs attempting to create entrepreneurs? If tech is truly game-changing, why are old-fashioned capitalism and the commodification of personal information never truly questioned?

The myths of authenticity, meritocracy, and entrepreneurialism do have some basis in fact. But they are powerful because they reinforce ideals of the tech scene that shore up its power structures and privileges. Believing that the tech scene is a meritocracy implies that those who obtain great wealth deserve it, and that those who don’t succeed do not. The undue emphasis placed on entrepreneurship, combined with a limited view of who “counts” as an entrepreneur, function to exclude entire categories of people from ascending to the upper echelon of the industry. And the ideal of authenticity privileges a particular type of self-presentation that encourages people to strategically apply business logics to the way they see themselves and others.

Taken as a whole, these themes of authenticity, meritocracy, and entrepreneurialism reinforce both a closed system of privilege and one centered almost entirely around the core beliefs of neoliberal capitalism. This does not make technology intrinsically better or worse than any other American business — I’d certainly rather socialize with tech people than bankers. But it does reveal the threadbare nature of digital exceptionalism.

The Myth of Meritocracy

The myth is that anyone can come from anywhere and achieve great success in Silicon Valley if they are skilled. It holds that those who “make it” do so due to their excellent ideas and ability, because the tech scene is a meritocracy where what you do, not who you are, matters.

There is some truth to this statement. To a certain extent, there is a lower barrier to entry in tech than in some other industries. Having a famous father or coming from an old money family would not necessarily be an asset as it might in banking (it wouldn’t necessarily hurt, either). And certainly the highest status in the tech scene comes from one’s job rather than family name, although wealth factors considerably into status.

I frequently heard variations of the saying “everyone wants to take the pretty girl to the dance,” which refers to the tendency venture capitalists have to cluster around popular deals. (The prevalence of this phrase is very revealing of who is in these meetings.) In reality, everyone wants to be in business with young, white, male entrepreneurs with connections to high-status people, a pedigree from certain companies, and a well-known mentor. Sharon Vosmek, CEO of Astia, a nonprofit that helps fund women entrepreneurs, identified “systematic and hidden biases” in technology funding:

VCs hold clear stereotypes of successful CEOs (they call it pattern recognition, but in other industries they call it profiling or stereotyping.) John Doerr publicly stated that his most successful investments — and the no-brainer pattern for future investments — were in founders who were white, male, under 30, nerds, with no social life who dropped out of Harvard or Stanford.

This formula certainly filters out enormous numbers of people who may be equally skilled.

The myth of meritocracy also ignores the level of privilege that participation in the tech scene involves, as io9 editor Annalee Newitz points out:

Let’s say that most people can have access to computers sometimes but only some people can have access to computers all the time, and then an even smaller group can have access to the net while they’re just out wandering around doing Twitter, right? They’re like, I have my phone and I can say things while I’m walking around where somebody else has to actually go home, to their one computer that they own. So the more that you want to participate in this network of wealth and entrepreneurialism, the more stuff you have to have to participate in it. So there [are] these levels of participation that are enabled by either being wealthier or having the free time to participate.

Certainly, a level of material wealth is necessary to participate in San Francisco tech culture. Very few pointed to the elephant in the room of assumed wealth: “People behave as if we all make kind of the same.” To forge the type of social connections necessary to move into the upper echelons of the tech scene requires being able to take part in group activities, travel to conferences, and work on personal projects. This requires middle- to upper-class wealth, which filters out most people.