The World Passes 400 PPM Threshold. Permanently

The World Passes 400 PPM Threshold. Permanently
By Brian Kahn
Sep 27 2016
http://www.climatecentral.org/news/world-passes-400-ppm-threshold-permanently-20738

In the centuries to come, history books will likely look back on September 2016 as a major milestone for the world’s climate. At a time when atmospheric carbon dioxide is usually at its minimum, the monthly value failed to drop below 400 parts per million.

That all but ensures that 2016 will be the year that carbon dioxide officially passed the symbolic 400 ppm mark, never to return below it in our lifetimes, according to scientists.

Because carbon pollution has been increasing since the start of the Industrial Revolution and has shown no signs of abating, it was more a question of “when” rather than “if” we would cross this threshold. The inevitability doesn’t make it any less significant, though.

September is usually the month when carbon dioxide is at its lowest after a summer of plants growing and sucking it up in the northern hemisphere. As fall wears on, those plants lose their leaves, which in turn decompose, releasing the stored carbon dioxide back into the atmosphere. At Mauna Loa Observatory, the world’s marquee site for monitoring carbon dioxide, there are signs that the process has begun but levels have remained above 400 ppm.

Since the industrial revolution, humans have been altering this process by adding more carbon dioxide to the atmosphere than plants can take up. That’s driven carbon dioxide levels higher and with it, global temperatures, along with a host of other climate change impacts.

“Is it possible that October 2016 will yield a lower monthly value than September and dip below 400 ppm? Almost impossible,” Ralph Keeling, the scientist who runs the Scripps Institution of Oceanography’s carbon dioxide monitoring program, wrote in a blog post. “Brief excursions toward lower values are still possible, but it already seems safe to conclude that we won’t be seeing a monthly value below 400 ppm this year – or ever again for the indefinite future.”

We may get a day or two of reprieve in the next month, similar to August, when Tropical Storm Madeline blew by Hawaii and knocked carbon dioxide below 400 ppm for a day. But otherwise, we’re living in a 400 ppm world. Even if the world stopped emitting carbon dioxide tomorrow, what has already been put in the atmosphere will linger for many decades to come.

“At best (in that scenario), one might expect a balance in the near term and so CO2 levels probably wouldn’t change much — but would start to fall off in a decade or so,” Gavin Schmidt, NASA’s chief climate scientist, said in an email. “In my opinion, we won’t ever see a month below 400 ppm.”

The carbon dioxide we’ve already committed to the atmosphere has warmed the world about 1.8°F since the start of the industrial revolution. This year, in addition to marking the start of our new 400 ppm world, is also set to be the hottest year on record. The planet has edged right up against the 1.5°C (2.7°F) warming threshold, a key metric in last year’s Paris climate agreement.

[snip]

Re: Largest DDoS attack ever delivered by botnet of hijacked IoT devices

[Note: This comment comes from friend David Rosenthal. DLH]

From: “David S. H. Rosenthal” <dshr@abitare.org>
Subject: Re: [Dewayne-Net] Largest DDoS attack ever delivered by botnet of hijacked IoT devices
Date: September 25, 2016 at 1:10:21 PM PDT
To: dewayne@warpspeed.com

> The lesson for enterprises is that the DDoS protections they have in
> place need to be tweaked to handle higher attack volumes, he says.

Well, yes, but this is an arms race with an asymmetric enemy, so isn’t
going to be won by beefing up defenses. A Web containing only sites
that can afford $100-200K/yr in DDoS defense isn’t worth having. And,
given the economics of the IoT, suggestions about persuading IoT
vendors to improve security are futile.

There’s a fundamental problem with the IP architecture, in that it
enables this kind of asymmetric warfare. That is the lesson that
should be learned.

This is yet another reason why we need to evolve to a Decentralized
Internet (not just a Decentralized Web), probably Named Data Networking
(NDN). Although I’m not aware of a major “black hat” analysis of the
various decentralized proposals, the argument is very plausible.

Why can a large number of small, compromised devices with limited
bandwidth upstream bring down a large, powerful Web site, even one
defended by an expensive DDoS mitigation service? Two reasons:

* In today’s centralized Internet, the target Web site will be at one,
or a small number of IP addresses. The network focuses the traffic
from all the compromised devices on to those addresses, consuming
massive resources at the target.
* In today’s centralized Web, the target Web site will be one tenant
sharing the resources of a data center, so the focused traffic
inflicts collateral damage on the other tenants. It was the cost in
resources and the risk to other customers that caused Akamai to kick
out KrebsOnSecurity.

In NDN, a request for a resource only travels as far as one of the
nearest copies. And in the process it creates additional copies along
the path, so that a subsequent request will travel less far. Thus,
instead of focusing traffic, large numbers of requests defocus the
traffic. They spread the responsibility for satisfying the request out
across the infrastructure instead of concentrating it. By moving the
load caused by bad behavior closer to the bad actors, it creates
incentives for the local infrastructure to detect and prevent the bad
behavior.
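
A toy model makes the defocusing argument concrete. Everything below — the linear topology, the client count, the caching rule — is invented purely for illustration and is not NDN’s actual forwarding logic; it only shows how on-path caches push load toward the requesters instead of concentrating it on one origin.

# Toy comparison (Python) of request "focusing" in a host-centric
# network versus the "defocusing" effect of on-path caches.
# All names and numbers are invented for illustration.

from collections import defaultdict

HOPS = 5            # routers between any client and the origin
CLIENTS = 10_000    # devices all requesting the same named object

def host_centric(requests: int) -> int:
    # Every request traverses the full path and lands on the origin.
    return requests

def ndn_like(requests: int, hops: int = HOPS) -> dict:
    # Each request stops at the nearest cached copy; the reply seeds
    # caches on the way back, so later requests are satisfied closer
    # to the requester.
    load = defaultdict(int)   # per-node load: hop 0 = edge, hop `hops` = origin
    cached = set()            # hops already holding a copy of the object
    for _ in range(requests):
        for hop in range(hops + 1):
            if hop in cached or hop == hops:   # found a copy, or reached origin
                load[hop] += 1
                cached.update(range(hop))      # reply populates caches en route
                break
    return dict(load)

print("host-centric origin load:", host_centric(CLIENTS))
print("NDN-like load per hop:   ", ndn_like(CLIENTS))

In this toy run the origin answers a single request; everything else is absorbed at the first hop, right next to the (mis)behaving devices — which is the incentive-alignment point made above.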

David.

PS – according to Krebs, this isn’t the largest DDoS ever seen:

“OVH, a major Web hosting provider based in France, said in a post on
Twitter this week that it was recently the victim of an even more
massive attack than hit my site. According to a Tweet from OVH founder
Octave Klaba, that attack was launched by a botnet consisting of more
than 145,000 compromised IP cameras and DVRs.”

Largest DDoS attack ever delivered by botnet of hijacked IoT devices
Attack proved too draining for Akamai to keep fighting it
By Tim Greene
Sep 23 2016
http://www.networkworld.com/article/3123672/security/largest-ddos-attack-ever-delivered-by-botnet-of-hijacked-iot-devices.html

The United Nations Will Launch Its First Space Mission In 2021

The United Nations Will Launch Its First Space Mission In 2021
By Daniel Oberhaus
Sep 28 2016
http://motherboard.vice.com/read/the-un-announced-its-first-ever-space-mission

Considering that the United Nations Office for Outer Space Affairs (UNOOSA) has been around for over half a century, it might seem a bit strange that the organization has never launched its own space mission. This is finally slated to change in 2021, when the UN plans to send a spacecraft into orbit.

As detailed for a small crowd at the International Astronautical Congress yesterday, the goal of the 2021 UN mission is to make space accessible to developing member states that lack the resources to develop a standalone, national space program.

“One of UNOOSA’s core responsibilities is to promote cooperation and the peaceful uses of outer space, but our work is about more than that,” said Simonetta Di Pippo, the director of UNOOSA. “We have the vision of bringing the benefits of space to humankind, and that means helping developing countries access space technologies and their benefits.”

“We have the vision of bringing the benefits of space to humankind.”

Yesterday’s announcement comes on the heels of a memorandum of understanding signed last June by the UNOOSA and the Sierra Nevada Corporation, an aerospace company specializing in deploying orbital payloads, such as microsatellites. Considering that UNOOSA is responsible for overseeing the peaceful use of outer space, its partnership with Sierra Nevada makes sense: the corporation’s relationship with the US military is much less robust than that of other American aerospace companies like Lockheed Martin and Boeing, and instead it directs most of its efforts to commercial ventures.

The mission will make use of Sierra Nevada’s Dream Chaser space plane, a reusable spacecraft that looks like a scaled-down version of NASA’s space shuttles, and can be used to transport both crew and cargo to orbit. The Dream Chaser is still under development by Sierra Nevada, but the company expects to resume test flights in December and begin shuttling cargo to the International Space Station in 2019.

UNOOSA will begin accepting proposals for mission payloads later this year. The program is open to all UN member states, but priority will be given to those without the national resources to develop a space program on their own. According to Di Pippo, UNOOSA is looking for proposals on anything from developing materials that resist corrosion in space to studying climate change and food security.

“While these experiments may seem small to us, if you go to these countries you realize this is perhaps one of the biggest things they’ve ever done,” said Mark Sirangelo, the corporate vice president of Sierra Nevada’s Space Systems. “The young researchers that will be working on this [mission] all around the world will be able to say that they are part of the space community.”

The decisions on which payloads will fly on the UN’s Dream Chaser mission will be made sometime in 2018 in order to give countries time to develop their projects and make sure they can be integrated with the Dream Chaser space plane. If countries lack the scientific expertise to develop their own payloads, UNOOSA has offered technical support in the development process.

The first mission carrying the payloads is expected to launch in 2021 and will be in orbit for 14 days. The exact launch site has yet to be determined; however, the Dream Chaser space plane is able to land at any airport capable of hosting large commercial planes. Furthermore, because this is a UN initiative, there is the opportunity to launch and land the Dream Chaser in any UN member country that is receptive to the program.

[snip]

Apple and Android Mark Wi-Fi Territory

Apple and Android Mark Wi-Fi Territory
Sep 28 2016
https://medium.com/@Devicescape/apple-and-android-mark-wi-fi-territory-1f6c5a56041

There may be nothing bigger than an ‘ant’ between Apple’s Wi-Fi Assist feature and Android’s Wi-Fi Assistant, but the two platforms now demonstrate fundamentally opposing attitudes to freely shared Wi-Fi.

Google recently announced it will be enabling Wi-Fi Assistant (previously exclusive to its Project Fi MVNO customers) on all Nexus devices. Wi-Fi Assistant looks to usher relevant Android devices onto shared public Wi-Fi networks where certain thresholds are met, so the net result will be more Wi-Fi connectivity, in particular over shared, open Wi-Fi in public places.

Apple’s Wi-Fi Assist, meanwhile, exists to pull devices away from Wi-Fi networks which drop below a certain performance threshold. And with iOS 10 came a new feature which marks any open Wi-Fi network to which the device connects with an ominous “Security Recommendation” flag.

This would appear likely to drive usage of Wi-Fi down — especially public Wi-Fi because, more often than not, home Wi-Fi is now secure by default.

This is nice, clear differentiation.

In addition to connecting to open networks automatically, the Android solution fires up a VPN to secure the traffic, at least within the open network. While there may be some objections around privacy — the VPN termination is at Google — this is a robust way to provide relatively secure access.

But is it really necessary? Certainly as long as the VPN is configured to reject traffic to and from the local network, it protects against local attacks on the device originating from the network. But shared public Wi-Fi networks configured with client isolation (whereby devices on the network are permitted to communicate freely with the Internet, but not with one another directly) provide similar protection.

Moreover, for most users of open Wi-Fi, traffic is typically encrypted end-to-end using SSL connections, so the VPN doesn’t offer a significant improvement in defences. Even searches from mobile devices are usually encrypted now.
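
For what it’s worth, the end-to-end encryption being referred to is ordinary TLS, which protects the payload whether the Wi-Fi link itself is open or WPA2-protected. A minimal sketch using only Python’s standard library (example.com is just a placeholder host):

# Minimal sketch: the TLS session is negotiated and the server certificate
# verified end-to-end, independent of whether the underlying Wi-Fi is open.

import socket
import ssl

context = ssl.create_default_context()   # verifies certificates against system roots

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print("TLS version:", tls.version())
        print("Cipher:     ", tls.cipher())
        # Anyone sniffing the open Wi-Fi sees only handshake and ciphertext;
        # the HTTP exchange below is opaque to them.
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))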

There are also downsides. Using a VPN can degrade performance, so some applications might suffer. Others, especially video players which see VPNs as invisibility cloaks allowing users to sneak past geographic licensing restrictions, may not work at all.

With iOS 10, meanwhile, Apple has opted for deterrent rather than protection.

When you tap the info icon alongside the network in your device’s scan list, iOS pops up an information panel which, for me at least, is a little confusing (see screenshot).

An open Wi-Fi network does not really expose all network traffic; any traffic encrypted end-to-end will be protected even over an open Wi-Fi connection. And, while the advice to use WPA2 Personal on a home network is good, there’s not much a customer can do about it in Starbucks.

What these contrasting approaches to Wi-Fi have in common is a desire to be seen acting responsibly in light of any security concerns consumers may have. You can’t throw a stick on the internet today without hitting a scare story about unwittingly surrendering your identity over coffee shop Wi-Fi — thanks largely to the click appeal of any kind of fear-mongering, and much diligent feeding of the press by organizations with security products to sell.

In reality the threat posed by open public Wi-Fi usage is far smaller than some would have us believe.

[snip]

Re: I think the human race has no future if it doesn’t go to space

[Note: This comment comes from friend Steve Schear. See: https://en.wikipedia.org/wiki/Spaceship_Earth DLH]

From: Steven Schear <steven.schear@googlemail.com>
Subject: Re: [Dewayne-Net] I think the human race has no future if it doesn’t go to space
Date: September 29, 2016 at 8:37:52 AM PDT
To: dewayne@warpspeed.com

But we’re already in space. Spaceship Earth 🙂

I think the human race has no future if it doesn’t go to space
In this extract from How To Make A Spaceship, the physicist explains why he said yes when offered a seat on Virgin’s SpaceShipTwo and why we need a new generation of explorers
By Stephen Hawking
Sep 26 2016
https://www.theguardian.com/science/2016/sep/26/i-think-the-human-race-has-no-future-if-it-doesnt-go-to-space

Musk’s Mars moment: Audacity, madness, brilliance—or maybe all three

Musk’s Mars moment: Audacity, madness, brilliance—or maybe all three
Ars dissects the feasibility of SpaceX’s plan to colonize Mars in the coming decades.
By ERIC BERGER
Sep 28 2016
http://arstechnica.com/science/2016/09/musks-mars-moment-audacity-madness-brilliance-or-maybe-all-three/

Elon Musk finally did it. Fourteen years after founding SpaceX, and nine months after promising to reveal details about his plans to colonize Mars, the tech mogul made good on that promise Tuesday afternoon in Guadalajara, Mexico. Over the course of a 90-minute speech Musk, always a dreamer, shared his biggest and most ambitious dream with the world—how to colonize Mars and make humanity a multiplanetary species.

And what mighty ambitions they are. The Interplanetary Transport System he unveiled could carry 100 people at a time to Mars. Contrast that to the Apollo program, which carried just two astronauts at a time to the surface of the nearby Moon, and only for brief sojourns. Moreover, Musk’s rocket that would lift all of those people and propellant into orbit would be nearly four times as powerful as the mighty Saturn V booster. Musk envisions a self-sustaining Mars colony with at least a million residents by the end of the century.

Beyond this, what really stood out about Musk’s speech on Tuesday was the naked baring of his soul. Considering his mannerisms, passion, and the utter seriousness of his convictions, it felt at times like the man’s entire life had led him to that particular stage. It took courage to make the speech, to propose the greatest space adventure of all time. His ideas, his architecture for getting it done—they’re all out there now for anyone to criticize, second guess, and doubt.

It is not every day that one of the world’s notables, a true difference-maker, so completely eschews caution and reveals his deepest ambitions as Musk did with the Interplanetary Transport System. So let us look at those ambitions—the man laid bare, the space hardware he dreams of building—and then consider the feasibility of all this. Because what really matters is whether any of this fantastical stuff can actually happen.

The hardware

During his talk, Musk outlined an extremely large new rocket, with a primary structure made from carbon-fiber composites that are lighter and stronger than the aluminum and other metals used in traditional rockets. A staggering 42 Raptor engines, burning liquid oxygen and densified liquid methane, would power the Interplanetary Transport System (ITS) booster to orbit. “It’s a lot of engines,” Musk acknowledged. Presumably the software to integrate all of that power has come a long way since the Soviets tried their 30-engine N1 rocket in the late 60s and early 70s. All four N1 launches were failures.

The expendable variant of the ITS rocket would have an unprecedented lift capacity of 550 metric tons to low Earth orbit (LEO), which is roughly equivalent to 50 full-size yellow school buses. The most powerful rocket flying today, the Delta IV Heavy, has a payload-to-LEO capacity of only about 28 metric tons; the most powerful rocket ever to successfully fly, the Saturn V, could haul 140 metric tons to LEO. Musk’s plan relies on a reusable variant of the ITS rocket (300 tons to orbit), sending it up and landing it back at the launch pad. After accelerating to a staging velocity of 8,650 km/h, the booster would use 7 percent of its propellant for a return trip.
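
The back-of-the-envelope arithmetic behind those comparisons is easy to check; in the sketch below the rocket figures are the ones quoted above, while the roughly 11-tonne mass of a loaded school bus is an assumption made here for illustration:

# Rough arithmetic behind the payload comparisons (Python).
# The school-bus mass is an assumption; the rocket figures are as quoted.

ITS_EXPENDABLE_T = 550   # metric tons to LEO, expendable ITS
ITS_REUSABLE_T   = 300   # metric tons to LEO, reusable ITS
DELTA_IV_HEAVY_T = 28    # most powerful rocket flying today
SATURN_V_T       = 140   # most powerful rocket ever flown successfully
SCHOOL_BUS_T     = 11    # assumed mass of a full-size school bus

print(f"ITS vs Saturn V (expendable): {ITS_EXPENDABLE_T / SATURN_V_T:.1f}x")
print(f"ITS vs Delta IV Heavy:        {ITS_EXPENDABLE_T / DELTA_IV_HEAVY_T:.1f}x")
print(f"School-bus equivalents:       {ITS_EXPENDABLE_T / SCHOOL_BUS_T:.0f}")
print(f"Cost of reusability:          {1 - ITS_REUSABLE_T / ITS_EXPENDABLE_T:.0%} of lift capacity")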

[snip]

The Next Industrial Revolution

The Next Industrial Revolution
An interview with the Economist columnist Ryan Avent on his new book about how technology will change the labor force
By DEREK THOMPSON
Sep 6 2016
http://www.theatlantic.com/business/archive/2016/09/the-next-industrial-revolution/498779/

A “crisis of abundance” initially seems like a paradox. After all, abundance is the ultimate goal of technology and economics. But consider the early history of the electric washing machine. In the 1920s, factories churned them out in droves. (With the average output of manufacturing workers rising by a third between 1923 and 1929, making more washing machines was relatively cheap.) But as the decade ended, factories saw they were making many more than American households demanded. Companies cut back their output and laid off workers even before the stock market crashed in 1929. Indeed, some economists have said that the oversupply of consumer goods like washing machines may have been one of the causes of the Great Depression.

What initially looked like abundance was really something more harmful: overproduction. In economics, as in anything, too much of a good thing can be problematic.

That sentiment is one of the central theses of The Wealth of Humans, a new book by the Economist columnist Ryan Avent about how technology is changing the nature of work. In the next few years, self-driving cars, health-care robots, machine learning, and other technology will complement many workers in the office. Counting both humans and machines, the world’s labor force will be able to do more work than ever before. But this abundance of workers—both those made of cells and those made of bits—could create a glut of labor. The machines may render many humans as redundant as so many vintage washing machines.

Once again, what at first seems like abundance will instead be oversupply: The machines may invent their makers out of work.

Last week, I spoke with Avent about his book, how his theories might help to explain the 2016 election, and the future of working. The following conversation has been edited for clarity and concision.

Derek Thompson: In classic Economist style, your title, The Wealth of Humans, is doing double or triple duty. First, it’s a play on Adam Smith’s The Wealth of Nations, and indeed there’s a lot of Smith in here. Second, it’s a book about the most common definition of wealth, money, and how it might be earned and distributed in the future. Third, it’s about Merriam-Webster’s second definition of wealth, which is a surfeit, a surplus, and your argument is that we may be entering a world with too many workers. Anything I’m missing?

Ryan Avent: Those were the ones I had in mind. There may be others lurking.

Thompson: There is an ongoing debate about whether technological growth is accelerating, as economists like Erik Brynjolfsson and Andrew McAfee (the authors of The Second Machine Age) insist, or slowing down, as the national productivity numbers indicate. Where do you come down?

Avent: I come down squarely in the Brynjolfsson and McAfee camp and strongly disagree with economists like Robert Gordon, who have said that growth is basically over. I think the digital revolution is probably going to be as important and transformative as the industrial revolution. The main reason is machine intelligence, a general-purpose technology that can be used anywhere, from driving cars to customer service, and it’s getting better very, very quickly. There’s no reason to think that improvement will slow down, whether or not Moore’s Law continues.

I think this transformative revolution will create an abundance of labor. It will create enormous growth in [the supply of workers and machines], automating a lot of industries and boosting productivity. When you have this glut of workers, it plays havoc with existing institutions.

I think we are headed for a really important era in economic history. The Industrial Revolution is a pretty good guide of what that will look like. There will have to be a societal negotiation for how to share the gains from growth. That process will be long and drawn out. It will involve intense ideological conflict, and history suggests that a lot will go wrong.

Thompson: Even I would admit that is a weird time to predict the end of work, considering that the unemployment rate has been at or under 5 percent all year, the private sector in the U.S. has created jobs for record-high 77 consecutive months, and wages are actually rising at their fastest rate since the Great Recession.

So what is the best evidence that your prediction is plausible?

Avent: I would say the best evidence comes from the wage growth numbers. I know we’ve experienced an uptick in recent months, but we’re seven years into the recovery and still well short of the level of nominal wage growth we would expect, even compared to recent disappointing recoveries. In the bigger picture, for a lot of middle-skilled workers, especially men, you have stagnating wages for several decades. Apart from the top 1 percent, a lot of people are having a lousy time.

If you look at the experience of rich countries across the world, you see there is a tradeoff between wage growth, productivity, and employment growth. Employment in Britain is at an all-time high, and wage growth there has underperformed America and most of Europe. This suggests that employers in countries like the U.K. are mainly using people to do low-productivity work.

Thompson: There is a familiar story of technology and the labor force that one might call the “we used to” story. We used to work on farms, we used to work in textiles, we used to work in factories … What’s the next chapter of the “we used to” story? What sector currently employing a lot of Americans is the lowest-hanging fruit for disruption?

Avent: Driving is certainly an area where we’ve seen more rapid progress than I would have guessed. Truck drivers, bus drivers, and train drivers have pretty good pay and those account for millions of jobs. Most importantly, there seems to be an interest among companies employing those workers to bring [the tech that would replace humans] forward. In the long run, I’m optimistic for technology to transform health care, but that’s a harder sector to disrupt.

Machine intelligence will be applied in ways we cannot yet imagine. One example is talking. Today, if you have a problem with a car company, you might end up conversing with a bot over the phone. Those are conversations we thought weren’t automatable that now are. We used to employ a lot of people to talk to people; now people have those conversations with bots.

[snip]

Oh great — scientists just confirmed a key new source of greenhouse gases

Oh great — scientists just confirmed a key new source of greenhouse gases
By Chris Mooney
Sep 28 2016
https://www.washingtonpost.com/news/energy-environment/wp/2016/09/28/scientists-just-found-yet-another-way-that-humans-are-creating-greenhouse-gases/

Countries around the world are trying to get their greenhouse gas emissions under control — to see them inch down, percentage point by percentage point, from where they stood earlier in the century. If everybody gets on board, and shaves off enough of those percentage points, we just might be able to get on a trajectory to keep the world from warming more than 2 degrees Celsius above the temperature where it stood prior to industrialization.

But if a new study is correct, there’s a big problem: There might be more greenhouse gases going into the atmosphere than we thought. That would mean an even larger need to cut.

The new paper, slated to be published next week in BioScience, confirms a significant volume of greenhouse gas emissions coming from a little-considered place: Man-made reservoirs, held behind some 1 million dams around the world and created for the purposes of electricity generation, irrigation, and other human needs. In the study, 10 authors from U.S., Canadian, Chinese, Brazilian, and Dutch universities and institutions have synthesized a considerable body of prior research on the subject to conclude that these reservoirs may be emitting just shy of a gigaton, or billion tons, of annual carbon dioxide equivalents. That would mean they contributed 1.3 percent of the global total.

Moreover, the emissions are largely in the form of methane, a greenhouse gas with a relatively short life in the atmosphere but a very strong short-term warming effect. Scientists are increasingly finding that although we have begun to curb some emissions of carbon dioxide, the principal greenhouse gas, we are still thwarted by methane, which comes from a diversity of sources that range from oil and gas operations to cows.

The new research concludes that methane accounted for 79 percent of carbon dioxide equivalent emissions from reservoirs, while the other two greenhouse gases, carbon dioxide and nitrous oxide, accounted for 17 percent and 4 percent.
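
As a rough illustration of how a methane-heavy mix produces those CO2-equivalent shares, the sketch below converts a methane mass to CO2-equivalent using a 100-year global warming potential of about 34; that GWP value is a commonly cited figure assumed here for the example, not necessarily the one used in the paper:

# Illustrative CO2-equivalence arithmetic (Python). The GWP100 value for
# methane (~34) is an assumption for this example; the study's own
# accounting may use a different factor or time horizon.

GWP100_CH4 = 34   # warming from 1 t of CH4 relative to 1 t of CO2 over 100 years

def co2_equivalent(mass_tonnes: float, gwp: float) -> float:
    # Convert a gas mass in tonnes to tonnes of CO2-equivalent.
    return mass_tonnes * gwp

ch4_mass_t = 1_000_000   # e.g. one million tonnes of reservoir methane
print(f"{ch4_mass_t:,} t CH4 -> {co2_equivalent(ch4_mass_t, GWP100_CH4):,.0f} t CO2e")

# Shares of the reservoirs' total CO2e footprint quoted in the study:
shares = {"methane": 0.79, "carbon dioxide": 0.17, "nitrous oxide": 0.04}
print("shares sum to:", round(sum(shares.values()), 2))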

“There’s been kind of an explosion in research into efforts to estimate emissions from reservoirs,” said Bridget Deemer, the study’s first author and a researcher with Washington State University. “So we synthesized all known estimates from reservoirs globally, for hydropower and other functions, like flood control and irrigation.”

“And we found that the estimates of methane emissions per area of reservoir are about 25 percent higher than previously thought, which we think is significant given the global boom in dam construction, which is currently underway,” she continued.

As Deemer’s words suggest, the study does not single out dams used to generate electricity — it focuses on all reservoirs, including those that are created for other purposes. It drew on studies on 267 reservoirs around the world, which together have a surface area of close to 30,000 square miles, to extrapolate global data.

Reservoirs are a classic instance of how major human alterations to the Earth’s landscape can have unexpected effects. Flooding large areas of Earth can set off new chemical processes as tiny microorganisms break down organic matter in the water, sometimes doing so in the absence of oxygen — a process that leads to methane as a byproduct. One reason this happens is that the flooded areas initially contain lots of organic life in the form of trees and grasses.

Meanwhile, as nutrients like nitrogen and phosphorus flow into reservoirs from rivers — being poured in by human agriculture and waste streams — these can further drive algal growth in reservoirs, giving microorganisms even more material to break down. The study finds that for these reasons, reservoirs emit more methane than “natural lakes, ponds, rivers, or wetlands.”

“If oxygen is around, then methane gets converted back to CO2,” said John Harrison, another of the study’s authors, and also a researcher at Washington State. “If oxygen isn’t present, it can get emitted back to the atmosphere as methane.” And flooded areas, he said, are more likely to be depleted of oxygen. A similar process occurs in rice paddies, which are also a major source of methane emissions.

[snip]

Apple logs your iMessage contacts — and may share them with police

Apple logs your iMessage contacts — and may share them with police
By Sam Biddle
Sep 28 2016
https://theintercept.com/2016/09/28/apple-logs-your-imessage-contacts-and-may-share-them-with-police/

Apple promises that your iMessage conversations are safe and out of reach from anyone other than you and your friends. But according to a document obtained by The Intercept, your blue-bubbled texts do leave behind a log of which phone numbers you are poised to contact, and Apple shares this (and other potentially sensitive metadata) with law enforcement when compelled by court order.

Every time you type a number into your iPhone for a text conversation, the Messages app contacts Apple servers to determine whether to route a given message over the ubiquitous SMS system, represented in the app by those déclassé green text bubbles, or over Apple’s proprietary and more secure messaging network, represented by pleasant blue bubbles, according to the document. Apple records each query in which your phone calls home to see who’s in the iMessage system and who’s not.

This log also includes the date and time when you entered a number, along with your IP address — which could, contrary to a 2013 Apple claim that “we do not store data related to customers’ location,” identify a customer’s location. Apple is compelled to turn over such information via court orders for systems known as “pen registers” or “trap and trace devices,” orders that are not particularly onerous to obtain, requiring only that government lawyers represent they are “likely” to obtain information whose “use is relevant to an ongoing criminal investigation.” Apple confirmed to The Intercept that it only retains these logs for a period of 30 days, though court orders of this kind can typically be extended in additional 30-day periods, meaning a series of monthlong log snapshots from Apple could be strung together by police to create a longer list of whose numbers someone has been entering.
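
Purely to make the described flow concrete, here is a hypothetical sketch of a lookup-and-log routine; the function names, log fields, and server behaviour are invented for illustration and are in no way Apple’s actual implementation:

# Hypothetical sketch (Python) of the lookup-and-log behaviour described
# above. None of these names or fields are Apple's; it only illustrates that
# deciding "blue bubble or green bubble" means asking a server, and that the
# query itself can be recorded alongside a timestamp and client IP.

from datetime import datetime, timezone

SERVER_LOG = []   # stands in for the provider-side query log

def is_registered_for_imessage(number: str) -> bool:
    # Placeholder for the server-side "is this number on iMessage?" check.
    return number.endswith("7")   # arbitrary toy rule

def route_conversation(client_ip: str, recipient: str) -> str:
    # Pick a transport for the conversation and record the lookup.
    SERVER_LOG.append({
        "queried_number": recipient,
        "client_ip": client_ip,   # could hint at the customer's location
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return "iMessage (blue)" if is_registered_for_imessage(recipient) else "SMS (green)"

print(route_conversation("203.0.113.5", "+15551234567"))
print(route_conversation("203.0.113.5", "+15550000000"))
print("entries a 30-day pen register could sweep up:", len(SERVER_LOG))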

The Intercept received the document about Apple’s Messages logs as part of a larger cache originating from within the Florida Department of Law Enforcement’s Electronic Surveillance Support Team, a state police agency that facilitates police data collection using controversial tools like the Stingray, along with conventional techniques like pen registers. The document, titled “iMessage FAQ for Law Enforcement,” is designated for “Law Enforcement Sources” and “For Official Use Only,” though it’s unclear who wrote it or for what specific audience — metadata embedded in the PDF cites an author only named “mrrodriguez.” (The term “iMessages” refers to an old name for the Messages app still commonly used to refer to it.)

Phone companies routinely hand over metadata about calls to law enforcement in response to pen register warrants. But it’s noteworthy that Apple is able to provide information on iMessage contacts under such warrants, given that Apple and others have positioned the messaging platform as a particularly secure alternative to regular texting.

The document reads like a fairly standard overview that one might forward to a clueless parent (questions include “How does it work?” and “Does iMessage use my cellular data plan?”), until the final section, “What will I get if I serve Apple with a [pen register/trap and trace] court order for an iMessage account?”:

[snip]

Internal ‘clock’ makes some people age faster and die younger – regardless of lifestyle

Internal ‘clock’ makes some people age faster and die younger – regardless of lifestyle
Study could explain why even with healthy lifestyles some people die younger than others, and raises future possibility of extending the human lifespan
By Hannah Devlin, Science correspondent
Sep 28 2016
https://www.theguardian.com/science/2016/sep/28/internal-clock-makes-some-people-age-quicker-and-die-younger-regardless-of-lifestyle

Scientists have found the most definitive evidence yet that some people are destined to age quicker and die younger than others – regardless of their lifestyle.

The findings could explain the seemingly random and unfair way that death is sometimes dealt out, and raise the intriguing future possibility of being able to extend the natural human lifespan.

“You get people who are vegan, sleep 10 hours a day, have a low-stress job, and still end up dying young,” said Steve Horvath, a biostatistician who led the research at the University of California, Los Angeles. “We’ve shown some people have a faster innate ageing rate.”

A higher biological age, regardless of actual age, was consistently linked to an earlier death, the study found. For the 5% of the population who age fastest, this translated to a roughly 50% greater than average risk of death at any age.

Intriguingly, the biological changes linked to ageing are potentially reversible, raising the prospect of future treatments that could arrest the ageing process and extend the human lifespan.

“The great hope is that we find anti-ageing interventions that would slow your innate ageing rate,” said Horvath. “This is an important milestone to realising this dream.”

Horvath’s ageing “clock” relies on measuring subtle chemical changes, in which methyl compounds attach or detach from the genome without altering the underlying code of our DNA.

His team previously found that methyl levels at 353 specific sites on the genome rise and fall according to a very specific pattern as we age – and that the pattern is consistent across the population. The latest study, based on an analysis of blood samples from 13,000 people, showed that some people are propelled along life’s biological tramlines much quicker than others – regardless of lifestyle.
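
In spirit, such a clock is a weighted sum over methylation levels at a fixed set of sites, and “fast ageing” shows up as a predicted age running ahead of the calendar. The sketch below uses three invented sites, invented weights, and invented readings purely to show the shape of the calculation; it is not the actual 353-site model or its coefficients:

# Toy epigenetic-clock arithmetic (Python). Sites, weights, intercept and
# methylation readings are all invented; the real clock is a regression
# fitted over 353 genomic sites.

WEIGHTS   = {"site_A": 35.0, "site_B": -20.0, "site_C": 50.0}   # invented
INTERCEPT = 5.5                                                  # invented

def biological_age(methylation: dict) -> float:
    # Methylation values are fractions in [0, 1] at each measured site.
    return INTERCEPT + sum(WEIGHTS[s] * m for s, m in methylation.items())

# Two invented 40-year-olds; the "fast ager" has drifted further along the
# age-related methylation pattern than the calendar would suggest.
average_ager = {"site_A": 0.50, "site_B": 0.40, "site_C": 0.50}
fast_ager    = {"site_A": 0.58, "site_B": 0.34, "site_C": 0.56}

for label, profile in (("average ager", average_ager), ("fast ager", fast_ager)):
    bio = biological_age(profile)
    print(f"{label}: biological age ~{bio:.1f} (acceleration {bio - 40:+.1f} years)")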

“We see people aged 20 who are fast agers and we look at them 20 years later and they are still fast agers,” said Horvath. “The big picture here is that this is an innate process.”

The scientists found that known health indicators, such as smoking, blood pressure and weight, were still more valuable in predicting life expectancy in the 2,700 participants who had died since the study began, but that their underlying ageing rate also had a significant effect.

In a fictional example, the scientists compare two 60-year-old men, Peter, whose ageing rate ranks in the top 5%, and Joe, whose rate is in the slowest 5%. If both are smokers and have stressful jobs, Peter is given a 75% chance of dying in the next 10 years compared to a 46% chance for Joe.
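
One way to read figures like these is as a multiplier on the underlying mortality hazard. The short calculation below is a generic proportional-hazards illustration, not the model fitted in the study; the 46% baseline is simply taken from Joe’s number above:

# Generic proportional-hazards arithmetic (Python), for illustration only.
# A hazard ratio h scales the cumulative hazard, so 10-year survival S
# becomes S**h and 10-year risk becomes 1 - S**h.

def ten_year_risk(baseline_risk: float, hazard_ratio: float) -> float:
    survival = 1.0 - baseline_risk
    return 1.0 - survival ** hazard_ratio

joe_risk = 0.46   # the slower ager's 10-year risk in the example above
for hr in (1.0, 1.5, 2.0, 2.5):
    print(f"hazard ratio {hr:.1f} -> 10-year risk {ten_year_risk(joe_risk, hr):.0%}")

By this crude reckoning, Peter’s 75% figure corresponds to a hazard a bit more than twice Joe’s.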

This is not the first time that scientists have observed so-called epigenetic changes to the genome with age, but previously these were put down to wear-and-tear brought about by environmental factors, rather than indicating the ticking of an internal biological clock.

Wolf Reik, a professor of epigenetics at the University of Cambridge who was not involved in the work, said: “It now looks like you get a clock given to you when you’re young. It gets wound up and the pace it’s ticking at is dictated by this epigenetic machinery.”

“I’m sure insurance companies are already quite interested in this kind of thing,” he added.

[snip]