FCC proposal could streamline installation of new fiber internet

[Note:  This item comes from friend Robert Berger.  DLH]

One-touch make-ready is so simple the FCC couldn’t mess it up… right?
By Greg Synek
Jul 16 2018

Bottom line: The FCC might finally do something right this year by eliminating government red tape that has greatly slowed the deployment of high-speed fiber internet service. Gaining access to the utility poles needed for the last stages of fiber installation could soon become a one-step process instead of a painfully slow and expensive series of repeated visits to each field location. 

Even though the United States is a technological leader in many regards, widespread availability of gigabit internet carried over fiber optic lines is lacking. High last-mile installation costs and burdensome bureaucratic policies make it impractical to deploy fiber outside of the most densely populated areas.

Due to the high costs of last-mile fiber installation, Google Fiber became the first internet service provider to launch wireless gigabit service. As it turns out, obtaining access to utility poles for cable installation is extremely time-consuming and difficult, since most of the other utility companies with equipment on a pole must sign off whenever their installed cables need to be moved or adjusted.

For companies in the midst of deploying fiber, several trips must be made out to utility poles before any real work can be done. A new proposal from the FCC could eliminate many of those required trips by establishing a “one-touch make-ready” system, which would allow any company to make the necessary changes to wires on a utility pole without waiting for third parties to make the changes themselves.

Broadband deployment could accelerate rapidly once the doors are opened to new installations. A handful of common-sense rules accompany the proposal, requiring safety and reliability to be taken into consideration when moving cables that belong to other service providers.



FCC chairman has ‘serious concerns’ about the Sinclair-Tribune merger and could seek to block the deal

By Brian Fung and Tony Romm
Jul 16 2018

Sinclair Broadcasting’s bid to create a conservative television giant appeared to be in doubt on Monday after federal regulators highlighted “serious concerns” with the proposed acquisition of Tribune Media, which is seeking to stretch the boundaries of what is allowed in U.S. broadcast ownership.

The chairman of the Federal Communications Commission, Ajit Pai, said that he intends to send key parts of the $3.9 billion deal to be reviewed by an administrative law judge, which is typically the first step the FCC takes when it seeks to block a deal. The sudden clampdown on the Sinclair deal marks a shift from Pai’s previous moves to deregulate the broadcast industry.

Sinclair’s proposed merger would give the conservative broadcaster unprecedented grip over American TV screens. Its original proposal, if approved, would grant Sinclair access to 72 percent of television households in America, far surpassing a national ownership cap of 39 percent. The FCC’s national ownership cap limits the reach of any one broadcast company, in an effort to ensure enough independent voices can thrive on the airwaves.

To get beneath the cap, Sinclair had proposed spinning off several stations. But a number of the owners of the stations that would be spun off have close ties to Sinclair, which critics said would allow Sinclair to stay in control of the stations it sold — divestitures that appeared designed to evade the FCC’s rules.

Pai said Monday he found those arguments persuasive.

“The evidence we’ve received suggests that certain station divestitures that have been proposed to the FCC would allow Sinclair to control those stations in practice, even if not in name, in violation of the law,” said Pai.

Sinclair said in a statement late Monday that it has been transparent with the regulator. “At no time have we misled the FCC in any manner whatsoever with respect to the relationships or the structure of those relationships proposed as part of the Tribune acquisition,” the company said.

Sinclair signaled that it intends to pursue the deal for Tribune. “We are prepared to resolve any perceived issues and look forward to finalizing our acquisition of Tribune Media. The proposed merger of Sinclair Broadcast Group and Tribune Media will create numerous public interest benefits and help move the broadcast industry forward at a time when it is facing unprecedented challenges.” Sinclair’s stock ended down more than 11 percent Monday.

Sinclair’s political leanings highlight the political undertones to the deregulation Pai has spearheaded. Sinclair is a supporter of President Trump, and the acquisition of dozens of Tribune media stations would give conservative media a wider platform. But Pai has faced criticism from lawmakers and consumer groups for approving policy changes that could benefit Sinclair as it seeks to close its deal.

Pai’s tenure leading the FCC since 2017 has been marked by multiple efforts to relax regulations on TV broadcasters. The agency last year repealed one rule that required local broadcast stations to operate a physical studio in the markets for which they hold a license. In another move, the FCC said it would no longer block media mergers that left fewer than eight remaining independent stations in a market.

In February, Pai reportedly came under investigation by the FCC’s own inspector general to determine whether he inappropriately pushed for rule changes that could help Sinclair’s deal pass regulatory muster. But Pai has also sought to distance himself from Sinclair at times, in one case pushing to fine the company $13.4 million over its alleged failure to inform viewers that the subject of some of its news content had paid for the exposure.

The merger under investigation — proposed last spring — sought to bring Tribune’s 42 TV stations under Sinclair’s roof. Tribune controls stations in seven of the nation’s top 10 markets. The initial merger sought to build the biggest television company in America, with reach into 233 stations across 108 markets.

“In general, this kind of media ownership concentration is dangerous for our democracy,” said Victor Pickard, a media scholar at the University of Pennsylvania’s Annenberg School for Communication. “Americans are still very reliant on local television news.”


The unlikely crime-fighter cracking decades-old murders? A genealogist.

By Justin Jouvenal
Jul 16 2018

The young couple set out on a trip in 1987, speeding toward Seattle in a gold van, when they crossed paths with a killer. The man raped Tanya Van Cuylenborg and shot her in the head. Jay Cook was beaten and strangled.

The killer left a pair of plastic gloves inside their vehicle, a gesture one detective interpreted as a taunt: You’ll never catch me.

That was true for more than three decades. Investigators spent thousands of hours sifting leads and probing suspects with little to show. But in late April, a former musical theater actor with no background in law enforcement took over the case.

CeCe Moore and her team cracked it in three days.

Moore put the killer’s DNA profile into a public genealogy website to find relatives and then built a family tree that led to a suspect, William Earl Talbott II. The truck driver was charged in Washington state in May.

Since the same technique was used in April to find the man accused of being the Golden State Killer, genetic genealogy has led to a flurry of breakthroughs in the coldest of cases, showing the potential to be a transformative tool for police.

On Sunday, police in Indiana announced that Moore’s team at Reston, Va.-based Parabon NanoLabs had helped identify a man who allegedly sexually assaulted and killed 8-year-old April Tinsley in 1988. The killer had sent chilling messages to others over the years, tacking some of the threats on bicycles belonging to other young girls.

Parabon, the biggest player so far to work in the emerging field, also helped authorities identify a Pennsylvania DJ charged with the 1992 slaying of an elementary school teacher, the prime suspect in the 1981 killing of a real estate agent in Texas, and a Washington man charged in the 1986 rape and killing of a 12-year-old girl.

Another team is working with investigators in California to try to solve the Zodiac Killer case.

And the nonprofit DNA Doe Project has uncovered the identities of a man who mysteriously assumed an 8-year-old’s identity before committing suicide, a woman slain in Ohio in 1981 and two others as it works to put names to the remains of 40,000 Jane and John Does scattered across the country.

The developments are all the more remarkable because genetic genealogy was not pioneered by the FBI or elite forensic experts but by a loose network of citizen scientists and genealogists like Moore and a professional guardian from Florida, who came up with the idea for a genealogy database available for all to search.

But the novel turn to crime-fighting has raised a host of issues: Could the technique finger the wrong person? Who will ensure police use genetic data responsibly? Should authorities rely on a public database that could be hacked or manipulated?

“We’ve got no precedent for doing this type of thing,” said Debbie Kennett, a research associate in the Department of Genetics, Evolution and Environment at University College London. “The people who are doing it are often volunteers.”

The eureka moment

The hunt for Van Cuylenborg and Cook’s killer began with the genetic equivalent of a Google search on a Friday night.

Moore’s team took a profile of the killer’s DNA obtained from the crime scene and provided by authorities and uploaded it to GEDmatch, a genetic clearinghouse that allows users to find relatives by comparing their genetic code against more than 1 million others.

GEDmatch’s analysis represents a quantum leap over traditional DNA matching used by law enforcement since the 1980s. In those cases, a lab takes a sample that contains up to 20 short segments of the perpetrator’s genetic code and looks for a match in a state DNA database or the FBI’s Combined DNA Index System, which contains 17.3 million profiles.

The profiles uploaded to GEDmatch contain some 600,000 DNA snippets, allowing the genetic genealogist not only to identify a match but also to determine how closely people are related.

GEDmatch spit out results for Moore early on Saturday after roughly eight hours of crunching data: The killer appeared to share enough DNA with two people to be second cousins.
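The inference behind that result can be sketched in a few lines: the fraction of DNA two relatives share falls off by roughly half with each additional step of separation, so an observed sharing level points to a likely relationship. The figures and function below are an illustrative toy, not GEDmatch's actual algorithm.

```python
# Toy sketch of relationship inference from shared DNA (not GEDmatch's
# actual method). Expected autosomal sharing roughly halves with each
# extra generational step separating two relatives.
EXPECTED_SHARE = {
    "parent/child": 0.5,
    "first cousin": 0.125,      # 1/8 of DNA shared, on average
    "second cousin": 0.03125,   # 1/32
    "third cousin": 0.0078125,  # 1/128
}

def likely_relationship(observed_share: float) -> str:
    """Return the relationship whose expected sharing is closest to observed."""
    return min(EXPECTED_SHARE,
               key=lambda rel: abs(EXPECTED_SHARE[rel] - observed_share))

# A profile sharing roughly 3% of its DNA with a crime-scene sample would
# be flagged as a probable second cousin, as in the case described here.
print(likely_relationship(0.03))  # → second cousin
```

In practice the hard part is upstream of this lookup: measuring the shared fraction from hundreds of thousands of DNA snippets, then building the family tree that connects the matches to a suspect.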


Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So

By Steve Lohr
Jun 20 2018

For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing vast amounts of data. Thanks to deep learning, computers can easily identify faces and recognize spoken words, making other forms of humanlike intelligence suddenly seem within reach.

Companies like Google, Facebook and Microsoft have poured money into deep learning. Start-ups pursuing everything from cancer cures to back-office automation trumpet their deep learning expertise. And the technology’s perception and pattern-matching abilities are being applied to improve progress in fields such as drug discovery and self-driving cars.

But now some scientists are asking whether deep learning is really so deep after all.

In recent conversations, online comments and a few lengthy essays, a growing number of A.I. experts are warning that the infatuation with deep learning may well breed myopia and overinvestment now — and disillusionment later.

“There is no real intelligence there,” said Michael I. Jordan, a professor at the University of California, Berkeley, and the author of an essay published in April intended to temper the lofty expectations surrounding A.I. “And I think that trusting these brute force algorithms too much is a faith misplaced.”

The danger, some experts warn, is that A.I. will run into a technical wall and eventually face a popular backlash — a familiar pattern in artificial intelligence since that term was coined in the 1950s. With deep learning in particular, researchers said, the concerns are being fueled by the technology’s limits.

Deep learning algorithms train on a batch of related data — like pictures of human faces — and are then fed more and more data, which steadily improve the software’s pattern-matching accuracy. Although the technique has spawned successes, the results are largely confined to fields where those huge data sets are available and the tasks are well defined, like labeling images or translating speech to text.

The technology struggles in the more open terrains of intelligence — that is, meaning, reasoning and common-sense knowledge. While deep learning software can instantly identify millions of words, it has no understanding of a concept like “justice,” “democracy” or “meddling.”

Researchers have shown that deep learning can be easily fooled. Scramble a relative handful of pixels, and the technology can mistake a turtle for a rifle or a parking sign for a refrigerator.
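These pixel-scrambling attacks exploit the fact that a model's output changes fastest along the gradient of its score. A minimal sketch with a made-up linear "classifier" (all numbers here are invented for illustration; real attacks such as the fast gradient sign method apply the same idea to deep networks):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a toy linear "image" classifier
x = rng.normal(size=100)   # a toy "image" with 100 pixels

def score(img):
    # Positive score -> one class, negative -> the other.
    return float(w @ img)

# Nudge every pixel by a visually tiny amount in the worst-case direction
# (the sign of the gradient, which for a linear model is just sign(w)).
eps = 0.05
x_adv = x - eps * np.sign(w)

# Each pixel changed by at most 0.05, yet the score drops by eps * sum(|w|),
# which can be enough to push the input across the decision boundary.
print(score(x), score(x_adv))
```

Deep networks are far more complex than this toy, but they remain locally near-linear enough that the same trick — many tiny, coordinated pixel changes — can flip a turtle into a rifle.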

In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: “Is deep learning approaching a wall?” He wrote, “As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear.”

If the reach of deep learning is limited, too much money and too many fine minds may now be devoted to it, said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence. “We run the risk of missing other important concepts and paths to advancing A.I.,” he said.

Amid the debate, some research groups, start-ups and computer scientists are showing more interest in approaches to artificial intelligence that address some of deep learning’s weaknesses. For one, the Allen Institute, a nonprofit lab in Seattle, announced in February that it would invest $125 million over the next three years largely in research to teach machines to generate common-sense knowledge — an initiative called Project Alexandria.

While that program and other efforts vary, their common goal is a broader and more flexible intelligence than deep learning. And they are typically far less data hungry. They often use deep learning as one ingredient among others in their recipe.

“We’re not anti-deep learning,” said Yejin Choi, a researcher at the Allen Institute and a computer scientist at the University of Washington. “We’re trying to raise the sights of A.I., not criticize tools.”

Those other, non-deep learning tools are often old techniques employed in new ways. At Kyndi, a Silicon Valley start-up, computer scientists are writing code in Prolog, a programming language that dates to the 1970s. It was designed for the reasoning and knowledge representation side of A.I., which processes facts and concepts, and tries to complete tasks that are not always well defined. Deep learning comes from the statistical side of A.I. known as machine learning.

Benjamin Grosof, an A.I. researcher for three decades, joined Kyndi in May as its chief scientist. Mr. Grosof said he was impressed by Kyndi’s work on “new ways of bringing together the two branches of A.I.”

Kyndi has been able to use very little training data to automate the generation of facts, concepts and inferences, said Ryan Welsh, the start-up’s chief executive.


Alexa makes decision-making easier than ever—by making your choices for you

By Adam Gerhart
Jul 10 2018

There’s no stopping voice technology. eMarketer forecasts that 57 million US adults will use a smart speaker such as Amazon’s Echo or Google Home at least once a month this year. That figure rises to 91 million when you count voice assistants embedded in other devices, like Apple’s Siri.

It doesn’t stop there. Another report says that by 2020, half of all searches will be done by voice. Yet another predicts that voice shopping will grow to $40 billion in 2022. “Not since the smartphone has any tech device been adopted as quickly as the smart speaker,” eMarketer’s report said.

But what does this mean for us, the consumers? Beyond the undeniable truth that voice technology will change how we shop, it’ll also change what we actually buy.

Voice and choice paralysis

Here’s how shopping psychology currently works. Walk into a grocery store, and you’ll find endless choices in front of you. Different kinds of toilet paper, multitudes of batteries, an entire section devoted to yogurt, an entire aisle filled with cereal. That seemingly unlimited choice only gets magnified when you shop online. For example, a search for “paper towels” will give you 30,000 product results on Amazon and endless pages of information on Google. Unless you have really strong feelings about exactly which paper towels you use (and some people really do), the endless selection can feel a bit overwhelming.

The rise of voice shopping will change that drastically, helping you narrow down your choices—but that’s not necessarily a good thing. If you ask Alexa or Google Home to buy you granola bars, they don’t give you a long list of options to choose from. Unlike what you’d find in either a brick-and-mortar or online store, you’re given a couple of options at most. While you can ask for more, it’s an extra step you’ll need to actively take.

The result? Unless you’re looking for a very specific product (say, Nature Valley Crunchy bars), you’ll probably go with whatever is served up to you first. And after you’ve ordered something once, it becomes incredibly convenient for you to just re-order the same brand over and over again: Your voice assistant will assume that you simply want to repeat your previous purchase, and present that option to you first. And so the cycle continues. Unless you proactively ask for a different brand by name (or until a time when paid voice search results become a reality, and the first choice could be determined by the highest bidder), you’ll simply wind up with the same products each time.
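The re-order loop described here amounts to a very short decision rule. A hypothetical sketch (the logic and product names are invented; no real assistant publishes its ranking code):

```python
def suggest_product(query: str, order_history: dict, search_results: dict) -> str:
    """Return the single product a toy voice assistant would offer first.

    order_history: maps a query to the brand previously bought for it.
    search_results: maps a query to a ranked list of candidate brands.
    """
    if query in order_history:
        # Incidental loyalty: a past purchase wins by default.
        return order_history[query]
    # Otherwise the top-ranked result wins; everything else stays invisible
    # unless the shopper actively asks for more options.
    return search_results[query][0]

history = {}
results = {"granola bars": ["Brand A", "Brand B", "Brand C"]}

first = suggest_product("granola bars", history, results)  # → "Brand A"
history["granola bars"] = first
# Every later request now simply repeats the first purchase.
print(suggest_product("granola bars", history, results))
```

Notice that "Brand B" and "Brand C" never get a turn: once the first purchase lands in the history, the default loop closes.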

Welcome to the world of incidental loyalty.

Incidental loyalty

When it comes to shopping for products via voice search, second place becomes the first loser.

For commoditized products that rely on low prices instead of true brand loyalty, the impact will be huge. After all, how many granola brands do you really know by heart? The winners will be brands that have carved out unique niches in the market or those whose products have become synonymous with the category itself (like Kleenex or Q-Tips).

Even the name of a company will influence whether you buy its products. Let’s take two common yogurt brands in the US: Yoplait and Fage. While the former has wider name recognition, the latter can be challenging to pronounce. Is it Fay-gee? Fay-ahe? Fa-gee? (It’s actually Fa-yeh.) This makes voice shopping a riskier proposition for the brand, not only because consumers may bungle its name, but also because US-centric voice assistants will have a harder time recognizing the correct pronunciation. In a shopping environment ruled by voice assistants, Fage sales could see a significant dip.


When Your Constitutional Rights Are Violated but You Lose Anyway

The Supreme Court must close an unjust loophole it created, which allows constitutional misconduct to go unpunished.
By Emma Andersson, Senior Staff Attorney, Criminal Law Reform Project
Jul 11 2018

Beginning in 2010, a Connecticut man, Almighty Supreme Born Allah, spent over six months in solitary confinement. He was alone for 23 hours a day, allowed to shower just three times a week in underwear and leg shackles, and permitted only one 30-minute visit each week with a family member, whom he was not allowed to embrace, let alone touch. Studies have shown that this kind of isolation can result in clinical outcomes similar to those of physical torture, which is why numerous international human rights bodies have condemned the prolonged use of solitary confinement.

The twist on this twisted set of punishments?

Allah had not been convicted of a crime when he was put in solitary confinement. He sued, and four federal judges agreed with Allah that this treatment during pretrial detention violated his constitutional rights. And yet, he lost his case because of a rule called qualified immunity that the U.S. Supreme Court created in the 1980s. As William Baude, a constitutional law professor from the University of Chicago, explains, “[t]he doctrine of qualified immunity prevents government agents from being held personally liable for constitutional violations unless the violation was of ‘clearly established’ law.”

To understand how this works you have to start with a federal law called section 1983, which holds state and local government officials liable for money damages in federal court if they have violated constitutional rights. This law has been on the books since 1871, and it was originally enacted to stop law enforcement from ignoring the lynching of newly freed Black citizens.

But while section 1983 was intended to increase accountability for government officials who break the law, the Supreme Court created a giant loophole that undermines that goal, making it virtually impossible for government officials to be held personally liable for wrongdoing. That loophole is qualified immunity, which either the Supreme Court or Congress could fix to ensure constitutional misconduct does not go unpunished.

Since the creation of qualified immunity, the rule has snowballed out of control. As the judges on Allah’s case explained, the rule now allows “all but the plainly incompetent or those who knowingly violate the law” to defeat lawsuits brought by the victims of government overreach. The result, as Justice Sotomayor recently argued in a dissent, is that “palpably unreason­able conduct will go unpunished.”

That’s exactly what happened in Allah’s case.

In 1979, the Supreme Court held in Bell v. Wolfish that under the Constitution’s Due Process Clause, pretrial detainees cannot be punished. Restrictions on a pretrial detainee’s liberty, the court concluded, have to be “reasonably related to a legitimate nonpunitive governmental objective.” If the restriction is “arbitrary or purposeless,” however, “a court may permissibly infer that the purpose of the governmental action is punishment that may not constitutionally be inflicted upon detainees.”

Under Bell, in other words, the government can hold people in jail to make sure they show up to their trial and can also limit exercise time in jail for the sake of keeping order. But the government cannot subject pretrial detainees to harsh conditions just to punish them. And this matters because a staggering 465,000 people are in pretrial detention on any given day in America.

Explaining this rule in Bell, the Supreme Court said:


The inconvenient truth about cancer and mobile phones

We dismiss claims about mobiles being bad for our health – but is that because studies showing a link to cancer have been cast into doubt by the industry?
By Mark Hertsgaard and Mark Dowie
Jul 14 2018

On 28 March this year, the scientific peer review of a landmark United States government study concluded that there is “clear evidence” that radiation from mobile phones causes cancer, specifically, a heart tissue cancer in rats that is too rare to be explained as a random occurrence.

Eleven independent scientists spent three days at Research Triangle Park, North Carolina, discussing the study, which was done by the National Toxicology Program of the US Department of Health and Human Services and ranks among the largest conducted of the health effects of mobile phone radiation. NTP scientists had exposed thousands of rats and mice (whose biological similarities to humans make them useful indicators of human health risks) to doses of radiation equivalent to an average mobile user’s lifetime exposure.

The peer review scientists repeatedly upgraded the confidence levels the NTP’s scientists and staff had attached to the study, fuelling critics’ suspicions that the NTP’s leadership had tried to downplay the findings. The peer review also found “some evidence” – one step below “clear evidence” – of cancer in the brain and adrenal glands.

Not one major news organisation in the US or Europe reported this scientific news. But then, news coverage of mobile phone safety has long reflected the outlook of the wireless industry. For a quarter of a century now, the industry has been orchestrating a global PR campaign aimed at misleading not only journalists, but also consumers and policymakers about the actual science concerning mobile phone radiation. Indeed, big wireless has borrowed the very same strategy and tactics big tobacco and big oil pioneered to deceive the public about the risks of smoking and climate change, respectively. And like their tobacco and oil counterparts, wireless industry CEOs lied to the public even after their own scientists privately warned that their products could be dangerous, especially to children.

Outsiders suspected from the start that George Carlo was a front man for an industry whitewash. Tom Wheeler, the president of the Cellular Telecommunications and Internet Association (CTIA), handpicked Carlo to defuse a public relations crisis that threatened to strangle his infant industry in its crib. This was back in 1993, when there were only six mobile subscriptions for every 100 adults in the United States, but industry executives foresaw a booming future.

Remarkably, mobile phones had been allowed on to the US market a decade earlier without any government safety testing. Now, some customers and industry workers were being diagnosed with cancer. In January 1993, David Reynard sued the NEC America company, claiming that his wife’s NEC phone caused her lethal brain tumour. After Reynard appeared on national television, the story gained ground. A congressional subcommittee announced an investigation; investors began dumping mobile phone stocks and Wheeler and the CTIA swung into action.

A week later, Wheeler announced that his industry would pay for a comprehensive research programme. Mobile phones were already safe, Wheeler told reporters; the new research would simply “revalidate the findings of the existing studies”.

Carlo seemed like a good bet to fulfil Wheeler’s mission. An epidemiologist with a law degree, he had conducted studies for other controversial industries. After a study funded by Dow Corning, Carlo had declared that breast implants posed only minimal health risks. With chemical industry funding, he had concluded that low levels of dioxin, the chemical behind the Agent Orange scandal, were not dangerous. In 1995, Carlo began directing the industry-financed Wireless Technology Research project (WTR), whose eventual budget of $28.5m made it the best-funded investigation of mobile safety to date.