Deported With A Valid U.S. Visa, Jordanian Says Message Is ‘You’re Not Welcome’

Feb 24 2017

Yahya Abu Romman, a 22-year-old languages major, had just graduated from university. To celebrate, he planned a six-week trip to the U.S., where his brother, uncles and aunts and more than a dozen cousins have lived for years.

With good grades, an engaging personality and fluency in three languages — English, Arabic and Spanish — he had worked as a nature conservation ranger while studying, and had his pick of jobs with tour companies in Jordan, a strong U.S. ally.

In 2015, Abu Romman was issued a tourist visa at the U.S. embassy in Amman, good for five years. With money from a graduation present, he bought a round-trip ticket and landed at Chicago’s O’Hare International Airport a few days after the start of President Trump’s travel ban on the citizens of seven predominantly Muslim countries.

That’s where the positive impression of the U.S. he’d inherited from his father came to a screeching halt.

“My dad is a graduate from the University of Illinois,” says Abu Romman. “He always told me America is the land of justice, land of opportunities, of generosity. That there are very kind people. And there are. But I think things have changed.”

Abu Romman is a Jordanian citizen, but born in Syria. He’s been to Syria only once since birth — and being born in an Arab country doesn’t automatically confer citizenship there. Instead, citizenship is generally based on your father’s nationality. Still, Abu Romman couldn’t persuade the border officer at O’Hare that he wasn’t Syrian.

“He said, ‘Sir, if you were born in Syria, you should have a Syrian passport,’ ” says Abu Romman at his family’s home off a winding street in the Jordanian capital. “I said, ‘Why should I have a Syrian passport? My father is Jordanian. My mother is Jordanian. We all are Jordanian, but it happened to be in Syria where I was born.’ He knocked on the glass next to him, to his colleague. He said, ‘We might have a problem with this.’ ”

The questions moved on to the case of Abu Romman’s brother, who had lived illegally in the U.S. and overstayed a visa before becoming a citizen. Then border guards went through Abu Romman’s phone and found emails he’d sent to flight schools in the U.S. and other countries.

Abu Romman says his dream was to learn to fly, and he was simply asking about scholarships. But the officer wasn’t convinced that he wasn’t planning to stay in the U.S.

“He said, ‘Sir, we’re going to be cancelling your visa,’ ” says Abu Romman.

He shows me his U.S. visa with the words “Revoked – cancelled by CBP” (Customs and Border Protection) written across it with a red marker.


No publication without confirmation

Proposing a new kind of paper that combines the flexibility of basic research with the rigour of clinical trials.
By Jeffrey S. Mogil & Malcolm R. Macleod
Feb 22 2017

Concern over the reliability of published biomedical results grows unabated. Frustration with this ‘reproducibility crisis’ is felt by everyone pursuing new disease treatments: from clinicians and would-be drug developers who want solid foundations for the preclinical research they build on, to basic scientists who are forced to devote more time and resources to newly imposed requirements for rigour, reporting and statistics. Tightening rigour across all experiments will decrease the number of false positive findings, but comes with the risk of reducing experimental efficiency and creativity.

Bolder ideas are needed. What we propose here is a compromise between the need to trust conclusions in published papers and the freedom for basic scientists to explore and innovate. Our proposal is a new type of paper for animal studies of disease therapies or preventions: one that incorporates an independent, statistically rigorous confirmation of a researcher’s central hypothesis. We call this large confirmatory study a preclinical trial. These would be more formal and rigorous than the typical preclinical testing conducted in academic labs, and would adopt many practices of a clinical trial.

We believe that this requirement would push researchers to be more sceptical of their own work. Instead of striving to convince reviewers and editors to publish a paper in prestigious outlets, they would be questioning whether their hypotheses could stand up in a large, confirmatory animal study. Such a trial would allow much more flexibility in earlier hypothesis-generating experiments, which would be published in the same paper as the confirmatory study. If the idea catches on, there will be fewer high-profile papers hailing new therapeutic strategies, but much more confidence in their conclusions.

The confirmatory study would have three features. First, it would adhere to the highest levels of rigour in design (such as blinding and randomization), analysis and reporting. Second, it would be held to a higher threshold of statistical significance, such as P < 0.01 instead of the currently standard P < 0.05. Third, it would be performed by an independent laboratory or consortium. This exceeds the requirements currently proposed by various checklists and funders, but would apply only to the final, crucial confirmatory experiment.
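The arithmetic behind this two-stage filter can be sketched with a toy simulation. This is illustrative only: it assumes a true null effect and fully independent stages, so each stage passes a null result with probability equal to its significance threshold, and the thresholds and sample count below are chosen for the sketch, not taken from the proposal.

```python
import random

random.seed(0)

N = 200_000            # hypothetical null effects screened
ALPHA_EXPLORE = 0.05   # conventional threshold for exploratory work
ALPHA_CONFIRM = 0.01   # stricter threshold for the confirmatory "preclinical trial"

# Under a true null hypothesis, p-values are uniform on [0, 1], so each
# independent stage lets a null effect through with probability alpha.
survived_explore = 0
survived_both = 0
for _ in range(N):
    if random.random() < ALPHA_EXPLORE:        # exploratory study looks "significant"
        survived_explore += 1
        if random.random() < ALPHA_CONFIRM:    # independent confirmation also passes
            survived_both += 1

print(f"false positives after exploration only: {survived_explore / N:.4f}")
print(f"false positives after independent confirmation: {survived_both / N:.5f}")
```

Under these assumptions the exploratory stage alone lets roughly 5% of null effects through, while requiring an independent confirmation at P < 0.01 cuts the joint false-positive rate to roughly 0.05 × 0.01, which is the intuition behind relaxing rigour early and concentrating it in one final study.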

Unlike clinical-trial reports, most preclinical research papers describe a long chain of experiments, all incrementally building support for the same hypothesis. Such papers often include more than a dozen separate in vitro and animal experiments, with each one required to reach statistical significance. We argue that, as long as there is a final, impeccable study that confirms the hypothesis, the earlier experiments in this chain do not need to be held to the same rigid statistical standard.

This would represent a big shift in how scientists produce papers, but we think that the integrity of biomedical research could benefit from such radical thinking.


The rise of artificial intelligence is creating new variety in the chip market, and trouble for Intel

The success of Nvidia and its new computing chip signals rapid change in IT architecture
Feb 25 2017

“WE ALMOST went out of business several times.” Usually founders don’t talk about their company’s near-death experiences. But Jen-Hsun Huang, the boss of Nvidia, has no reason to be coy. His firm, which develops microprocessors and related software, is on a winning streak. In the past quarter its revenues increased by 55%, reaching $2.2bn, and in the past 12 months its share price has almost quadrupled.

A big part of Nvidia’s success is that demand is growing quickly for its chips, called graphics processing units (GPUs), which turn personal computers into fast gaming devices. But the GPUs also have new destinations: notably data centres, where artificial-intelligence (AI) programmes gobble up the vast quantities of computing power that these chips provide.

Soaring sales of these chips (see chart) are the clearest sign yet of a secular shift in information technology. The architecture of computing is fragmenting because of the slowing of Moore’s law, which until recently guaranteed that the power of computing would double roughly every two years, and because of the rapid rise of cloud computing and AI. The implications for the semiconductor industry and for Intel, its dominant company, are profound.

Things were straightforward when Moore’s law, named after Gordon Moore, a founder of Intel, was still in full swing. Whether in PCs or in servers (souped-up computers in data centres), one kind of microprocessor, known as a “central processing unit” (CPU), could deal with most “workloads”, as classes of computing tasks are called. Because Intel made the most powerful CPUs, it came to rule not only the market for PC processors (it has a market share of about 80%) but the one for servers, where it has an almost complete monopoly. In 2016 it had revenues of nearly $60bn.

This unipolar world is starting to crumble. Processors are no longer improving quickly enough to be able to handle, for instance, machine learning and other AI applications, which require huge amounts of data and hence consume more number-crunching power than entire data centres did just a few years ago. Intel’s customers, such as Google and Microsoft together with other operators of big data centres, are opting for more and more specialised processors from other companies and are designing their own to boot.

Nvidia’s GPUs are one example. They were created to carry out the massive, complex computations required by interactive video games. GPUs have hundreds of specialised “cores” (the “brains” of a processor), all working in parallel, whereas CPUs have only a few powerful ones that tackle computing tasks sequentially. Nvidia’s latest processors boast 3,584 cores; Intel’s server CPUs have a maximum of 28.

The company’s lucky break came in the midst of one of its near-death experiences during the 2008-09 global financial crisis. It discovered that hedge funds and research institutes were using its chips for new purposes, such as calculating complex investment and climate models. It developed a coding language, called CUDA, that helps its customers program its processors for different tasks. When cloud computing, big data and AI gathered momentum a few years ago, Nvidia’s chips were just what was needed.

Every online giant uses Nvidia GPUs to give its AI services the capability to ingest reams of data from material ranging from medical images to human speech. The firm’s revenues from selling chips to data-centre operators trebled in the past financial year, to $296m.

And GPUs are only one sort of “accelerator”, as such specialised processors are known. The range is expanding as cloud-computing firms mix and match chips to make their operations more efficient and stay ahead of the competition. “Finding the right tool for the right job” is how Urs Hölzle, in charge of technical infrastructure at Google, describes balancing the factors of flexibility, speed and cost.


Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image

Guessing the location of a randomly chosen Street View image is hard, even for well-traveled humans. But Google’s latest artificial-intelligence machine manages it with relative ease.
By Emerging Technology from the arXiv
Feb 24 2017

Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues or is taken indoors or shows a pet or food or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

Their approach is straightforward, at least in the world of machine learning. Weyand and co begin by dividing the world into a grid consisting of over 26,000 squares whose size varies with the number of images taken in each location.

So big cities, which are the subjects of many images, have a more fine-grained grid structure than more remote regions where photographs are less common. Indeed, the Google team ignored areas like oceans and the polar regions, where few photographs have been taken.
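The adaptive partition described above can be sketched as a simple recursive subdivision: split any cell that holds too many photos, and drop cells that hold none. This is a toy stand-in — the actual paper uses Google’s S2 geometry cells rather than a plain latitude–longitude quadtree, and the photo data, thresholds and cluster below are invented for illustration.

```python
import random

def subdivide(photos, lat_min, lat_max, lon_min, lon_max,
              max_photos=50, min_lat_span=1.0):
    """Recursively split a region until each cell holds at most `max_photos`
    geotagged photos, or the cell's latitude span drops to `min_lat_span`
    degrees. Returns a list of (lat_min, lat_max, lon_min, lon_max) cells."""
    inside = [(la, lo) for la, lo in photos
              if lat_min <= la < lat_max and lon_min <= lo < lon_max]
    if not inside:
        return []  # PlaNet-style: drop empty areas such as oceans and the poles
    if len(inside) <= max_photos or (lat_max - lat_min) <= min_lat_span:
        return [(lat_min, lat_max, lon_min, lon_max)]
    mid_lat = (lat_min + lat_max) / 2
    mid_lon = (lon_min + lon_max) / 2
    cells = []
    for la0, la1 in ((lat_min, mid_lat), (mid_lat, lat_max)):
        for lo0, lo1 in ((lon_min, mid_lon), (mid_lon, lon_max)):
            cells += subdivide(inside, la0, la1, lo0, lo1,
                               max_photos, min_lat_span)
    return cells

random.seed(1)
# A dense cluster of photos (a "city") plus sparse photos scattered worldwide
photos = [(random.gauss(48.9, 0.5), random.gauss(2.3, 0.5)) for _ in range(500)]
photos += [(random.uniform(-60, 70), random.uniform(-180, 180)) for _ in range(200)]
cells = subdivide(photos, -90, 90, -180, 180)
print(len(cells), "cells; the dense cluster ends up in much smaller cells")
```

The payoff of this scheme is that geolocation becomes a plain classification problem: the network outputs a probability over a fixed set of cells, with fine-grained cells where data is plentiful and coarse ones where it is sparse.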

Next, the team created a database of geolocated images from the Web and used the location data to determine the grid square in which each image was taken. This data set is huge, consisting of 126 million images along with their accompanying Exif location data.

Weyand and co used 91 million of these images to teach a powerful neural network to work out the grid location using only the image itself. Their idea is to input an image into this neural net and get as the output a particular grid location or a set of likely candidates.

They then validated the neural network using the remaining 34 million images in the data set. Finally they tested the network—which they call PlaNet—in a number of different ways to see how well it works.

The results make for interesting reading. To measure the accuracy of their machine, they fed it 2.3 million geotagged images from Flickr to see whether it could correctly determine their location. “PlaNet is able to localize 3.6 percent of the images at street-level accuracy and 10.1 percent at city-level accuracy,” say Weyand and co. What’s more, the machine determines the country of origin in a further 28.4 percent of the photos and the continent in 48.0 percent of them.

That’s pretty good. But to show just how good, Weyand and co put PlaNet through its paces in a test against 10 well-traveled humans. For the test, they used an online game that presents a player with a random view taken from Google Street View and asks him or her to pinpoint its location on a map of the world.


Will Democracy Survive Big Data and Artificial Intelligence?

We are in the middle of a technological upheaval that will transform the way society is organized. We must make the right decisions now
By Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari, Andrej Zwitter
Feb 25 2017

Editor’s Note: This article first appeared in Spektrum der Wissenschaft, Scientific American’s sister publication, as “Digitale Demokratie statt Datendiktatur.”

“Enlightenment is man’s emergence from his self-imposed immaturity. Immaturity is the inability to use one’s understanding without guidance from another.”

—Immanuel Kant, “What is Enlightenment?” (1784)

The digital revolution is in full swing. How will it change our world? The amount of data we produce doubles every year. In other words: in 2016 we produced as much data as in the entire history of humankind through 2015. Every minute we produce hundreds of thousands of Google searches and Facebook posts. These contain information that reveals how we think and feel. Soon, the things around us, possibly even our clothing, also will be connected with the Internet. It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours. Many companies are already trying to turn this Big Data into Big Money.

Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?

The field of artificial intelligence is, indeed, making breathtaking advances. In particular, it is contributing to the automation of data analysis. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself. Recently, Google’s DeepMind algorithm taught itself how to win 49 Atari games. Algorithms can now recognize handwriting and patterns almost as well as humans and even complete some tasks better than them. They are able to describe the contents of photos and videos. Today 70% of all financial transactions are performed by algorithms. News content is, in part, automatically generated. This all has radical economic consequences: in the coming 10 to 20 years around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.

It can be expected that supercomputers will soon surpass human capabilities in almost all areas—somewhere between 2020 and 2060. Experts are starting to ring alarm bells. Technology visionaries, such as Elon Musk from Tesla Motors, Bill Gates from Microsoft and Apple co-founder Steve Wozniak, are warning that super-intelligence is a serious danger for humanity, possibly even more dangerous than nuclear weapons. Is this alarmism?

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements.

In the 1940s, the American mathematician Norbert Wiener (1894–1964) invented cybernetics. According to him, the behavior of systems could be controlled by means of suitable feedback. Very soon, some researchers imagined controlling the economy and society according to this basic principle, but the necessary technology was not available at that time.

Today, Singapore is seen as a perfect example of a data-controlled society. What started as a program to protect its citizens from terrorism has ended up influencing economic and immigration policy, the property market and school curricula. China is taking a similar route. Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. It involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called “Citizen Score”, which will determine under what conditions they may get loans, jobs, or travel visas to other countries. This kind of individual monitoring would include people’s Internet surfing and the behavior of their social contacts (see “Spotlight on China”).

With consumers facing increasingly frequent credit checks and some online shops experimenting with personalized prices, we are on a similar path in the West. It is also increasingly clear that we are all in the focus of institutional surveillance. This was revealed in 2015 when details of the British secret service’s “Karma Police” program became public, showing the comprehensive screening of everyone’s Internet use. Is Big Brother now becoming a reality?

Programmed society, programmed citizens


A warning from Bill Gates, Elon Musk, and Stephen Hawking


By Quincy Larson
Feb 19 2017

“The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.” — Stephen Hawking
There’s a rising chorus of concern about how quickly robots are taking away human jobs.

Here’s Elon Musk on Thursday at the World Government Summit in Dubai:

“What to do about mass unemployment? This is going to be a massive social challenge. There will be fewer and fewer jobs that a robot cannot do better [than a human]. These are not things that I wish will happen. These are simply things that I think probably will happen.” — Elon Musk
And today Bill Gates proposed that governments start taxing robot workers the same way we tax human workers:

“You cross the threshold of job-replacement of certain activities all sort of at once. So, you know, warehouse work, driving, room cleanup, there’s quite a few things that are meaningful job categories that, certainly in the next 20 years [will go away].” — Bill Gates
Jobs are vanishing much faster than anyone ever imagined.

In 2013, policy makers largely ignored two Oxford economists who suggested that 45% of all US jobs could be automated away within the next 20 years. But today that sounds all but inevitable.

Transportation and warehousing employ 5 million Americans

Those self-driving cars you keep hearing about are about to replace a lot of human workers.

Currently in the US, there are:

• 600,000 Uber drivers
• 181,000 taxi drivers
• 168,000 transit bus drivers
• 505,000 school bus drivers

There are also around 1 million truck drivers in the US. And Uber just bought a self-driving truck company.

As self-driving cars become legal in more states, we’ll see a rapid automation of all of these driving jobs. If a one-time $30,000 truck retrofit can replace a $40,000 per year human trucker, there will soon be a million truckers out of work.

And it’s not just the drivers being replaced. Soon entire warehouses will be fully automated.

I strongly recommend you invest 3 minutes in watching this video. It shows how a fleet of small robots can replace a huge number of human warehouse workers.

There are still some humans working in those warehouses, but it’s only a matter of time before some sort of automated system replaces them, too.

8 million Americans work as retail salespeople and cashiers.

Many of these jobs will soon be automated away.

Amazon is testing a type of store with virtually no employees. You just walk in, grab what you want, and walk out.


Most children sleep through smoke alarms, investigator warns

Researchers call for alarms with lower tones combined with a woman’s voice as they look for families to take part in study
By Damien Gayle
Feb 23 2017

Children are at risk of dying in house fires because they often remain asleep when smoke alarms sound, say researchers.

They are calling for high-pitched buzzers to be replaced with lower tones combined with a woman’s voice.

More than 500 volunteer families are being sought across the UK to join a study testing new fire alarm sounds after initial research showed that more than 80% of children aged between two and 13 did not respond to a traditional alarm when it was sounding.

Dave Coss, a fire investigator and watch commander at Derbyshire fire and rescue service, who is carrying out the study with scientists at the University of Dundee, told BBC Radio 4’s Today programme: “The immediate thing we are saying to people is that if your alarms do go off then obviously you need to go and fetch your children to make sure that they wake up.

“In the long term, what we are looking at here is a sound – and I think I need to stress the fact that it’s a sound and not a new detector – which we could have or could be adapted in the children’s bedroom so that if the smoke alarms do go off this sound would wake the children and give them those extra vital seconds to escape.”

Coss began his research after six children died in Derby in a house fire started by their parents. Mick Philpott was jailed for life with a minimum of 15 years after being convicted of manslaughter, and his wife Mairead was handed a 17-year sentence. The youngsters, aged between five and 13, who died from the effects of smoke, were asleep upstairs when the blaze broke out at the house in the early hours.

Coss, who investigated that fire, said: “One of the problems we had to solve from an investigation point of view was why all the children were [found in] their beds, even though the smoke detector had sounded that time. Initially we thought there might be other reasons. So obviously medical reasons were explored, toxicology tests were carried out, just to make sure there was nothing else, and the smoke alarm not waking them up was the only real solution that we could find.”

The suspicions were borne out by research Coss carried out with Professor Niamh NicDaeid, of Dundee’s centre for anatomy and human identification, which repeatedly exposed sleeping children to the sound of industry-standard smoke detectors inside their homes. More than 80% of the 34 children aged between two and 13 did not respond to the alarm. Only two children woke up every time and none of the 14 boys woke up at all.

“When we started to explore why this was happening and we looked at other types of frequencies of sound we found out that a lower frequency sound … combined with a voice – generally a female voice – was much more effective at waking children up, and in actual fact woke up 94% of children that we tested,” NicDaeid told Today.