Uber Says Customers Give Up Right to Sue When They Agree to Use Service

A US appeals court in New York on Friday weighed arguments over whether Uber customers gave up their right to sue the company when they registered for its popular taxi-hailing service.

The case could have wider implications for Internet businesses, which often require customers to agree to bring disputes through private arbitration as part of long lists of terms and conditions when they register for services.

Theodore Boutrous, arguing for Uber, urged the three-judge 2nd US Circuit Court of Appeals panel to send a class action lawsuit by Connecticut Uber passenger Spencer Meyer over the company’s pricing practices to arbitration, which US District Judge Jed Rakoff refused to do last year.

When users register for Uber on their smartphones, Boutrous said, they are told on the registration screen that by registering, they are agreeing to terms and conditions. Boutrous said a typical smartphone user “can’t miss” the notice, and can easily read the terms and conditions by touching a link.

Jeffrey Wadsworth, arguing for Meyer, said it was not reasonable to expect customers to know they were giving up their right to sue when they agreed to standard terms and conditions from an Internet-based service.

“To register means to put your name on an official list,” he said. “It does not mean you’re engaging in some complex contractual transaction.”

However, Circuit Judges Susan Carney and Reena Raggi both pointed out that providing credit card information, as Uber users do when they sign up, goes beyond merely registering.

In a short response, Boutrous said that no other court had ruled the way Rakoff did when faced with a registration agreement similar to Uber’s.

He also said small differences in the way registration screens are set up should not make a difference.

“We can’t have district judges going on immaterial distinctions here,” he said. “It’s on the screen, right in front of the individual.”

Meyer’s lawsuit, filed in 2015, claims that Uber’s practice of “surge pricing” – raising prices when demand spikes at a particular time and place – violates federal antitrust laws.

In his opinion refusing to send the case to arbitration, Rakoff took broad aim at online businesses’ practice of including arbitration agreements in their terms and conditions, saying it threatened consumers’ right to jury trials.

“This most precious and fundamental right can be waived only if the waiver is knowing and voluntary,” he said.

Google smooths Pixel audio issues with latest security patch, but it may limit the volume

If you’re one of the unlucky Pixel owners with audio distortion problems, relief may be on the way. Along with the usual vulnerability plugs, this month’s security patch includes a fix for the widespread issue, and many users have reported that the issue has indeed been cleared up.

First brought to light on Reddit and the Google+ Pixel User Community, the problem affected both models of the Pixel and mostly manifested at high volumes. Users complained of cracking and popping sounds when listening via any audio source, and the distortion was evident when using Google’s apps or third-party ones. Several users who received replacement Pixels encountered similar issues with their new phones, and on Jan. 17, Pixel community manager Orrin informed users that “this is a software issue that we are working to resolve in an upcoming update,” and suggested “to not play your device at max volume.”

According to several reports, it appears that Google has made good on its promise, but your phone might not play as loudly as it did before. Many affected Pixel users are now reporting that after installing the February OTA security patch, audio plays clearly, though maximum volume has been diminished somewhat. As Google+ user Francesco Chirico reported, “The issue is solved in my case with February security patch. I’m not sure that volume level is as high as before but in any case it seems a good volume level.”

However, other users say the hissing and popping persists even after the update. Additionally, the issue has not been cleared up for people running the 7.1.2 beta, though the next update will presumably include the same fix. To check to see if the patch has been installed, go to Settings, scroll down to About phone, and tap System updates.

I haven’t experienced this issue with my Pixel phone, but it’s good to see Google giving it some attention. It’s taken a bit longer than some users would have liked, but it appears as though Google has isolated the issue and fixed it for the vast majority of users. It would be nice if the volume level could return at some point, but most users seem content with the tradeoff.

This story, “Google smooths Pixel audio issues with latest security patch, but it may limit the volume,” was originally published by Greenbot.

ISIS propaganda collected in real time

University of Exeter experts will collect large amounts of propaganda put on the internet by Islamic State terrorists in real time to understand how it radicalises people.

The group is well-known for its use of social media to elicit fear and communicate and promote its ideology. Academics will harvest and analyse this content, and use this huge amount of information to understand more about the themes, issues and claims made by ISIS.

It is hoped the findings will strengthen the capabilities of UK intelligence services to combat propaganda initiatives of violent organisations.

Researchers involved in the study will conduct a large-scale, computer-assisted analysis of video and text. They hope to identify how ISIS’ online propaganda encourages individuals to commit to political extremism and violence.

Analysing the propaganda will allow academics to evaluate how ISIS’ online content makes use of polarizing language known to foster intergroup conflict. The academics will examine the language used and the structure of the propaganda to give a clear picture of the arguments made by ISIS in support of terrorism.
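The article does not describe the researchers’ actual methods, but dictionary-based content analysis of the kind alluded to above can be sketched very simply: score each document by how often it uses terms from a lexicon of polarizing in-group/out-group language. The word list below is a purely illustrative placeholder, not the researchers’ lexicon.

```python
import re

# Illustrative placeholder lexicon -- NOT the study's actual word list.
POLARIZING_TERMS = {"enemy", "traitor", "infidel", "crusader", "apostate"}

def polarization_rate(text):
    """Return the fraction of tokens that appear in the polarizing-term lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in POLARIZING_TERMS)
    return hits / len(tokens)
```

At scale, rates like this can be computed per document and compared across sources or over time, which is one common way computer-assisted analysis surfaces rhetorical patterns in large text corpora.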

Their findings will be shared with policymakers. The study is led by Stephane Baele and Travis Coan from the Department of Politics, and Katharine Boyd from the Department of Sociology.

Dr Baele said: “We are thrilled that this CREST grant allows us to examine ISIS’ online propaganda. We are certainly not the first to work on this crucial issue, but our research has two unique aspects that will significantly enhance our understanding of this complex phenomenon.

“We will make use of powerful, computational techniques to detect, gather, and analyse this propaganda. We will also use our knowledge of cognition and perception to make sense of the data we collected. By combining rigorous methods and in-depth explanations, we ultimately hope to contribute to ongoing and future efforts to stop the appeal of violent organisations.”

This is one of a range of projects set up to address some of the security threats facing the UK funded by the Centre for Research and Evidence on Security Threats (CREST), which is led by Lancaster University.

Director of CREST, Professor Paul Taylor, said: “We were delighted with the outstanding response to our call. Standing out against stiff competition, the successful projects promise innovation, rigour, and results that will make a difference to how we understand and counter security threats. I am looking forward to working with them.”

The secret army of cheerleaders policing China’s internet

If you ever wanted an illustration of why academic research is not just important but vital, then the work of Gary King, professor of sociology at Harvard, could serve as exhibit A. Why? Well, one of the more pressing strategic issues that faces western governments is how to adjust to the emergence of China as a new global superpower. The first requirement for intelligent reorientation is a rounded understanding of this new reality. And while it may be that in the foreign offices and chancelleries of the west officials and policy makers are busily boning up on Chinese industrial and geopolitical strategy (what the hell are they up to in the South China Sea, for example?), I see little evidence that anyone in government has been paying attention to how the Beijing regime seems to have solved a problem that no other government has cracked: namely, how to control, manage and harness the internet for its own purposes.

Strangely, our rulers still seem blissfully unaware of this, which is odd because – as I pointed out ages ago – there’s no longer any excuse for ignorance: Professor King has done most of the heavy lifting required. In a landmark study published in Science in 2014, for example, he and his colleagues reported on a remarkable, fine-grained investigation that they conducted into how the Chinese regime controls the network.

What the research showed was a degree of subtlety and sophistication undreamed of in western coverage of Chinese online censorship. In essence, King et al suggested that almost everything we think we know about the Chinese internet is wrong. For one thing, its users do not cower nervously behind the “great firewall”. On the contrary: online debate and discourse in China is as raucous, untamed and virulent as it is here. And yet the government devotes massive resources (200,000-plus people) to watching and censoring the network. So what are they doing? Answer: censoring some predictable stuff (pornography, Falun Gong, Tiananmen, etc); but much of what we would regard as “political” discourse (criticism of local communist party officials, for example) remains apparently unrestricted. There is, however, one type of discourse that is ruthlessly and efficiently suppressed: any kind of social media post that could conceivably lead to collective mobilisation – to people on the streets. And this applies even to posts that are favourable to the government!

What emerged from Professor King’s first foray into Chinese cyberspace was an image of a political regime that had a more nuanced, insightful approach to managing the internet than most of us had assumed. This may be because the regime is, by western standards, overwhelmingly technocratic: something like 80% of the country’s ruling elite have engineering backgrounds. They know that the internet is essential for a modernising economy and they also appreciate that it provides the citizenry with a safety valve – one that also serves as a feedback loop that highlights potential trouble spots (local corruption, for example). But, most of all, they know that it is essential to keep people off the streets, which is why they censor it as they do.

There was, however, one bit of the jigsaw missing: the way the regime harnesses the internet to get its message(s) across. It has long been suspected that large numbers of citizens (up to 2 million) were paid the equivalent of 50 cents per post to insert pseudonymous content into the torrent of real social media posts in which they argue with critics of the regime.

The problem was that there was little empirical evidence for this suspicion or – more importantly – for the regime’s strategic objectives in employing 50 cent bloggers. Professor King and his colleagues have now filled in this blank with the publication of another remarkable paper. The researchers were able to identify the secretive authors of many of these posts and to estimate their volume (488m a year).

However, the most interesting finding is that the phoney posters avoid arguing with sceptics and critics, and indeed avoid discussing controversial topics altogether. So what are they up to, then? Mostly, it seems, “cheerleading for the state, symbols of the regime, or the revolutionary history of the Communist party”. In other words, trying to swamp social media with happy-clappy stuff and thereby dilute conversations about grievances, state shortcomings and other tricky topics. Professor King calls it “strategic distraction”, but really it’s the political equivalent of the LOLcats that keep western youth anaesthetised and off the streets.

King’s other discovery is more mundane. It turns out that the 50 cents angle may be a myth. Most of the phoneys seem to be government employees who contribute part-time outside of their normal jobs and not ordinary citizens doing piecework. Jobsworths, not stooges, in other words. Another beautiful theory vaporised by a banal fact. That’s research for you.

CIA ex-boss: secretive spooks tolerated in UK more than in US

British people are not demanding more transparency from the intelligence services as loudly as Americans, the former director of the US National Security Agency (NSA) and CIA has said.

Michael Hayden played a pivotal, leading role in American intelligence until he was replaced as director of the CIA shortly into the presidency of Barack Obama.

In a wide-ranging talk on the fourth day of the Hay festival, Hayden addressed CIA torture, targeted killings, what he thinks about Edward Snowden and how Facebook is perhaps a greater threat to privacy than government.

Hayden said the security services were changing faster in the US than in the UK. “You as a population are far more tolerant of aggressive action on the part of your intelligence services than we are in the United States,” he said.

The US intelligence services would not have validation from the American people unless there was a certain amount of knowledge, an increased transparency, he said.

Hayden talked about the tensions between the need to know and the need to protect.

In his newly published book Hayden calls Snowden naive and narcissistic and says he wanted to put him on a “kill list”.

On the next page he said Snowden “highlighted the need for a broad cultural shift” in terms of transparency and what constitutes consent. On Sunday he said there was no contradiction between the two assertions.

“The 2% of what Snowden revealed that had to do with privacy accelerated a necessary conversation. The other 98% was about how the US and foreign governments collected legitimate material … that was incredibly damaging.”

The privacy revelations quickened a conversation which had “hit the beach” in the US but it “has not hit the beach here in Great Britain”.

Hayden was asked about how much information we give to social media companies and whether the public is naive in trusting Mark Zuckerberg and Facebook more than the NSA.

“I have my views on that,” he joked. “Your habits are all geared to protecting privacy against the government because that was always the traditional threat. That is no longer the pattern, it is the private sector … we are going through a cultural adjustment.

“With regard to the 21st-century definition of reasonable privacy, Mark Zuckerberg is probably going to have a greater influence on that than your or my government because of the rules we will embed inside his Facebook applications.”

On “enhanced interrogation techniques” or torture – which could include waterboarding – Hayden said he personally authorised it only once and it did not, he admitted, work.

But he added the “suite” of usable techniques had been reduced from 13 to six and the interrogator believed he would have got information if that had not been the case. “Was it doomed to failure or was it a failure because we did not do enough?”

Targeted killings were justified, Hayden said, because the US believed it was at war. The UK, he said, referring to the killing of “Jihadi John”, has now “joined the queue”.

Hayden said he believed Islam was going through an internal crisis like the one Christianity went through in the 17th century. “We are not the target, we are collateral damage. What has happened in Paris, in Brussels … is spillage.”

Hayden also touched on Donald Trump, whose pronouncements, he said, had damaged US security.

“The jihadist narrative is that there is undying enmity between Islam and the modern world so when Trump says they all hate us, he’s using their narrative … he’s feeding their recruitment video.”

Blocking ‘fake engagement’ to keep the count honest


When you see that a YouTube video has “16,685 views,” take that with a grain of salt. Not all of those views may have been by human beings.

There are services that will, for a fee, spam a social media site with computer-generated views, likes, comments and other actions to boost a posting’s apparent popularity and draw more attention. Videos with a lot of views, for example, will be featured on YouTube’s opening page.

“Bad actors have been trying to game the system,” said Yixuan Li, a graduate student working with John Hopcroft, the IBM Professor of Engineering and Applied Mathematics in the Department of Computer Science. The problem is not limited to YouTube, the researchers pointed out, noting “Twitter followers, Amazon reviews and Facebook likes are all buyable by the thousand.”

In “a world that counts,” the researchers said, the count should reflect genuine interest.

The good news is that Li, Hopcroft and colleagues at Google have developed a way to recognize and block this “fake social engagement.” Li began the project while interning at Google, and the system is now coming into use on their sites, he said. Li described the system, called “LEAS” (Local Expansion At Scale), in a paper presented at the 25th International World Wide Web Conference held April 11 to 15 in Montreal.

A tipoff, Li explained, is that the accounts posting the fake hits are in “lockstep,” posting to the same video targets around the same times. LEAS creates a map – officially known as an “engagement relationship graph” – of accounts, with links between them reflecting behavioral similarity over time. It learns by looking at known spamming accounts (called “seeds”), then searches the engagement graph for sets of accounts similar to the seeds performing orchestrated actions that have very low likelihood of happening spontaneously. It works best, the researchers said, to focus on small “local” sections of the graph.
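The lockstep idea described above can be illustrated with a minimal sketch: link accounts that engage the same targets within a short time window, then expand outward from known spam seeds. This is a toy illustration of the general approach, not Google’s actual LEAS implementation; the thresholds and data layout are assumptions.

```python
from collections import defaultdict

def engagement_similarity(posts_a, posts_b, window=300):
    """Count engagements by account A on the same target within
    `window` seconds of an engagement by account B (lockstep signal).
    Each post is a (target, timestamp_seconds) pair."""
    by_target = defaultdict(list)
    for target, ts in posts_b:
        by_target[target].append(ts)
    hits = 0
    for target, ts in posts_a:
        if any(abs(ts - other) <= window for other in by_target[target]):
            hits += 1
    return hits

def expand_from_seeds(accounts, seeds, threshold=3):
    """Local expansion: flag any account whose lockstep similarity to
    a known spam seed meets the threshold. `accounts` maps account
    name -> list of (target, timestamp) posts."""
    flagged = set(seeds)
    for name, posts in accounts.items():
        if name in flagged:
            continue
        if any(engagement_similarity(posts, accounts[s]) >= threshold
               for s in seeds):
            flagged.add(name)
    return flagged
```

For example, an account that hits the same three videos as a seed within minutes of it gets flagged, while an ordinary viewer who watched one of those videos hours later does not. The real system reportedly works on a much larger graph and restricts the expansion to small local neighborhoods for scalability.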

To evaluate the system, humans manually reviewed postings from accounts LEAS had identified as spammers on YouTube. Some of those accounts had been created very recently but had run up a long list of postings. The comments they posted often amounted to just “good video,” “Yeah,” “Cool” and other all-purpose bits of text, and identical comments had been posted to several videos. Some of the comments included malicious web links and advertisements.

Linksys bucks trend, will support open source firmware on WRT routers

We’ve previously covered how some router companies are planning to kill their support for open-source firmware updates after June 2. But one company, Linksys, has explicitly stepped forward to guarantee that some of its devices will remain open-source compatible. The June 2 date is from the FCC, which has mandated that router manufacturers prevent third-party firmware loading, in order to ensure that devices cannot be configured to operate in bands that interfere with Doppler weather radar stations.

According to the FCC’s regulations and statements, open source firmware isn’t banned — it just has to be prevented from adjusting frequencies into ranges that conflict with other hardware. The problem is, this is considerably more difficult than just banning open source firmware altogether, which is why some companies have gone the lockdown route. Linksys won’t be retaining firmware compatibility on all its products, but the existing WRT line will remain compatible. Starting on June 2, new routers will store their RF data in a different location from the rest of the data on the router.

The router that started it all

“They’re named WRT… it’s almost our responsibility to the open source community,” Linksys router product manager Vince La Duca told Ars. WRT is a naming convention that dates back more than a decade to 2005’s WRT54G. That router was the first product supported by third-party firmware after Linksys was forced to release the source code for the device under the terms of the General Public License (GPL). If you feel like taking a walk down memory lane, this writeup from 2005 examines why third-party firmware became popular for the WRT54G.

That said, we’re definitely seeing open-source firmware support being used as a marketing strategy. Linksys will lock down all devices that aren’t specifically marketed as supporting open-source firmware. If sales of WRT devices spike as a result, other companies will almost certainly invest in creating support of their own. While this would practically fill the niche for open-source compatible devices, it’ll come at the cost of part of what made these devices popular. Until now, projects like DD-WRT or OpenWRT were ways of getting the performance and features of a much more expensive router baked into much cheaper products.

It’s not clear what other manufacturers will do. Making WRT continue to work under the FCC’s guidelines required a three-way collaboration between Marvell, Linksys, and OpenWRT authors, as Ars Technica details. Most companies apparently weren’t prepared to make this kind of transition. It’s not clear when they’ll respond or how enthusiastic they’ll be about making changes to existing products.