Facebook can now sync your Instagram contacts to Messenger

Facebook wants to expand your Messenger contact list with a little help from Instagram. The company has launched a feature in Messenger that pulls in your contacts from Instagram, if you opt to connect your account. The option appears in Messenger’s “People” tab, alongside the existing option to sync your phone’s contacts with Messenger.

The feature was first spotted by Jane Manchun Wong, who posted a screenshot to Twitter.

Others outside the U.S. noticed the option as well.

We also found the option enabled in our own Messenger app, and Facebook has now confirmed to us that it’s a full public launch.

When you tap “Connect Instagram,” Messenger adds your contacts from Instagram automatically. In addition, your Instagram username and account then become visible to other people on Messenger.

The result is an expanded social graph of sorts — one that combines the friends and family you know from Facebook with those you know from Instagram.

Not everyone is thrilled with the feature, however.

As one Twitter user pointed out, it’s not clear that tapping “Connect Instagram” (the button title shown to some users) means Messenger will automatically add your Instagram contacts to Messenger. You’d expect to be given a choice as to whether you want to add them, but that’s not the case.

In December 2017, TechCrunch spotted a very similar option to sync Instagram contacts to Messenger in the same People section. However, the option never launched to the public and later disappeared. But the recent re-emergence of the feature is not a continued test — it’s now rolled out, Facebook says.

This is not the first time Facebook has added integrations between its apps.

For example, in 2016 it gave businesses access to a unified inbox of conversations from across its platforms, including Facebook, Instagram and Messenger. Last year, it also tested a cross-app notification feature. There’s even an option to launch Facebook right in Instagram itself, via an icon on your Instagram profile page.

The timing of the launch is notable, given that Instagram’s own Direct messaging service has become a popular communications channel in its own right.

Instagram Direct as of April 2017 had 375 million users, and was spun off into its own standalone app last year in select countries outside the U.S. With so many users now messaging through Facebook-owned Instagram, it’s clear that Facebook wants to capitalize on that activity to grow its own Messenger app, too.

from Social – TechCrunch https://techcrunch.com/2018/07/18/facebook-can-now-sync-your-instagram-contacts-to-messenger/

Reddit expands chat rooms to more subreddits

If you’d rather spend time chatting with strangers who share a hyper-specific interest than keep up with your coworkers’ stale memes on Slack, Reddit is ready for you. The platform has quietly been working on a chat room feature for months, and today it expands beyond its early days as a very limited closed beta.

Plenty of subreddits already make use of a chat room feature, but these live outside of Reddit, usually on Slack or Discord. Given that, it makes sense for Reddit to lure those users back to engaging on Reddit itself by offering its own chat feature.

I spent a little time hanging out in the /r/bjj (Brazilian jiu-jitsu) chat as well as the psychedelics chat affiliated with r/weed to see how things went across the spectrum, and it was pretty chill — mostly people asking for general advice or seeking answers to specific questions. In a Reddit chat linked to the r/community_chat subreddit — the hub for the new chat feature — redditors discussed whether the rooms would lead to more or less harassment and whether the team should add upvotes, downvotes and karma to chat to make it more like Reddit’s normal threads.

Of course, what I saw is probably a far cry from what chat will look like if and when some of Reddit’s more inflammatory subreddits get their hands on the new feature. We’ve reached out to Reddit to ask whether all subreddits, even the ones hidden behind content warnings, will be offered the new chat functionality.

Chat rooms are meant as a supplement to already active subreddits, not a standalone community, so it’s basically like watching a Reddit thread unfold in real time. On the Reddit blog, u/thunderemoji writes about why Reddit is optimistic that chat rooms won’t just be another trolling tool:

“I was initially afraid that most people would bring out the pitchforks and… unkind words. I was pleasantly surprised to find that most people are actually quite nice. The nature of real-time, direct chat seems to be especially disarming. Even when people initially lash out in frustration or to troll, I found that if you talk to them and show them you’re a regular human like them, they almost always chill out.

“Beyond just chilling out, people who are initially harsh or skeptical of new things will actually often change their minds. Sometimes they get so excited that they start to show up in unexpected places defending the thing they once strongly opposed in a way that feels more authentic than anything I could say.”

While a few qualitative experiences can only go so far to allay fears, Reddit’s chat does have a few things going for it. For one, moderators are the ones who add chat rooms: if a subreddit’s mods don’t think they can handle the additional moderation, they don’t have to activate the feature. (A Wired piece on the thinking behind chat explores some of these issues in more depth.)

In the same post, u/thunderemoji adds that Reddit “made moderation features a major priority for our roadmap early in the process” so that mods would have plenty of tools at their disposal. Those tools include an opt-in process, auto-banning users from chat who are banned from a subreddit, “kick” tools that suspend a user for 1 minute, 1 hour, 1 day or 3 days, the ability to lock a room and freeze all activity, rate limits and more.
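
To make that concrete, here’s a minimal sketch of how such a moderation toolkit might be modeled. All of the names here are hypothetical illustrations of the features described above, not Reddit’s actual API:

```python
from datetime import timedelta

# The supported "kick" durations described in the post:
# 1 minute, 1 hour, 1 day or 3 days.
KICK_DURATIONS = (
    timedelta(minutes=1),
    timedelta(hours=1),
    timedelta(days=1),
    timedelta(days=3),
)

class ChatRoom:
    """Hypothetical model of a subreddit chat room's moderation state."""

    def __init__(self, subreddit, opted_in=False):
        self.subreddit = subreddit
        self.opted_in = opted_in   # mods must opt in before chat goes live
        self.locked = False        # locking a room freezes all activity
        self.banned = set()        # subreddit bans carry over to chat automatically
        self.kicked = {}           # user -> duration of temporary suspension

    def kick(self, user, duration):
        # Suspend a user from chat for one of the supported durations.
        if duration not in KICK_DURATIONS:
            raise ValueError("kick must be 1 minute, 1 hour, 1 day or 3 days")
        self.kicked[user] = duration

    def can_post(self, user):
        # A user can post only in an opted-in, unlocked room they aren't
        # banned or temporarily kicked from.
        return (self.opted_in and not self.locked
                and user not in self.banned and user not in self.kicked)
```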

To sign up for chat rooms (mods can add as many as they’d like once approved), a subreddit’s moderators can add their name to a list that lives here. To find chat rooms to explore, you can check for a link on subreddits you already visit, poke around the sidebar in this post by Reddit’s product team or check out /r/SubChats, a dedicated new subreddit collecting active chat rooms that accompany interest and community-specific subreddits.

from Social – TechCrunch https://techcrunch.com/2018/07/18/reddit-chat-rooms/

Undercover report shows the Facebook moderation sausage being made

An undercover reporter with the UK’s Channel 4 visited a content moderation outsourcing firm in Dublin and came away rather discouraged by what they saw: queues of flagged content waiting, videos of kids fighting staying online, orders from above not to take action on underage users. It sounds bad, but the truth is there are pretty good reasons for most of it, and in the end the report comes off as rather naive.

Not that it’s a bad thing for journalists to keep big companies (and their small contractors) honest, but the situations called out by Channel 4’s reporter seem to reflect a misunderstanding of the moderation process rather than problems with the process itself. I’m not a big Facebook fan, but in the matter of moderation I think they are sincere, if hugely unprepared.

The bullet points raised by the report are all addressed in a letter from Facebook to the filmmakers. The company points out that some content needs to be left up because, abhorrent as it is, it isn’t in violation of the company’s stated standards and may be informative; underage users and content have some special requirements but in other ways can’t be assumed to be real; popular pages do need to exist on different terms than small ones, whether they’re radical partisans or celebrities (or both); hate speech is a delicate and complex matter that often needs to be reviewed multiple times; and so on.

The biggest problem doesn’t at all seem to be negligence by Facebook: there are reasons for everything, and as is often the case with moderation, those reasons are often unsatisfying but effective compromises. The problem is that the company has dragged its feet for years on taking responsibility for content and as such its moderation resources are simply overtaxed. The volume of content flagged by both automated processes and users is immense and Facebook hasn’t staffed up. Why do you think it’s outsourcing the work?

By the way, did you know that this is a horrible job?

Facebook says in a blog post that it is working on doubling its “safety and security” staff to 20,000, of which 6,500 will be on moderation duty. I’ve asked what the current number is, and whether that includes people at companies like this one (which has about 650 reviewers), and will update if I hear back.

Even with a staff of thousands the judgments that need to be made are often so subjective, and the volume of content so great, that there will always be backlogs and mistakes. It doesn’t mean anyone should be let off the hook, but it doesn’t necessarily indicate a systematic failure other than, perhaps, a lack of labor.

If people want Facebook to be effectively moderated they may need to accept that the process will be done by thousands of humans who imperfectly execute the task. Automated processes are useful but no replacement for the real thing. The result is a huge international group of moderators, overworked and cynical by profession, doing a messy and at times inadequate job of it.

from Social – TechCrunch https://techcrunch.com/2018/07/17/undercover-report-shows-the-facebook-moderation-sausage-being-made/

Twitter is holding off on fixing verification policy to focus on election integrity

Twitter is pausing its work on overhauling the verification process that provides a blue checkmark to public figures so it can focus on election integrity, product lead Kayvon Beykpour tweeted today. As we approach another election season, “updating our verification program isn’t a top priority for us right now (election integrity is),” he wrote on Twitter this afternoon.

Last November, Twitter paused its account verifications as it tried to figure out a way to address confusion around what it means to be verified. That decision came shortly after people criticized Twitter for having verified the account of Jason Kessler, the person who organized the deadly white supremacist rally in Charlottesville, Virginia.

Fast forward to today, and Twitter still verifies accounts “ad hoc when we think it serves the public conversation & is in line with our policy,” Beykpour wrote. “But this has led to frustration b/c our process remains opaque & inconsistent with our intented [sic] pause.”

While Twitter recognizes its job isn’t done, the company is not prioritizing the work at this time — at least for the next few weeks, he said. In an email addressed to Twitter’s health leadership team last week, Beykpour said his team simply doesn’t have the bandwidth to focus on verification “without coming at the cost of other priorities and distracting the team.”

The highest priority, Beykpour said, is election integrity. Specifically, Twitter’s team will be looking at the product “with a specific lens towards the upcoming elections and some of the ‘election integrity’ workstreams we’ve discussed.”

Once that’s done “after ~4 weeks,” he said, the product team will be in a better place to address verification.

from Social – TechCrunch https://techcrunch.com/2018/07/17/twitter-is-holding-off-on-fixing-verification-policy-to-focus-on-election-integrity/

Instagram is building non-SMS 2-factor auth to thwart SIM hackers

Hackers can steal your phone number by reassigning it to a different SIM card, use it to reset your passwords, steal your Instagram and other accounts, and sell them for Bitcoin. As detailed in a harrowing Motherboard article today, Instagram accounts are especially vulnerable because the app only offers two-factor authentication through SMS that delivers a password reset or login code via text message.

But now Instagram has confirmed to TechCrunch that it’s building a non-SMS two-factor authentication system that works with security apps like Google Authenticator or Duo. These apps generate a special code needed to log in, and that code can’t be generated on a different phone even if your number is ported to a hacker’s SIM card.
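
For background, authenticator apps like these implement time-based one-time passwords (TOTP, RFC 6238): the app and the service derive matching short-lived codes from a secret shared once at setup, so nothing ever travels over SMS. A minimal sketch in Python, using only the standard library (the secret below is a placeholder, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends on a secret stored on the device rather than on the phone number, porting your number to a new SIM gets an attacker nothing.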

Buried in the Instagram Android app’s APK code is a prototype of the upgraded 2FA feature, discovered by frequent TechCrunch tipster Jane Manchun Wong. Her work has led to confirmed TechCrunch scoops on Instagram Video Calling, Usage Insights, soundtracks for Stories, and more.

When presented with the screenshots, an Instagram spokesperson told TechCrunch that yes, it is working on the non-SMS 2FA feature, saying “We’re continuing to improve the security of Instagram accounts, including strengthening 2-factor authentication.”

Instagram actually lacked any two-factor protection until 2016, when it already had 400 million users. In November 2015, I wrote a story titled “Seriously. Instagram needs two-factor authentication.” A friend, star Instagram stop-motion animation creator Rachel Ryle, had been hacked, costing her a lucrative sponsorship deal. The company listened. Three months later, the app began rolling out basic SMS-based 2FA.

But since then, SIM porting has become a much more common problem. Hackers typically call a mobile carrier and use social engineering tactics to convince them they’re you, or bribe an employee to help, and then change your number to a SIM card they control. Whether they’re hoping to steal intimate photos, empty cryptocurrency wallets or sell desirable social media handles like @t or @Rainbow, as Motherboard reported, there are plenty of incentives to try a SIM porting attack. This article outlines how you can take steps to protect your phone number.

Hopefully, as knowledge of this hacking technique becomes more widespread, more apps will introduce non-SMS 2FA, mobile providers will make it tougher to port numbers, and users will take more steps to safeguard their accounts. As our identities and assets increasingly go digital, it’s PIN codes and authenticator apps, not just deadbolts and home security systems, that must become a part of our everyday lives.

from Social – TechCrunch https://techcrunch.com/2018/07/17/instagram-2-factor/

Dems and GOP unite, slamming Facebook for allowing violent Pages

In a rare moment of agreement, members of the House Judiciary Committee from both major political parties agreed that Facebook needed to take down Pages that bullied shooting survivors or called for more violence. The hearing regarding social media filtering practices saw policy staffers from Facebook, Google and Twitter answering questions, though Facebook absorbed the brunt of the ire. At one point, Republican Representative Steve King asked, “What about converting the large behemoth organizations that we’re talking about here into public utilities?”

The meatiest part of the hearing centered on whether social media platforms should delete accounts of conspiracy theorists and those inciting violence, rather than just removing the offending posts.

The issue has been a huge pain point for Facebook this week after giving vague answers for why it hasn’t deleted known faker Alex Jones’ Infowars Page, and tweeting that “We see Pages on both the left and the right pumping out what they consider opinion or analysis – but others call fake news.” Facebook’s Head of Global Policy Management Monica Bickert today reiterated that “sharing information that is false does not violate our policies.”

As I detailed in this opinion piece, I think the right solution is to quarantine the Pages of Infowars and similar fake news purveyors, preventing their posts or shares of links to their web domains from getting any visibility in the News Feed. But deleting the Page without instances of it directly inciting violence would make Jones a martyr and strengthen his counterfactual movement. Deletion should be reserved for those who blatantly encourage acts of violence.

Rep. Ted Deutch (D-Florida) asked how Infowars’ claims in YouTube videos that the Parkland shooting’s survivors were crisis actors squared with the company’s policy. Google’s Juniper Downs explained that “We have a specific policy that says that if you say a well-documented violent attack didn’t happen and you use the name or image of the survivors or victims of that attack, that is a malicious attack and it violates our policy.” She noted that YouTube has a “three strikes” policy, that it is “demoting low quality content and promoting more authoritative content,” and that it’s now showing boxes atop result pages for problematic searches like “is the earth flat?” with facts to dispel conspiracies.

Facebook’s answer was much less clear. Bickert told Deutch that “We do use a strikes model. What that means is that if a Page, or profile, or group is posting content and some of that violates our policies, we always remove the violating posts at a certain point” (emphasis mine). That’s where Facebook became suddenly less transparent.

“It depends on the nature of the content that is violating our policies. At a certain point we would also remove the Page, or the profile, or the group at issue,” Bickert continued. Deutch then asked how many strikes conspiracy theorists get. Bickert noted that “crisis actors” claims violate its policy and it removes that content. “And we would continue to remove any violations from the Infowars Page.” But regarding Page-level removals, she got wishy-washy, saying “If they posted sufficient content that violated our threshold, then the Page would come down. The threshold varies depending on the different types of violations.”

“The Threshold Varies”

Rep. Matt Gaetz (R-Florida) gave the conservatives’ side of the same argument, citing two posts by the Facebook Page “Milkshakes Against The Republican Party” that called for violence, including one saying “Remember the shooting at the Republican baseball game? One of those should happen every week.”

While these posts had been removed, Gaetz asked why the Page hadn’t been. Bickert noted that “There’s no place for any calls for violence on Facebook.” Regarding the threshold, she did reveal that “When someone posts an image of child sexual abuse imagery, their account will come down right away. There are different thresholds for different violations.” But she repeatedly refused to make a judgment call about whether the Page should be removed until she could review it with her team.

Image: Bryce Durbin/TechCrunch

Showing surprising alignment in such a fractured political era, Democratic Representative Jamie Raskin of Maryland said, “I’m agreeing with the chairman about this, and I think we arrived at the exact same place when we were talking about at what threshold does Infowars have their Page taken down after they repeatedly denied the historical reality of massacres of children in public school.”

Facebook can’t rely on a shadowy “the threshold varies” explanation anymore. It must outline exactly what types of violations incur not only post removal but strikes against their authors. Perhaps that’s something like “one post of child sexual abuse imagery, three posts inciting violence, five posts bullying victims or denying documented tragedies occurred, and unlimited posts of less urgently dangerous false information” before a Page comes down.
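
To illustrate how simple a published policy of that shape could be, here’s a sketch of the threshold table implied above. The categories and numbers are my speculation, not anything Facebook has confirmed:

```python
# Hypothetical strike thresholds: how many violating posts in each category
# a Page can accumulate before it is taken down. None means posts are
# removed but the Page itself is never deleted for that category alone.
STRIKE_THRESHOLDS = {
    "child_sexual_abuse": 1,
    "inciting_violence": 3,
    "bullying_or_tragedy_denial": 5,
    "low_urgency_false_information": None,
}

def page_should_come_down(violation_counts):
    """Return True once any violation category reaches its published threshold."""
    for category, count in violation_counts.items():
        threshold = STRIKE_THRESHOLDS.get(category)
        if threshold is not None and count >= threshold:
            return True
    return False

# A Page with three posts inciting violence crosses the line:
print(page_should_come_down({"inciting_violence": 3}))               # True
# Unlimited low-urgency misinformation never triggers Page removal:
print(page_should_come_down({"low_urgency_false_information": 50}))  # False
```

A table like this would let anyone verify whether enforcement matched policy, which is exactly the transparency the current “the threshold varies” answer avoids.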

Whatever the specifics, Facebook needs to provide specifics. Until then, both liberals and conservatives will rightly claim that enforcement is haphazard and opaque.

from Social – TechCrunch https://techcrunch.com/2018/07/17/facebook-strikes-policy/

House Rep suggests converting Google, Facebook, Twitter into public utilities

Amidst vague and uninformed questions during today’s House Judiciary hearing with Facebook, Google, and Twitter on social media filtering practices, Representative Steve King (R-Iowa) dropped a bombshell. “What about converting the large behemoth organizations that we’re talking about here into public utilities?”

King’s suggestion followed his inquiries about right-wing outlet Gateway Pundit losing reach on social media and how Facebook’s algorithm worked. The insinuation was that these companies cannot properly maintain fair platforms for discourse.

The Representative also suggested there may be a need for “review” of Section 230 of the Communications Decency Act, which protects interactive computer services from being treated as the publisher of content users post on their platforms. If that rule were changed, social media companies could be held responsible for illegal content, from copyright infringement to child pornography, appearing on their platforms. That would potentially cripple the social media industry, requiring extensive pre-vetting of any content they display.

The share prices of the tech giants did not see significant declines upon the Representative’s comments, indicating the markets don’t necessarily fear that overbearing regulation of this nature is likely.

Representative Steve King questions Google’s Juniper Downs

Here’s the exchange between King and Google’s Global Head of Public Policy and Government Relations for YouTube Juniper Downs:

King: “Ms Downs, I think you have a sense of my concern about where this is going. I’m all for freedom of speech, and free enterprise, and for competition and finding a way that competition itself does its own regulation so government doesn’t have to. But if this gets further out of hand, it appears to me that Section 230 needs to be reviewed.

And one of the discussions that I’m hearing is ‘what about converting the large behemoth organizations that we’re talking about here into public utilities?’ How do you respond to that inquiry?”

Downs: “As I said previously, we operate in a highly competitive environment, the tech industry is incredibly dynamic, we see new entrants all the time. We see competitors across all of our products at Google, and we believe that the framework that governs our services is an appropriate way to continue to support innovation.”

Unfortunately, many of the Representatives frittered away their five minutes each asking questions that companies had already answered in previous congressional hearings or public announcements, allowing them to burn the time without providing much new information. Republican reps focused many questions on whether social media platforms are biased against conservatives. Democrats cited studies saying metrics do not show this bias, and concentrated their questions on how the platforms could protect elections from disinformation.

Image via Social Life N Sydney

Protestors during the hearing held up signs behind Facebook’s Head of Global Policy Management Monica Bickert showing Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg as the heads of an octopus sitting upon a globe, but the protestors were later removed.

One surprise came when Representative Jerrold Nadler (D-New York) motioned to cut the hearing short for an executive session to discuss President Trump’s comments at the Helsinki press conference yesterday, which Nadler said were submissive to Russian President Vladimir Putin. However, the motion was defeated 12-10.

from Social – TechCrunch https://techcrunch.com/2018/07/17/facebook-public-utility/