


People hold up signs at a Stop Asian Hate rally in Chicago on March 27.

Vincent Johnson/Xinhua via Getty

Shirley Wang’s phone wouldn’t stop buzzing as the hurtful tweets flooded in. Earlier that day, the 26-year-old Harvard student posted a thread of tweets about anti-Asian racism, prompting more than 100 replies.

“We must fight anti-Asian racism without fueling anti-Blackness (calls for increased policing are unacceptable),” Wang tweeted on Feb. 14.

While some Twitter users praised Wang for her remarks, online trolls hurled insults at her. “Apologize for corona first,” an anonymous Twitter account replied. Other users told Wang she had a “mental disorder,” was “dumb” or a “Bozo,” with some users adding a clown emoji in their replies. 

Wang reported dozens of tweets to Twitter for harassment — until she simply got tired of clicking the same button over and over. Hours later, she received an influx of emails from Twitter, informing her that most of the tweets she reported didn’t violate the company’s rules. 

“That was in its own weird way almost more upsetting than the tweets themselves,” she said.

Twitter’s response underscores the confusing and inconsistent attempts by social media to stamp out racist and hurtful content — efforts that have fallen short of curbing the spread of anti-Asian rhetoric online even as it emerged as a serious problem a year ago. Social networks, including Facebook, Twitter, TikTok and Google-owned YouTube, all have rules against hateful behavior, violent threats and harassment, but it’s often unclear where they draw the line. 

Twitter’s hateful conduct policy says it doesn’t allow “targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category.” But some Twitter users who report tweets are finding that their interpretation of Twitter’s rules doesn’t match up with the views of the social network, which is also trying to promote free expression. CNET also showed Twitter several tweets that targeted Asians, and the company flip-flopped about whether the remarks violated its rules. 


While social media has the ability to connect people with family and friends, it’s increasingly being used to sow division. Social networks have ways for people to mute or block users, but people still struggle to control the hate coming at them online. Advocacy groups such as the Anti-Defamation League say these tech companies need to do more. 

“Even as technology companies insist that they are taking unprecedented steps to moderate hateful content on their social media platforms, the user experience hasn’t changed all that much,” ADL CEO Jonathan Greenblatt said in a statement in March. “Americans of many different backgrounds continue to experience online hate and harassment at levels that are totally unacceptable.” 

Fueling an outcry against anti-Asian hate

In March 2020, CNET found dozens of hateful comments and posts about Asians across social media, including those that used ethnic slurs and perpetuated stereotypes. Since then, this problem appears to have gotten worse as more reports about anti-Asian violence surface. 


A mural in Atlanta, Georgia, painted by the Bad Asian and Civic Walls groups, commemorates the eight lives lost in the shootings at three spas in the state. 

Megan Varner/Getty Images

The outcry over anti-Asian bias reached another boiling point after the Georgia spa shootings in March, which killed eight people, six of whom were Asian women. While federal investigators say they haven’t found evidence to classify the shootings as a hate crime, the tragedy has sparked more fears about violence against Asians, who have been blamed for the outbreak of the coronavirus. Last week, the White House announced new actions to tackle anti-Asian violence, bias and xenophobia, which existed long before the coronavirus. 

Social networks haven’t released data about how much anti-Asian content they’ve suppressed or removed since the coronavirus outbreak, which first appeared in China in December 2019 and has since infected more than 132 million people around the globe. Stop AAPI Hate, a coalition aimed at addressing anti-Asian hate during the pandemic, received nearly 3,800 reports of harassment, physical assault and acts of discrimination against Asian Americans from March 2020 to February 2021. About 6.8% of those complaints were for online harassment. 

Online hate and harassment aren’t unique to Asians. For many years, social media users who identify as Black, Jewish, transgender or as part of other marginalized groups have also complained that Facebook and Twitter aren’t doing enough to stamp out hate speech, despite having rules against that type of behavior. But the coronavirus pandemic has meant that Asian Americans are dealing with racist comments more often than they have in the past.

The ADL released a survey last month that showed “severe” harassment such as “stalking, physical threats, swatting, doxing or sustained harassment” has been on the rise for Asian Americans. About 17% of Asian Americans said in January they experienced severe online harassment compared with 11% during the same period last year, the largest uptick compared with other groups. About half said they were harassed because of their race.

Perpetuating Asian stereotypes and hate online

The use of anti-Asian rhetoric has also been infused into political speech, making it trickier for social networks to moderate this type of content. Conservatives have accused sites such as Facebook and Twitter of censoring their speech, allegations the companies repeatedly deny.


Twitter has been grappling with anti-Asian hate speech.

Image by Pixabay/Illustration by CNET

Lawmakers and advocacy groups have also slammed former President Donald Trump, who has referred to the coronavirus as the “Chinese virus” and “Kung Flu,” a term that deflects from the global nature of the pandemic and stokes discrimination against Asians.

Trump has denied he was being racist, noting the virus was first discovered in China, but Asian Americans, Democrats and civil rights activists have criticized the use of the term. The World Health Organization and the US Centers for Disease Control and Prevention have said people should avoid referring to any disease using the name of a location. A study from the University of California, San Francisco found that Twitter users who used #chinesevirus were more likely to “pair it with overtly racist hashtags.” Half of the more than 775,000 hashtags paired with #chinesevirus included anti-Asian bias.

More than a year after the pandemic started, terms such as “Chinese virus” are still being used on social media. In March, CNET asked Twitter about two tweets targeting Chinese people. One user with the pseudonym “Thefox” said they preferred to use terms like “Chinese sneeze” and “Wuhan flu.” Another Twitter user in February called Chinese people “nasty,” noting they have “eaten wild animals.”

A Twitter spokeswoman initially said the tweets didn’t violate the site’s rules. After further review, the spokeswoman said, the company determined the tweets did go against its rules on hateful conduct, and they’re no longer available. The reversal highlights the confusion around content moderation. 

CNET also showed Twitter an anonymous account that tweeted out pornographic images of Asian women and paired the photos with a hashtag that included a racial slur and the word “slut.” One image the user tweeted showed an Asian woman sleeping along with phrases such as “dream of white conquest” and “your race has failed.” Twitter permanently suspended the account after CNET pointed it out. The user had been barred for multiple violations of Twitter’s hateful conduct policy but was trying to evade the ban, violating another one of Twitter’s policies.

Facebook and YouTube sometimes allow users to use the racially insensitive term “Kung Flu.” CNET showed Facebook several posts that used the term, but the social network said they didn’t violate its rules. One image on Facebook-owned Instagram showed two people practicing martial arts, with the caption “everyone was Kung Flu fighting.” Facebook said it would continue to monitor trends and talk to organizations to make sure it’s drawing the lines of hate speech in the right place. The company said it removes the term from ads when it’s being used to sell products.

On YouTube, comics artist Ethan Van Sciver posted a video on his account in March in which he jokes about killing Chinese people. “Give me a tommy gun and line ’em up against the wall,” he says in the video, which has since been removed from YouTube. A spokesperson for YouTube said the video was removed for violating its hate speech policy and had fewer than 60,000 views when it was taken down, less than 24 hours after it was posted. 

Van Sciver said his remarks about Chinese people were “facetious sarcasm” and that the video was “taken out of context,” noting that “genuine anti-Asian rhetoric is deplorable.” “I do not want to hurt Asian people or any people,” he said in an email. While YouTube pulled the video, clips of it still exist on Twitter, some posted by people who’ve denounced Van Sciver’s comments.

Blocking anti-Asian hate terms

TikTok, a short-form video app owned by Chinese company ByteDance, has taken a stronger stance when it comes to curbing the spread of racially insensitive comments targeting Asians. Unlike Facebook, Twitter and YouTube, TikTok has blocked terms such as “Kung Flu” from its search results.


TikTok blocks search results for terms that use anti-Asian rhetoric. 


“No results found. This phrase may be associated with behavior or content that violates our guidelines. Promoting a safe and positive experience is TikTok’s top priority,” a notice on the app states.

In a congressional hearing with Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg last month, Rep. Doris Matsui, a California Democrat, pointed out that Twitter and Facebook still allow hashtags that are harmful to the Asian community. 

Dorsey and Zuckerberg said they have policies against hateful behavior but noted the hashtags also contained counter speech that denounces the use of the terms, making enforcement of their hate speech rules more difficult. “With social media, it travels all around the world and hurts a lot of people,” Matsui told the executives. “We really have to look at how we define hate speech.” 

Manny Chong, a 26-year-old student in Massachusetts who organizes #stopasianhate rallies, has used TikTok to speak out against racism. Chong said some of his videos have been accidentally flagged for hate speech. He has also received racist comments on the short-form video app, including “ok orientals,” “Ching Chong” and “Everybody was Kung Flu fighting,” comments viewed by CNET showed.

The comments, he noted, showcase the problem he’s speaking out against so he doesn’t bother to delete or report most of them. Chong does draw the line when someone is sharing spam or trying to attack other TikTok users in the comments. In March, TikTok said it was releasing new tools that gave users more control over their comments, including the ability to hide recent comments unless they approve them.

“It’s just not worth my mental energy,” Chong said of flagging comments for removal. “I just don’t have the space for negativity.” 

Since dealing with harassment on Twitter, Wang said she’s learned more about muting notifications on the social network, which means she won’t get pinged every time someone replies to a viral tweet. Twitter also allows users to hide replies. 

Initially, Wang felt nervous about tweeting again. Then she realized that would just let the online trolls win.

“Later that day, I made another post,” she said. “I’m just reaffirming my stance that we have to fight anti-Asian racism, [support] Black Lives Matter, and we have to do both in solidarity.”

Walmart gives workers the day off on Thanksgiving.

Walmart workers won’t have to work this Thanksgiving.

Getty Images

Black Friday deals aren’t limited to just the Friday after Thanksgiving. Retailers often start their sales on Thanksgiving itself, but this year, it’ll be different at Walmart.

The retail giant said Friday it’s giving workers the day off on Thanksgiving in recognition of their work during the pandemic. It joins Target, which said in January it wouldn’t be open on Thanksgiving.


“Closing its stores on Thanksgiving Day is an additional way the retailer is thanking associates for their dedication to serving customers and their perseverance throughout the pandemic,” the company said in a press release Friday. 

Walmart also closed its stores on Thanksgiving last year due to the pandemic. 

Apple reports improved behavior in its supply chain.

Óscar Gutiérrez/CNET

Apple reported improvements in its manufacturing partners’ operational conduct in 2020 while grappling with the onset of the COVID-19 pandemic. Apple’s annual supply-chain responsibility report for 2021 focuses on a range of topics, including labor and human rights, worker health and safety, and the environment, among other things.  

Apple reported a reduction in major violations of its Code of Conduct among its suppliers and didn’t mention discovering any cases of child labor. It also found no instances of forced labor. Most of the violations Apple reported related to the company’s working-hours policy or to labor data falsification.


During the year, Apple conducted 1,121 supplier assessments in 53 countries to ensure compliance with the company’s Code of Conduct. The company also said it conducted 57,618 interviews with supply chain workers to ensure those workers participating in the assessment process weren’t retaliated against.

While Apple’s 113-page report (PDF) didn’t mention uncovering child labor being employed in suppliers’ facilities, the company did find that one facility had “misclassified the student workers in their program and falsified paperwork to disguise violations of our Code, including allowing students to work nights and/or overtime, and in some cases, to perform work unrelated to their major.” Apple said it placed the supplier on probation and stopped doing business with the facility until the issue was corrected.

The report didn’t identify the supplier that was suspended, but in November, Apple reportedly froze any new business contracts with Pegatron, one of its key suppliers, after the Taiwanese company was found to be breaking the company’s supply chain rules by falsifying paperwork and misclassifying workers in order to cover up labor violations.

Apple found a major reduction in violations of its Code of Conduct in 2020, reporting nine during the year, compared with 17 a year earlier and 48 in 2017. Seven of the nine cases in 2020 related to working hours or labor data falsification. Overall, the company reported 93% compliance with its working-hours rules, which require suppliers to restrict work weeks to 60 hours.

Apple also said it rejected 8% of prospective suppliers for code-related risks.

The Boeing Company reached an agreement with the US Federal Aviation Administration on Thursday that requires it to pay at least $17 million in penalties, after the Chicago-based manufacturer installed equipment with unapproved sensors in hundreds of 737 Max and NG aircraft.

“Keeping the flying public safe is our primary responsibility,” said FAA Administrator Steve Dickson. “That is not negotiable, and the FAA will hold Boeing and the aviation industry accountable to keep our skies safe.”


The settlement comes as the FAA seeks to step up its scrutiny of airline production and safety. In February, the Department of Transportation’s inspector general’s office said the FAA needed to strengthen its aircraft review process and issued a 55-page report detailing how the agency had misunderstood the 737 Max’s MCAS flight control system. Though not related to today’s settlement, that system was ultimately blamed for two crashes that, combined, killed 346 people.

In addition to the penalties, Boeing has agreed to take a number of corrective actions, including measures meant to ensure future compliance with FAA regulations and to reduce the chance that Boeing again submits aircraft with nonconforming parts for airworthiness certification. If Boeing fails to comply within 30 days, the FAA will direct the company to pay up to $10.1 million in additional penalties.

“We take our responsibility to meet all regulatory requirements very seriously,” a Boeing spokesperson told CNET. “These penalties stem from issues that were raised in 2019 and which we fully resolved in our production system and supply chain. We continue to devote time and resources to improving safety and quality performance across our operations. This includes ensuring that our teammates understand all requirements and comply with them in every way.”

Copyright © 2016-2021 2Fast2Serious magazine.