• Twitter accounts are 80% bots, expert says


    #2477144

PUBLIC DEFENDER By Brian Livingston: More than 80% of the accounts on Twitter are likely to be nothing more than automated bots, according to a study …
    [See the full post at: Twitter accounts are 80% bots, expert says]

    8 users thanked author for this post.
    • #2477170

      Elon Musk will settle for 6%.

    • #2477171

      When does a tool cross the line and become a bot? And when does a bot become malicious? Can twitter be so influential in culture and only have 20% of the users it claims?

      There are so many questions.

      • #2477209

        When does a tool cross the line and become a bot? And when does a bot become malicious?

        I’ve never had a twitter account, and don’t intend to ever have a twitter account.  I do have a facebook account, and that is limited to only family and friends, people I actually know.

        Can twitter be so influential in culture and only have 20% of the users it claims?

Twitter currently has 396.5 million users. 20% of that number is 79.3 million users. That’s still a lot of users. And if 80% are bots, that’s 317.2 million bot accounts. How many retweets does it take to get one’s attention? How many tweets are retweeted by bots?

        Brian’s example should give one pause: “I tried these services on a Twitter account I created. Continuing to test, for less than $1,000, the account now has nearly 100,000 followers. I once tweeted complete gibberish and paid followers to retweet it. They did. …
        Taking my testing to the next level, over a weekend I wrote a script that automatically creates Twitter accounts. My rather unsophisticated script was not blocked by any [Twitter] countermeasures. I didn’t try to change my IP address or user agent or do anything to conceal my activities.
        If it’s that easy for a person with limited skills, imagine how easy it is for an organization of highly skilled, motivated individuals.”

        I have less trust in twitter than I have in Google, and I have no trust in Google.

        Create a fresh drive image before making system changes/Windows updates, in case you need to start over!
        We all have our own reasons for doing the things that we do. We don't all have to do the same things.

        10 users thanked author for this post.
      • #2477310

        Twitter: number of monetizable daily active users worldwide 2017-2022
        Published by S. Dixon, Aug 3, 2022
        In the last reported quarter, the number of global monetizable daily active users (mDAU) on Twitter amounted to 237.8 million users, up from 229 million mDAU in the previous quarter. Overall, there was an increase in mDAU of over 15 percent from the second quarter of 2021. Additionally, monetizable daily active users in the United States also increased.

        https://www.statista.com/statistics/970920/monetizable-daily-active-twitter-users-worldwide/  

So say only 20% of those users are real people rather than bots. That’s still over 47.5 million daily active users.

        An echo chamber effect which amplifies small numbers has been shown to exist on social media.

        The echo chamber effect on social media

        https://www.pnas.org/doi/10.1073/pnas.2023301118 

        (I admit I have not read nor understood fully this paper.)

This effect is strong enough that bots expressing an opinion similar to one’s own posts (and enhanced by social media feed algorithms) can amplify feelings including rage and fear. This in turn has been shown to have caused violence on several occasions. The worst so far was the Jan. 6, 2021 riot at the US Capitol.

        It only takes a small amount of snow to start an avalanche. And this is also true in social psychology and political science. Even crowds as large and violent as the mobs of the French Revolution were led by a relatively few individuals. All it takes is a few people (or lots of bots) to give the impression that it’s OK to do outrageous things in the name of “liberty”.

        (In fact, bots may be better at driving the echo chamber effect than real people, as bots can be much more consistent and much more relentless than real people usually are.)

        That’s how 20% (or even 5%) of a social media platform’s active daily users can (and do) create the echo chamber which sets off such tremendous violence in real life. And that’s the risk of allowing bots to have such free rein on social media platforms. One could argue that the current failure to deal effectively with bots is negligence bordering on criminal neglect on the part of social platform operators. (Though, this is probably not a valid legal argument presently in the US.)
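To make that concrete with a back-of-the-envelope sketch (every number below is an assumption for illustration, not a measurement of Twitter, and this is not a model of any real ranking algorithm), here is how a small fraction of relentless accounts can still supply most of the raw activity an engagement-ranked feed has to choose from:

```python
# Hypothetical illustration only: 5% of accounts are tireless bots, 95% are
# ordinary humans. Every number below is an assumption, not measured data.

N_ACCOUNTS = 10_000
BOT_FRACTION = 0.05          # bots as a share of accounts (assumed)
HUMAN_POSTS_PER_DAY = 1      # a typical human posts about once a day (assumed)
HUMAN_RESHARES_PER_DAY = 1   # and reshares roughly one item a day (assumed)
BOT_POSTS_PER_DAY = 40       # a bot can post dozens of times a day (assumed)
BOT_RESHARES_PER_DAY = 180   # and reshares nearly everything on its topic (assumed)

bots = int(N_ACCOUNTS * BOT_FRACTION)
humans = N_ACCOUNTS - bots

human_activity = humans * (HUMAN_POSTS_PER_DAY + HUMAN_RESHARES_PER_DAY)
bot_activity = bots * (BOT_POSTS_PER_DAY + BOT_RESHARES_PER_DAY)
total = human_activity + bot_activity

print(f"Bots: {bots / N_ACCOUNTS:.0%} of accounts, "
      f"{bot_activity / total:.0%} of daily posts and reshares.")
# With these assumptions, 5% of the accounts produce roughly 85% of the
# activity that an engagement-ranking algorithm has available to amplify.
```

Change the assumed numbers and the exact share moves, but the asymmetry stays: consistency and volume, not headcount, is what the ranking machinery sees.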

You can read Le Bon, Marx and Engels, and Max Weber for more details as to how exactly these mechanisms have worked historically, even long before the advent of the Internet. The Internet just lets the Sociological Dialectic play out with ferocious intensity and terrifying speed.

        How to manage this powerful new social and political force is the challenge of the next generation(s).

        -- rc primak

        2 users thanked author for this post.
    • #2477270

Interesting article, but I’d hasten to add that automatic logouts should also be implemented during forced 2-step authentication. I agree that it’s an absolute must for anything sensitive, like banking, but too many people will just stay logged in when we make it too difficult to log in.

      Now do an article on fake reviews, please.

      And one on the reason why every app seems to want to be updated pretty much every day. (Spoiler: to increase the number of downloads and make sure it looks like it’s constantly being updated — though how important is each one of those updates?)

      Thanks, Brian! (I really mean it.)

      1 user thanked author for this post.
    • #2477298

      “It is plausible” is a far cry from “It has been proven”. The article is in serious trouble from the very start.

      I went over and read the entire “study” (really an ad for services offered by F5). There is no data offered at all. Never ever has the author of the “study” set electronic footprints on Twitter’s turf. He absolutely is not presenting a study of Twitter. He is offering a professional opinion, based on studying and advising (for a fee) other social media companies. (And we don’t know from the article which ones.)

It is very dubious, to say the least, to extrapolate from a few cases where you do know what’s really going on (because you have been offered full access) to a completely unknown (to you) website or platform. There you have no insight whatsoever into how the site is hosted, how it is secured, or what measures are in place to detect and eliminate bot traffic and fake accounts.

      I would like to see the results of an actual test with real-world data on just how many bots can really be detected among the Twitter accounts used in a typical day. Those numbers would be interesting. This opinion piece is not interesting to me.
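For what such a test might even look like in outline, here is a toy sketch. The account records and thresholds below are invented for illustration; real detection (including whatever F5 or Twitter do internally, which we don’t know) relies on far richer behavioral signals, so treat this as a sketch of the idea, not a working detector:

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical, simplified account record; real systems use far more signals.
    handle: str
    age_days: int          # account age in days
    tweets: int            # lifetime tweet count
    followers: int
    following: int
    default_avatar: bool   # still using the default profile image

def bot_score(account: Account) -> int:
    """Count how many crude red flags an account trips. Thresholds are arbitrary."""
    score = 0
    tweets_per_day = account.tweets / max(account.age_days, 1)
    if tweets_per_day > 100:
        score += 1   # relentless posting cadence
    if account.following > 1000 and account.followers < account.following * 0.01:
        score += 1   # mass-following with almost nobody following back
    if account.default_avatar and account.age_days < 30:
        score += 1   # brand-new account with no profile effort
    return score

sample = [
    Account("news_reader_42", 2200, 3100, 180, 240, False),
    Account("crypt0_deals_88", 12, 4800, 3, 2100, True),
    Account("quiet_lurker", 900, 15, 40, 35, False),
]

flagged = [a.handle for a in sample if bot_score(a) >= 2]
print(f"Flagged as likely bots: {flagged} "
      f"({len(flagged) / len(sample):.0%} of this tiny made-up sample)")
```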

But the opinion piece did seem to get a rise and a belly-laugh out of Elon Musk, even though it did nothing to bolster Musk’s case for not acquiring (or not paying anywhere near the asking price for) Twitter.

      At the end of the article (which is NOT a study in the academic sense) comes the sales pitch:

      The only way to fight bots is with highly sophisticated automation of our own.
      https://www.f5.com/company/blog/bot-traffic-percentage-fake-accounts-expert  

      By the way, F5 is in the business of selling bot mitigation technologies.

      I am not saying bots are not a massive problem on social media. I am not saying this problem is without enormous societal and economic impacts. What I am saying is, if you are going to post a number about Twitter, you’d better have Twitter-specific data to back up your claim. Otherwise, I can easily guess who will be on the receiving end of the next lawsuit from Twitter.  And it won’t be Brian.

      -- rc primak

      3 users thanked author for this post.
    • #2477313

      … no insight whatsoever into how the site is hosted, how it is secured or what measures are in place to detect and eliminate bot traffic and fake accounts. I would like to see the results of an actual test with real-world data on just how many bots can really be detected among the Twitter accounts used in a typical day. Those numbers would be interesting.

“Because of the platform’s large user base, the organization has set a daily cap of 2,400 tweets. With Twitter focusing on advertisement sales, the business seems to be on the verge of increasing its profit.”
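(On that 2,400-tweets-per-day figure: a per-account cap is easy to sketch, and just as easy for an operator with many scripted accounts to dwarf, which is exactly the account-creation problem Brian describes. The counter below is my own simplified illustration, not Twitter’s implementation.)

```python
from collections import defaultdict
from datetime import date

DAILY_CAP = 2400   # the per-account cap quoted above
_counters: dict[tuple[str, date], int] = defaultdict(int)

def try_tweet(account: str, today: date) -> bool:
    """Allow the tweet only if this account is still under today's cap."""
    key = (account, today)
    if _counters[key] >= DAILY_CAP:
        return False
    _counters[key] += 1
    return True

today = date(2022, 9, 1)
sent = sum(try_tweet("bot_0001", today) for _ in range(5000))
fleet = 1000   # hypothetical number of scripted accounts
print(f"One account got {sent} of 5000 attempts through; "
      f"a fleet of {fleet} accounts could still send {fleet * DAILY_CAP:,} tweets a day.")
```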

      Brian’s example should give one pause: “I tried these services on a Twitter account I created. Continuing to test, for less than $1,000, the account now has nearly 100,000 followers. I once tweeted complete gibberish and paid followers to retweet it. They did. … Taking my testing to the next level, over a weekend I wrote a script that automatically creates Twitter accounts. My rather unsophisticated script was not blocked by any [Twitter] countermeasures. I didn’t try to change my IP address or user agent or do anything to conceal my activities. If it’s that easy for a person with limited skills, imagine how easy it is for an organization of highly skilled, motivated individuals.”

      Does twitter really want to reduce its user base by detecting and removing all bots? How would that impact their advertising revenue?

“[T]he Twitter whistleblower also highlighted in his disclosure that Twitter is in no way equipped to fully assess or measure the number of bots and fake accounts that are present on its platform at any given time. Interestingly, Zatko also said that despite their lack of capacity to accurately measure bots, they were never actually motivated to attempt to do so or to even address their lack of know-how.”

      I’ve never had a twitter account, and don’t intend to ever have a twitter account.

      Create a fresh drive image before making system changes/Windows updates, in case you need to start over!
      We all have our own reasons for doing the things that we do. We don't all have to do the same things.

      1 user thanked author for this post.
      • #2477321

        The Twitter Whistleblower story is an intriguing (though not quantitative) peek inside what Twitter really does and does not know and do about bots infiltrating its social media platform:

        Twitter Whistleblower Says Company Lied About Bots On The Platform
        In a scathing report, a Twitter whistleblower has come forward to allege that Twitter is marred with serious privacy and security issues.

        By KRISTI ECKERT | PUBLISHED 3 WEEKS AGO

        https://www.tellmebest.com/twitter-whistleblower-privacy-security-risks/ 

        There is more insight in this article than in the entirety of Brian Livingston’s article and the opinion piece upon which he based it.

        -- rc primak

        2 users thanked author for this post.
    • #2477351

      The world being what it is, there are good, even excellent reasons to have something like Twitter around, as far as function goes. For example, science journalists tweet about recent important discoveries and others chime in with their own tweets commenting on this.

Government employees responsible for informing the public of something important and urgent can use something like Twitter for this, and I believe this may already be happening.

The problem is that the profit motive is these days completely out of hand and has pretty much ruined many useful things. In this case, and again for profit, those in charge of this particular medium, according to Brian and others quoted here already (and sadly no big surprise, really), may have allowed it, through the way they run the business, to be used for unsavory purposes by unsavory people, without putting effective checks on them. And on their many, many bots.

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

    • #2477358

       

      Eh. When you have an outlier that is this far out from other estimates, I think it should be taken with a huge grain of salt.

      And, as for the Elon Musk issue: Twitter set things up where that’s a non-issue in the contract. Heck, if there are more bots, that should be a good thing for Musk, since part of his plan for improving Twitter involved cleaning out the bots.

      I don’t see this impacting the contract negotiations between Musk and Twitter. Musk will still eventually have to pay some amount higher than the $1 billion to back out of the deal.

      • #2477415

If Twitter represented materially that there were x human users, with each user being a potential viewer of advertising, and it was then discovered that there are a lot fewer potential viewers of advertising than that (bots don’t buy things, after all), that means the deal is based on fraud that materially affects the value of the acquisition, and it’s not valid.

The courts will unwind it, and if they find that Twitter did misrepresent the number of actual human users, they will (almost) certainly not hold Musk to the deal as agreed.

        Dell XPS 13/9310, i5-1135G7/16GB, KDE Neon
        XPG Xenia 15, i7-9750H/16GB & GTX1660ti, OpenSUSE Tumbleweed

        4 users thanked author for this post.
        • #2479198

           

No. Fraud requires willful deceit. Thus it could only be fraud if Twitter knew how many bot users there were. And the whole allegation is that only this new fancy technique was able to figure it out. Hence there would have been no way for Twitter to know, even if it were true.

Plus, again, the number of bots was a point of contention between Twitter and Musk, and they set up the contract with that in mind. Musk’s claim was actually that he thought the bot numbers were higher, and that he could leverage that to make more money. Twitter set up the contract so that his finding his estimate was wrong would not invalidate the deal.

          1 user thanked author for this post.
    • #2477386

      Then there is this:

      https://www.bloomberg.com/news/articles/2022-05-17/why-elon-musk-and-twitter-ceo-are-sparring-over-bots-quicktake-l39l11he?leadSource=uverify%20wall

The whys and wherefores of Twitter bots, for those who can’t figure out how someone could create so many accounts that work automatically to do all sorts of things on Twitter, good and bad and terrible, trending to awful. With not too much, but still a rather big soupçon of Musk:

      Excerpt:

      On Twitter, bots are automated accounts that can do the same things as real human beings: send out tweets, follow other users, and like and retweet postings by others. Spam bots use these abilities to engage in potentially deceptive, harmful or annoying activity. Spam bots programmed with a commercial motivation might tweet incessantly in an attempt to drive traffic to a website for a product or service. They can be used to spread falsehoods and promote political messages. In the 2016 presidential election, there were concerns that Russian bots helped influence the race in favor of the winner, Donald Trump. Spam bots can also disseminate links to fake giveaways and other financial scams. After announcing his plans to acquire Twitter, Musk said one of his priorities was cracking down on bots that promote scams involving cryptocurrencies.

      And they are not malware, or the result of a malware infection in Twitter: they are provided by Twitter to its users.

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      3 users thanked author for this post.
    • #2477399

      Well, I just re-read Brian’s blog and got curious about the FIDO2 project. So I clicked on the link in Brian’s blog to the FIDO2 Wikipedia page and read about it there. Or tried to.

And I did not understand the first thing about it.

This Wikipedia article is really a brilliant example of how NOT to explain something while taking the easy way out (a plainer sketch of what FIDO2 actually does follows the list below):

      (1) You assume that everybody who is anybody already knows what you mean: so no need to really explain: both easy and efficient.

(2) You discuss something that might be really important and is also about using something to do this important thing. But you say next to nothing about the issue to be wary of, and nothing at all about how to use this something that is the solution. Not a peep, nor a squeak.

      (3) You throw, like a cuttlefish, a cloud on the issue, not one of ink but one of acronyms, the more obscure the better.
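For what it’s worth, the idea the acronyms bury is small enough to show in a few lines. The toy model below (Python with the third-party cryptography package) is only the core of the FIDO2/WebAuthn concept, with attestation, origin binding, user verification and everything else the real protocol adds left out: the site stores only a public key, the matching private key never leaves your security key or phone, and each login is just the device signing a fresh random challenge.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator creates a key pair. Only the public key is
# sent to, and stored by, the website; there is no shared secret to phish or leak.
device_private_key = ec.generate_private_key(ec.SECP256R1())   # stays on the device
site_stored_public_key = device_private_key.public_key()       # kept by the site

# Login: the site sends a fresh random challenge and the device signs it.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The site verifies the signature against the public key it stored at registration.
try:
    site_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Login accepted: whoever signed this holds the registered private key.")
except InvalidSignature:
    print("Login rejected.")
```

Because the site stores no secret worth stealing, a breached credential database buys an attacker nothing, which is the point the Wikipedia article never quite gets around to saying plainly.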

      And why am I writing this? Because it happens way too often, that’s why. Even here, in AskWoody.

As to all those bots on Twitter? I understand the skepticism about the article that Brian quoted. However, I also understand that being underhanded to the point of lunacy is not at all uncommon in the way some who are very high up in business, politics, etc. conduct themselves towards everyone else these days. So I am inclined to believe that something as described in that article is possible and even happening.

      After all, we are aware, also in these days, of a pointless and truly awful war, with an awfulness that is spilling all over the world in various undesirable ways, that is also just the vanity project of someone in Moscow, aren’t we?

      So there you go.

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      2 users thanked author for this post.
    • #2477430

      Ultimately, every website must stop relying on username/password combinations, which are inherently weak, and move to multifactor authentication (MFA) using secure tokens.

Right, if the goal is to protect the account from other people. But in this scenario, the person abusing the account has the MFA credentials. I imagine that it isn’t that hard to distribute the codes to multiple machines when you have access to the token, as would happen in a post farm…

The only solution would be to limit accounts to three simultaneous logins (computer, tablet, phone) when MFA is active. And don’t make it too annoying, or people will reduce their use! (But not bots!)
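A minimal sketch of that three-logins rule (a hypothetical in-memory session store; any real service would tie this to token issuance and revocation, and the eviction policy here is my own choice):

```python
from collections import defaultdict

MAX_SESSIONS = 3   # computer, tablet, phone
_active_sessions: dict[str, list[str]] = defaultdict(list)

def login(user: str, device: str) -> None:
    """Register a new session, signing out the oldest device once the cap is hit."""
    sessions = _active_sessions[user]
    if len(sessions) >= MAX_SESSIONS:
        sessions.pop(0)          # evict the oldest session instead of nagging the user
    sessions.append(device)

def logout(user: str, device: str) -> None:
    if device in _active_sessions[user]:
        _active_sessions[user].remove(device)

for device in ("desktop", "tablet", "phone", "post-farm-vm"):
    login("martin", device)
print(_active_sessions["martin"])    # ['tablet', 'phone', 'post-farm-vm']
```

Evicting the oldest session keeps it painless for a real person rotating between three devices, while a post farm juggling dozens of machines keeps knocking itself off.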

      My 0.02$

      Martin

      2 users thanked author for this post.
    • #2477554

      Susan Bradley Patch Lady

      3 users thanked author for this post.
    • #2478065

      Create a fresh drive image before making system changes/Windows updates, in case you need to start over!
      We all have our own reasons for doing the things that we do. We don't all have to do the same things.

      • #2478117

        Well, this is encouraging news for sure: a good move.

But I also hope the new legislation has more teeth than fining a company that does not comply $15,000 a day. That comes to $5,475,000 a year (according to my old and most marvelous HP 15C pocket calculator, whose battery I have yet to change after nearly 35 years of frequent use).

That’s small change to Meta/Facebook and all the rest of that crowd. The one that welcomes new users with “Nice to subscribe to us, how can we help ourselves at your expense in exchange for enough fabrications, conspiracies and seditious exhortations to keep you permanently in a satisfying state of anti-everything fury?”

        Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

        MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
        Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
        macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

        • #2478118

But I also hope the new legislation has more teeth than fining a company that does not comply $15,000 a day. That comes to $5,475,000 a year

          Perhaps you missed this: “If companies fail to abide by the law, they risk “penalties of up to $15,000 per violation per day,” enforced by the attorney general or specified city attorneys.”

          That ain’t pocket change.  One hundred violations would be $1,500,000 per day.  1,000 violations would be $15,000,000 per day.

          Create a fresh drive image before making system changes/Windows updates, in case you need to start over!
          We all have our own reasons for doing the things that we do. We don't all have to do the same things.

          • #2478124

Well, yes, that’s more like it: with a thousand violations per day, that should come to $5.475 billion per year. Perhaps something like this could be a bother for Twitter, which is having some money problems, I’ve heard.

            Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

            MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
            Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
            macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      • #2478194

The California bill says that “social media” companies must provide a report to the state:

        (3) A statement of whether the current version of the terms of service defines each of the following categories of content, and, if so, the definitions of those categories, including any subcategories:
        (A) Hate speech or racism.
        (B) Extremism or radicalization.
        (C) Disinformation or misinformation.
        (D) Harassment.
        (E) Foreign political interference.
        Question: If a social media company reported that they have NO policies with regard to these categories, or if it specified that with respect to one or more of these categories, they will take NO action, would that be acceptable under this legislation?
        1 user thanked author for this post.
        • #2478204

          Cybertooth: “Question: If a social media company reported that they have NO policies with regard to these categories, or if it specified that with respect to one or more of these categories, they will take NO action, would that be acceptable under this legislation?

Let’s see what happens. It’s too soon to conclude anything firmly, and we can’t trust that the cited article includes all the important details.
          But it is starting to smell faintly, at least to me, as if this legislation may have been set up to fail.

          If so, the whole thing would be funny, if it were not so sad.

          Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

        • #2478242

          Question: If a social media company reported that they have NO policies with regard to these categories, or if it specified that with respect to one or more of these categories, they will take NO action, would that be acceptable under this legislation?

          California already has law(s) on the books concerning your list.  The issue has been that those laws have no “teeth” in regard to social media.  The new law has teeth RE your list.  “The bill next requires a detailed report to be compiled by these companies and submitted to the Attorney General on a semiannual basis. The bill also requires the report to contain a “detailed description of content moderation practices” used by the platform.”

          If a social media company reported that they have NO policies with regard to these categories,

          22677. (a) On a semiannual basis in accordance with subdivision (b), a social media company shall submit to the Attorney General a terms of service report. The terms of service report shall include, for each social media platform owned or operated by the company, all of the following:
          (1) The current version of the terms of service of the social media platform.
          (2) If a social media company has filed its first report, a complete and detailed description of any changes to the terms of service since the previous report.
          (3) A statement of whether the current version of the terms of service defines each of the following categories of content, and, if so, the definitions of those categories, including any subcategories:
          (A) Hate speech or racism.
          (B) Extremism or radicalization.
          (C) Disinformation or misinformation.
          (D) Harassment.
          (E) Foreign political interference.
          (4) A detailed description of content moderation practices used by the social media company for that platform, including, but not limited to, all of the following:
          (A) Any existing policies intended to address the categories of content described in paragraph (3).
          (B) How automated content moderation systems enforce terms of service of the social media platform and when these systems involve human review.
          (C) How the social media company responds to user reports of violations of the terms of service.
          (D) How the social media company would remove individual pieces of content, users, or groups that violate the terms of service, or take broader action against individual users or against groups of users that violate the terms of service.
          (E) The languages in which the social media platform does not make terms of service available, but does offer product features, including, but not limited to, menus and prompts.
          (5) (A) Information on content that was flagged by the social media company as content belonging to any of the categories described in paragraph (3), including all of the following:
          (i) The total number of flagged items of content.
          (ii) The total number of actioned items of content.
          (iii) The total number of actioned items of content that resulted in action taken by the social media company against the user or group of users responsible for the content.
          (iv) The total number of actioned items of content that were removed, demonetized, or deprioritized by the social media company.
          (v) The number of times actioned items of content were viewed by users.
          (vi) The number of times actioned items of content were shared, and the number of users that viewed the content before it was actioned.
          (vii) The number of times users appealed social media company actions taken on that platform and the number of reversals of social media company actions on appeal disaggregated by each type of action.
          (B) All information required by subparagraph (A) shall be disaggregated into the following categories:
          (i) The category of content, including any relevant categories described in paragraph (3).
          (ii) The type of content, including, but not limited to, posts, comments, messages, profiles of users, or groups of users.
          (iii) The type of media of the content, including, but not limited to, text, images, and videos.
          (iv) How the content was flagged, including, but not limited to, flagged by company employees or contractors, flagged by artificial intelligence software, flagged by community moderators, flagged by civil society partners, and flagged by users.
          (v) How the content was actioned, including, but not limited to, actioned by company employees or contractors, actioned by artificial intelligence software, actioned by community moderators, actioned by civil society partners, and actioned by users.
          (b) (1) A social media company shall electronically submit a semiannual terms of service report pursuant to subdivision (a), covering activity within the third and fourth quarters of the preceding calendar year, to the Attorney General no later than April 1 of each year, and shall electronically submit a semiannual terms of service report pursuant to subdivision (a), covering activity within the first and second quarters of the current calendar year, to the Attorney General no later than October 1 of each year.
          (2) Notwithstanding paragraph (1), a social media company shall electronically submit its first terms of service report pursuant to subdivision (a), covering activity within the third quarter of 2023, to the Attorney General no later than January 1, 2024, and shall electronically submit its second terms of service report pursuant to subdivision (a), covering activity within the fourth quarter of 2023, to the Attorney General no later than April 1, 2024. A social media platform shall submit its third report no later than October 1, 2024, in accordance with paragraph (1).
          (c) The Attorney General shall make all terms of service reports submitted pursuant to this section available to the public in a searchable repository on its official internet website.”
          I would venture to say that a social media company reporting, “We don’t do that.” is not going to cut it.

          Create a fresh drive image before making system changes/Windows updates, in case you need to start over!
          We all have our own reasons for doing the things that we do. We don't all have to do the same things.

          • #2478321

            The thing is, I don’t see any language in there that suggests that the social media platform would be required to actually institute policies regarding disliked speech.

            Maybe I’m reading it too optimistically, but the sense I get of it is that what’s actually being required in this legislation is for these platforms to be specific and transparent about their content policy and its application. I’ve seen too many instances of people getting de-platformed with only the vaguest reference to “violating our community norms” and no explanation as to what exactly the supposed violation entailed.

             

            1 user thanked author for this post.
            • #2482496

              It isn’t just the platforms themselves which can ban people arbitrarily. Users (or bots) can gang up on a user, reporting all their posts. Moderators then say things like “your fellow users have been complaining about your posts” and POOF! You’re banned for life from that platform. Case in point — Nextdoor.com.

               

              -- rc primak

          • #2482497

            That’s a whole lot of regulations and a LOT of words and reports required. I doubt that this legislation will survive its first few legal challenges.

            -- rc primak

    • #2478434

Cybertooth: “The thing is, I don’t see any language in there that suggests that the social media platform would be required to actually institute policies regarding disliked speech.”

I agree. Maybe there is more in the new legislation that requires those companies to do that, but, if so, it still has not been quoted here. And if there is no such thing in it, then this could well be just a feel-good, crowd-pleasing political move (and one also meant to increase the state of California’s revenues, which could be a good thing). Because, if you are running a company making tons of billions of dollars and choose to ignore this altogether, the resulting few billion dollars in total annual fines become just the cost of doing business while still making plenty of money. Or maybe you could change flag and have the company registered in Chad as Chadian (it could pay taxes to the government there, surely much appreciated by the local politicians in power, and perhaps even offer well-paying tech jobs to fellow Chadians, while you keep living anywhere you like, as an Attaché for something or other to a Chadian embassy there). So just forget about this comfort blanket of a bill and keep doing what you are doing.

      I hope this new law goes beyond that, in a good direction.

      Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

      MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
      Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
      macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

      • #2478566

        Or maybe you could change flag and have the company registered in Chad as Chadian … So just forget about this comfort blanket of a bill and keep doing what you are doing.

        This new law does not apply only to companies based in California:

        (e) “Social media platform” means a public or semipublic internet-based service or application that has users in California and that meets both of the following criteria:

        AB-587 Social media companies: terms of service.

        Windows 11 Pro version 22H2 build 22621.1483 + Microsoft 365 + Edge

        1 user thanked author for this post.
        • #2478645

b: I was just kidding: by that I meant the whole company “changing flag” and no longer being a US one, but instead a Chadian one, based entirely in N’Djamena: lock, stock, and barrel. And if that meant giving up business in California, they could still be making it in other places, such as Germany, or maybe Oklahoma? Endless possibilities right there. To make up for the loss of California, they could also start a cryptomining venture to mine crypto, for example, on Facebook users’ computers. After all, if Norton does that on those of its users, why not Facebook on theirs? From Chad? (*) For Zuckerberg, as Attaché for Things in General at the Chadian Embassy in Zürich and enjoying diplomatic immunity, what’s not to like? That’s true 21st Century entrepreneurial grit!

          As I just said: endless possibilities.

          (*) At Facebook and Twitter, I would imagine, they could get a lot of lucrative things done with their innumerable bots.

          Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

          MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
          Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
          macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

          • #2478665

            b: I was just kidding: by that I meant the whole company “changing flag” and no longer being a US one, but instead a Chadian one, based entirely in N’Djamena: lock, stock, and barrel. And if that meant giving up business in California, they still could be making it in other places, such as Germany, or maybe Oklahoma? Endless possibilities right there.

            Unless they found some magical method to exclude users in California, the law would still apply.

            Windows 11 Pro version 22H2 build 22621.1483 + Microsoft 365 + Edge

            1 user thanked author for this post.
            • #2478668

              No magic needed: cancel all the subscriptions there, why? because they can, read the EULA, page 20,035; refund money when and only when unavoidable, enforce rock-solid clauses against class actions (EULA, pages 33,231 and 43,901 – 902). Create a subsidiary company to keep doing the social network thing there, independent from FB on paper, but linked by a chain of shell companies to N’Djamena.

              Or maybe try something else:

              https://www.cnbc.com/2021/12/07/treasury-wants-to-crack-down-on-shell-companies-corruption-with-new-rule.html

              Ex-Windows user (Win. 98, XP, 7); since mid-2017 using also macOS. Presently on Monterey 12.15 & sometimes running also Linux (Mint).

              MacBook Pro circa mid-2015, 15" display, with 16GB 1600 GHz DDR3 RAM, 1 TB SSD, a Haswell architecture Intel CPU with 4 Cores and 8 Threads model i7-4870HQ @ 2.50GHz.
              Intel Iris Pro GPU with Built-in Bus, VRAM 1.5 GB, Display 2880 x 1800 Retina, 24-Bit color.
              macOS Monterey; browsers: Waterfox "Current", Vivaldi and (now and then) Chrome; security apps. Intego AV

            • #2482499

              Internet based companies and platforms tend to tailor their rules and policies to the most restrictive places where they have users. That’s why American companies often adhere to the EU rules about privacy, and why many companies have rules which would pass muster in China, even for users who live in much less restrictive places.

              -- rc primak
