I guess my first question is: why would taking control of TikTok prevent bad faith state actors? X, for example, has a lot of issues with foreign accounts spreading propaganda. It seems more like a “moderation at scale” issue to me.
It also ignores the cases where state actors and some wing of domestic politics have aligned interests (the USSR & Communist parties in the early to mid 20th century, or Russia sponsoring/infiltrating right-wing parties in Western Europe & America in the 21st century).
X and Meta do try to uncover and scrub malicious state actors, like the investigation of the 2016 Russia misinformation campaign. Maybe they could have done more, but there is no reason why they wouldn't put in an earnest effort, as they have nothing to gain from faking compliance. A social media platform owned by a foreign adversary does have that incentive.
X may be owned by a crazy Elon, but that doesn't change that X today still has no incentive to allow for malicious state actors, especially under government pressure. In fact, they recently exposed that a lot of extremist political accounts were based out of foreign countries.
Do you not understand social media's business model?
The platform’s direct financial incentives are almost identical to malicious state actors’: to foment extreme engagement. It is not a secret to anyone that people engage most actively with outrage.
Content moderation costs money directly, then costs engagement indirectly.
Maybe it seems identical because China doesn't have any grand short-term ambitions, but the financial incentive is fundamentally different. Meta may screw over the American people, but America losing its superpower status would only hurt them.
I can't parse your first sentence or what the relevance is to the discussion.
You said X has no incentive to allow foreign influence ops. Very clearly, not only do they have an incentive to allow them, they have an additional disincentive against disallowing them (cost).
The fact those aligned incentives originate from different ultimate goals is totally irrelevant for as long as the two are aligned.
Foreign ops make up a fraction of a percent of X's revenue, if that. Any profit they gain from them is canceled out by a similar degree of negative attention from the government, so overall they're incentivized to follow the direction of three-letter agencies. A less inflammatory algorithm would maybe cost X a couple percent in revenue. If the government really wants to, it can pressure X to change their algorithm, as it can easily cause X much more pain than a couple percent of revenue.
A Chinese-owned TikTok simply doesn't follow the same calculus. If the CEO of ByteDance (note: different from the CEO of TikTok) gets an order to flood the platform with anti-Taiwanese propaganda right before China invades Taiwan, the CEO would have to follow through even if it drives the value of TikTok to zero. The ban was not about how much harm TikTok has done already; it's about how much harm they can do in a worst-case scenario.
Uhhh... you seem to imply that TikTok and X operate under different rules, while actually making the argument that they're the same ("if the govt really wanted to, they could successfully pressure X contrary to X's economic incentives")
Beyond that, you're just asserting a bunch of assumptions as if they're fact.
And all of this is irrelevant. I never argued TikTok/X/Meta are the same. The issue I raised is you positioning the 2016 enforcement action as evidence of X's current enforcement posture, suggesting there is some compliance motivation here (there isn't; there's no relevant law to comply with as far as the USG is concerned), and suggesting there's no incentive to allow foreign ops (there is, as demonstrated).
They have money and power to gain by faking that compliance, to the extent that if the foreign power gets what it wants, Meta or Twitter gets what they want too, e.g. removal of regulation or a ban on regulation of their AI products.