THEY THINK WE’RE STUPID: Let’s not prove them right

This is a story about a) tech company arrogance, b) the "shareholder value" philosophy of business, and c) the misrepresentation of shitty business decisions and poor quality control as "technological necessities" or "the way technology works."

One day, all my outgoing email started bouncing, and the bounce messages said spam had been identified coming from my email address. They wanted me to click through to a page with a button that said "not spam," the explanation being that I would thereby "train their algorithm" to know that I am not a spammer. The outfit with the un-housebroken algorithm was a third party hired by my email provider, called "Mailchannels."
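For the non-technical reader, the "train our algorithm" scheme generally works something like the sketch below: a classifier assigns each message a spam score, and every "not spam" click becomes a labeled training example that nudges the model. This is a toy illustration with invented names, not Mailchannels' actual code:

    import math
    from collections import defaultdict

    class NaiveSpamFilter:
        """Toy naive-Bayes-style filter that learns from 'not spam' clicks."""

        def __init__(self):
            self.spam_counts = defaultdict(int)  # word -> occurrences in spam
            self.ham_counts = defaultdict(int)   # word -> occurrences in legit mail
            self.spam_total = 1                  # start at 1 to avoid dividing by zero
            self.ham_total = 1

        def score(self, message: str) -> float:
            """Log-likelihood ratio; positive means 'looks like spam'."""
            score = 0.0
            for word in message.lower().split():
                p_spam = (self.spam_counts[word] + 1) / (self.spam_total + 2)
                p_ham = (self.ham_counts[word] + 1) / (self.ham_total + 2)
                score += math.log(p_spam / p_ham)
            return score

        def report_not_spam(self, message: str) -> None:
            """The 'click here so we know you're not a spammer' button:
            every click is a free labeled training example supplied by
            the customer."""
            for word in message.lower().split():
                self.ham_counts[word] += 1
            self.ham_total += 1

Notice who does the work in that last method: every click on the "not spam" button is unpaid data-labeling labor, with the customer correcting the vendor's training set.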

After 3 or 4 such bounces, I contacted their support department and told them my email was still bouncing. They said they would put me on a "whitelist" and that would fix the problem.

Well, it didn't. I made 3 more support requests that didn't fix the problem, and finally a support flunky told me the whitelist would never work: Mailchannels had determined that I was a spammer, and although there are sometimes false positives, I would just have to wait for the block to "fall off." The best thing I could do, he said, was to keep sending emails and clicking through the bounce messages to further "train" their algorithm.
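Here's why that answer is damning. A whitelist (allowlist) is supposed to be a short-circuit that runs before any spam scoring, so that a listed sender simply cannot be blocked. If a listed sender can still be blocked, the list is being consulted after the verdict, or not at all. A hypothetical sketch of the difference, again with invented names, continuing the toy filter above:

    ALLOWLIST = {"me@example.com"}  # senders the customer has vouched for
    SPAM_THRESHOLD = 0.0

    def should_deliver(sender: str, message: str, classifier) -> bool:
        # A working allowlist short-circuits here, BEFORE the classifier
        # runs, so no amount of bad scoring can block a listed sender.
        if sender in ALLOWLIST:
            return True
        return classifier.score(message) < SPAM_THRESHOLD

    def broken_should_deliver(sender: str, message: str, classifier) -> bool:
        # What "the whitelist will never work" implies: the spam verdict
        # is consulted first, so once the algorithm has branded you a
        # spammer, the allowlist is never even checked.
        if classifier.score(message) >= SPAM_THRESHOLD:
            return False
        return True

If your "whitelist" behaves like the second function, it isn't a whitelist; it's a placebo for the support queue.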

I responded that a) I was not going to train their effing algorithm; they charge me for the email service, and it's not up to me to provide free labor to improve their product; and b) Mailchannels was publishing false and defamatory claims about me, which damaged my reputation, resulted in denial of services from my email provider, and could have resulted in loss of revenue.

I also said that the fact that they told me there were false positives and asked me to "train" their algorithm showed that they knowingly put a defective product on the market, and that their pattern of negligence was an intentional part of their business model, which could leave them vulnerable to RICO (organized crime) prosecution (I don't know if that last part is true, but LOL).

The email provider also failed to train their support staff to correctly identify and remedy the problem on the first call. They had all the information, but they hadn't trained their staff to operate their own system correctly, which further delayed the availability of my contracted-for services and exposed me to reputational and financial harm. At one point a support person told me, "We are not affiliated with Mailchannels." Um, what? Then why are you using them to defame me as a spammer? The rep clearly hadn't been trained properly, which points to a larger pattern in the tech world.

This episode illustrates several patterns which are near-ubiquitous among tech companies, to wit:

  1. Profit and revenue as the sole driver of business decisions (Boeing can't make functional airplanes any more because a company started by engineers has been taken over by bean counters).
  2. The misrepresentation of business decisions as inherent features of technology.
  3. The crippling of public awareness of the way things could be, or used to be.

There was a recent story about how Twitter's algorithms can't distinguish between white supremacist hate groups and Republican politicians, so Twitter just decided not to moderate white supremacists.

This was mostly presented as an "OMG, Republicans are indistinguishable from white supremacists, LOL" story. But there's another question that has been almost entirely overlooked: whether algorithms have any place in this process at all. The thinking is, "Well, the algorithm doesn't work, 'technology' doesn't do 'X,' so we're stuck with a host of social problems brought on by the advent of new technology."

But nobody says "maybe we shouldn't be using algorithms for editorial functions at all." Maybe we should hire human beings and pay them. The tech companies have propagandized us to the point where we think it is normal and natural that they have replaced a human workforce with automation, before they even had a human workforce.

Their enormous profits (Facebook's stock has gone up 45% in 6 months; their revenue in 1Q 2019 was $15 billion) are due to the fact that they are harvesting behavioral data and dumping the toxic waste from that process into the environment, without any meaningful quality control in the product or pollution control in the manufacturing process. Their "algorithms" are essentially crappy pollution control devices. If they were forced to install better pollution control, it might make a tiny dent in their obscene profit margins.

There is nothing inevitable or necessary about this. We don't tolerate it with self-driving cars, for example. When a self-driving car kills someone, we don't let the company say, "well, the algorithm doesn't prevent that, so we'll just change the product requirements." Or, "that's just the way self-driving cars work; society will have to get used to it."

In the 1960s, industrial pollution got so bad that the Cuyahoga River in Cleveland caught on fire. The dramatic images from that event helped get the Clean Water Act passed. We need a Clean Information Act to get the tech companies to clean up their acts.