Here are the five articles you're forced to read in AI hell

The devil went down to Silicon Valley; he was looking to steal a soul. But he ended up taking a consulting gig with Palantir instead.

In the meantime, the algorithm is now responsible for punishing the wicked. And these days the sign above the gates of hell reads "Abandon Open Source," with an Amazon smile printed underneath.

Those condemned to an eternity of pain and suffering are now forced to read the same five AI articles over and over again.

Which sounds a lot like reading tech news back here on Earth anyway. Don't believe me? Let's take a look inside.

Number one: "This article was written by AI."

No it wasn't. These articles usually involve a text generator such as OpenAI's GPT-3. The big idea is that the journalist pays for access, or collaborates with OpenAI, to get GPT-3 to generate text from various prompts.

The journalist asks something silly like "Can AI ever think like a human?" and GPT-3 uses that prompt to generate some number of outputs.

Then the journalists and editors go to work. They select the best outputs, mix and match the sentences that make the most sense, and throw away the rest.

This is tantamount to taking Stephen King's collected works, copying and pasting a single sentence from each book into a Word doc, and then claiming you've published a completely new book from the master of horror.
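For the curious, the mechanics are roughly this. Below is a minimal sketch using OpenAI's legacy completions API; the model name, prompt, and parameter values are illustrative guesses at the workflow, not anyone's documented process:

```python
import openai

openai.api_key = "sk-..."  # the access the journalist paid for

# Ask the model something silly, many times over.
response = openai.Completion.create(
    engine="davinci",      # the original GPT-3 model family
    prompt="Can AI ever think like a human?",
    max_tokens=150,
    temperature=0.9,       # high temperature = more varied outputs
    n=10,                  # ten candidate completions to pick from
)

# The "writing" step: a human cherry-picks whatever reads best.
for i, choice in enumerate(response.choices):
    print(f"--- candidate {i} ---")
    print(choice.text.strip())
```

The model does the typing; the humans do the selecting. Which is exactly why "this article was written by AI" is a stretch.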

In hell, you stand in a long line to read hyperbolic, made-up stories about AI capabilities. And, as your final punishment, you must rewrite them for the next person in line.

Number two: "An AI company raises $100 million for no reason."

I remember reading about an early funding round for an AI company called PredPol. It had raised several million dollars to develop an AI system that was supposedly capable of predicting crime before it happened.

I'm sorry, you may not have read that correctly. It says: predict crime before it happens.

That is not possible. And I don't mean the technology doesn't exist yet; I mean it is not possible within the realms of classical or quantum physics.

You see, "crime" isn't generated from hot spots like mobs spawning in an MMO every five minutes. A first-year statistics or physics student understands that no amount of historical data can predict where new crimes will occur, especially because the past is literally not conscious. And, on top of that, it's impossible to know how many crimes have even been committed: most crimes go unreported.

PredPol cannot predict crime. It predicts arrests based on historical data. In other words: PredPol tells you where you have already arrested people and then says "try there again." Simply put: it will not work because it cannot work.
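If you want to see how circular that is, here's a toy sketch (mine, with made-up place names, not PredPol's actual model) of what "predicting" from arrest data amounts to:

```python
from collections import Counter

# Made-up historical arrest records for three fictional neighborhoods.
historical_arrests = ["Downtown", "Downtown", "Eastside",
                      "Downtown", "Eastside", "Harbor"]

def predict_hotspots(arrests, k=2):
    """'Predict' crime by ranking the places with the most past arrests."""
    return [place for place, _ in Counter(arrests).most_common(k)]

for day in range(3):
    hotspots = predict_hotspots(historical_arrests)
    print(f"Day {day}: send patrols to {hotspots}")
    # Patrols concentrate where the model points, so new arrests happen
    # there, which feeds straight back into tomorrow's "prediction".
    historical_arrests.extend(hotspots)
```

Run it and the same neighborhoods win every day, because the patrols the model dispatches generate the very data it learns from.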

But it raised money and raised money until one day it grew into a multi-billion-dollar, full-grown company, all for nothing.

In hell, you have to read funding stories about billion-dollar AI startups that don't actually do anything or solve any problems. And you're not allowed to skip them.

Number three: "Facebook's new AI makes everything you hate about Facebook 93.5% better."

There are variations on this one: "Google's AI shows a 72% reduction in racial bias," "Amazon's new algorithm is 87% better at spotting and removing Nazi products from its storefront," and so on. They are all bunk.

Big tech's favorite PR firm is the mainstream media.

Facebook, for example, says something like "our new algorithms are 80% more efficient at detecting and removing toxic content in real time," and that's when the game of telephone begins.

You'll see half a dozen reputable news outlets print headlines that basically say "Facebook's new algorithms make it 80% less toxic." And that's just not true.

If a chef told you they'd adopted a new cooking method that means 80% less fecal matter is found in the soup they are about to serve you, you probably wouldn't think that was a good thing.

Increasing an algorithm's efficiency does not produce a general, across-the-board increase in the system's effectiveness. And, because checking the statistics is extremely difficult when you don't have access to the actual data being discussed, the people who write these stories just take the tech company's marketing team at its word.
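Here's some back-of-the-envelope arithmetic with entirely made-up numbers, under one plausible reading of "80% more efficient":

```python
# Suppose (hypothetically) the old system caught 10% of toxic posts.
baseline_catch_rate = 0.10

# "80% more efficient" read as a relative improvement on that baseline.
improved_catch_rate = baseline_catch_rate * 1.80

print(f"new catch rate: {improved_catch_rate:.0%}")              # 18%
print(f"still slipping through: {1 - improved_catch_rate:.0%}")  # 82%
```

An 82% miss rate is nobody's idea of a triumph, but "80% more efficient" makes a much better headline.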

In hell, you have to read articles about big tech companies whose only sources are people who work at those companies and whose statistics are unverifiable.

Number four: "Ethics aside, this AI is great!"

We have all read these stories. They cover the biggest issues in the AI world as if they were writing about the weather.

The story will be something like "Clearview AI gets new government contracts," and the coverage quotes a politician, Clearview's CEO, and someone representing law enforcement.

The title of the piece is "Ethics aside, law enforcement agencies say these results are invaluable."

And then, way down at the very end of the article, you'll see the obligatory "studies have shown that facial recognition struggles to recognize some faces. Experts warn against using such technologies until this bias is resolved."

In hell, every AI article you read starts with the sentence "This doesn't work as well for Black people or women, but we're just going to move past that as if it doesn't matter."

Number five: "Exclusive: Study shows more than 80% of CEOs named Bill know what AI is."

My favorite genre of AI article is the kind that sets out to tell me what non-experts think.

These are articles with headlines such as "Study: 80% of people think AI will be conscious within ten years" and "75% of mothers think Alexa poses a risk to children."

These "surveys" are usually run by consulting firms that specialize in this sort of thing. And they're usually not conducting studies on spec and hoping a journalist finds the work appealing: they get paid to do the "research."

And by "research," I mean: sourcing answers on Amazon Mechanical Turk or handing students on campus a gift card to complete a questionnaire.

These surveys are often bought and paid for by an AI company as a marketing tool.

These pitches, in my inbox, usually look something like: "Hey Tristan, did you hear that 92% of C-suite executives don't know what Kubernetes is? Interested in this exclusive study, and in a conversation with Dr. Knows Itall, founder of the Online School for Good AI Learning? They can speak to the challenges of hiring quality IT talent."

Can you smell the rubbish?

In hell, the algorithm tells you that you can read articles covering real computer science research as soon as you finish reading all the vapid AI survey pieces published by mainstream outlets.

But you're never done, are you? There's always another. "What do soccer dads think about sexy voice assistants?" "What percentage of people think Data is a character on Star Trek?" "Will driverless cars become a reality in 2022? Here's what Tesla owners think."

Yes, AI hell is a place full of horrors beyond comprehension. And, in case you haven't figured it out yet, we are already here. Consider this article your guide.

Now, if you'll just sign in to Google News, we can get started (Apple News isn't available in hell right now because of legal issues related to the App Store).
