Artificial Intelligence (AI) is becoming increasingly popular among service providers, including banks and search engines, on the basis that it provides the same or a better level of service at a lower cost. This is all very fine until the AI goes wrong.
I have seen two examples of AI failure in the last 24 hours. In one, Twitter permanently suspended a user for saying that he had killed a fly; in the other, one of my own shortened Google links was banned for allegedly pointing to a child abuse web site when it actually pointed to a government web site on protecting children (https://goo.gl/1Nfo7 and https://www.esafety.gov.au/education-resources/iparent/online-safeguards/parental-controls). While the impact on the users in both of these cases is merely annoying, there is a significant risk that AI failure will have a serious impact on other people, including the risk of death.
In both cases, the AI engines appear to have been working on the words alone; there was no attempt at a semantic analysis of the text. Had such an analysis been undertaken, neither of these cases would have happened. Semantic analysis tools are readily available. That the people who implemented the two AI engines did not see fit to include any semantic analysis shows an obscene failure to discharge their duties.
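To see how words-alone filtering goes wrong, consider a toy sketch (this is purely illustrative, not how any real moderation system is built): a keyword-only filter flags "killed a fly" as violent, while even a crude context check on what was killed does not. The word lists and the two-words-ahead object heuristic are my own invented stand-ins for genuine semantic analysis.

```python
# Toy illustration of words-alone filtering versus a crude context check.
# Word lists and the object heuristic are invented for this example.

BLOCKED_WORDS = {"killed"}

def keyword_filter(text):
    """Flag text if it contains any blocked word, ignoring all context."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return any(w in BLOCKED_WORDS for w in words)

# A crude stand-in for semantic analysis: only flag a blocked verb when
# its apparent object is not on a harmless list.
HARMLESS_OBJECTS = {"fly", "mosquito", "bug", "time", "it"}

def context_filter(text):
    """Flag text only when the blocked verb's object looks harmful."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    for i, w in enumerate(words):
        if w in BLOCKED_WORDS:
            # Naively assume "verb + article + object", e.g. "killed a fly".
            obj = words[i + 2] if i + 2 < len(words) else ""
            if obj not in HARMLESS_OBJECTS:
                return True
    return False

print(keyword_filter("I just killed a fly"))   # True  - flagged on the word alone
print(context_filter("I just killed a fly"))   # False - the object is harmless
```

Real semantic analysis would of course use proper parsing rather than a two-words-ahead guess, but even this crude heuristic is enough to avoid banning a man for swatting a fly.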
Unless and until those organisations that use AI instead of real intelligence reduce the error rates of their AI engines by at least four orders of magnitude, I call a pox on all their houses!