Artificial Or Human Intelligence? Companies Faking AI

April 10, 2020

Artificial intelligence is a hot topic across the board — from enterprises looking to implement AI systems to technology companies looking to provide AI-based solutions. However, sometimes the technical and data-based complexities of AI challenge technology companies to deliver on their promises. Some companies are choosing to approach these AI challenges not by scaling back their AI ambitions, but by using humans to do the tasks they are otherwise trying to get their AI systems to do. This practice of humans pretending to be the machines that are supposed to be doing the work of humans is called “pseudo-AI”, or more bluntly, just faking it.

Faking It Till You Make It with AI

Most notably, a number of companies have claimed to use artificial intelligence to automate parts of their services such as transcription, appointment scheduling, and other personal assistant work, but in reality have been outsourcing this work to humans through labor marketplaces such as Amazon Mechanical Turk. While artificial intelligence may actually be used for part of the solution, or not at all, these companies have not been truthful in their claims that computers perform these services.

CNBC published an article critical of Sophia, the AI robot from Hanson Robotics. When CNBC approached the company with a list of questions it wanted to ask, Hanson Robotics responded with a list of very specific questions for Sophia, along with the responses. In a follow-up video, CNBC questions whether the robot represents genuine research into artificial intelligence or is just a PR stunt. Even the founder of Hanson Robotics has gone on record saying that most media encounters are scripted.

However, Hanson Robotics is only one of the more notable pseudo-AI encounters to make the news. Popular calendar scheduling services X.AI and Clara Labs were both found to be using humans, rather than purely artificial intelligence, to schedule appointments and calendar items. The above-linked article quotes human workers who wished the AI system actually worked as promised, because the work was such boring drudgery. While these companies were the unfortunate ones to draw unwanted media attention, there are no doubt many others pursuing the approach of using humans as a stopgap where their AI systems fall short.

What’s Wrong with Pseudo-AI?

It isn’t unheard of, though, for companies in the tech industry to fake some or all of their services, especially when they are starting out. While this may work in some areas of tech, does it work for AI? Some would say no.

The entire premise of AI is that it can accomplish feats that previously only humans were able to do. Faking AI capabilities therefore undermines the very essence of what AI promises. AI is still an emerging field, and new artificial intelligence technology is being developed every day. Emerging companies are pitching technology solutions whose premise is that they can accomplish technically challenging tasks. But rather than delivering on those promises, these companies are simply performing technology-enabled outsourcing to humans, often at very low wages.

Incidents like this have the potential to create serious problems for the tech industry. One of the biggest is the risk of an AI winter: if people regularly see AI products being faked, they will be less likely to invest in AI technology. AI winters, which are periods of declining interest and funding in AI, have previously been triggered by substantial overpromising and underdelivering of AI capabilities. Widespread use of pseudo-AI to cover up AI deficiencies could lead to disenchantment with AI as a whole and contribute to a broad AI pullback.

Another big issue with pseudo-AI approaches is the potential for breaches of privacy and confidentiality. A computer that processes information in isolation can safeguard data to varying extents, but putting random humans in the loop is a recipe for data privacy breaches.

For example, AI solutions that process information in regulated industries such as healthcare, finance, or government can be compromised by humans who are not permitted to view confidential or private information. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) was enacted to help ensure patient privacy. If a company lies about using computers to schedule appointments that contain private patient data, it isn’t clear whether it can uphold the standards HIPAA sets for patient information privacy and security. This could land companies employing pseudo-AI approaches in a lot of hot water.

Besides these issues, we also need to consider the basic ethical implications of faking artificial intelligence. If a company isn’t disclosing its use of humans, there is a major ethical issue at hand: is it okay to lie to your customers, and to the public in general?

Schmelzer, R. (2020, April 4). Artificial Or Human Intelligence? Companies Faking AI.
