AI, Machine Learning and Quantum Computing: scary, isn’t it?

// December 17, 2019

I have bad news and good news. The bad news: the vast majority of the population is totally ignorant about AI[1] and tech in general. The good news is that more and more people are starting to realise that AI is no longer just a futuristic buzzword; a surprising share of our daily habits, hobbies, routines and experiences are getting a whole lot easier thanks to AI. Naturally, AI comes with its risks and dangers: AI does fail, and AI can be used maliciously.

AI is making mistakes

Yes, AI is not perfect.

Google Photos is still learning

Google Photos made big headlines in 2015 by incorrectly labelling black people as ‘gorillas’. In that case, the problem lay (and still lies) in the imperfection of AI. Almost five years ago, in the specific case of Google Photos, the margin of error was so large that dogs were identified as horses, clocks as cups, and so on. Of course, some mistakes are just more noticeable (or more offensive) than others. [1]

Amazon’s recruiting tool does not like women

Amazon, another big name, had a similar blunder. The company had been developing and actively using an AI recruiting tool to help its HR staff review applicants’ CVs and select potentially matching candidates. In short, the tool compared newly submitted applications with those submitted over the past 10 years, particularly with those of successful employees. Since the majority of Amazon’s employees were men, the machine learning-based tool – as you may guess – “did not like” female applicants, penalising CVs that included the words “women” and “women’s”. Even though the algorithm itself may have worked as it was supposed to, the biased dataset led the AI into mistakes that amounted to unfair discrimination. [2]
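To make the mechanism concrete, here is a deliberately tiny Python sketch – nothing like Amazon’s actual system, whose internals were never published, just a naive word-frequency scorer trained on a made-up, biased hiring history. Because the keyword “women’s” happens to appear only in rejected applications, the scorer learns to penalise it, even though the word says nothing about qualifications:

```python
# Toy illustration of dataset bias (hypothetical data, naive model):
# each word's weight is its frequency among hired candidates minus
# its frequency among rejected ones.
from collections import Counter

# Hypothetical historical data: mostly male hires, so "women's"
# shows up only on the rejected side.
hired = [
    "engineering captain chess club",
    "engineering hackathon winner",
    "software engineering lead",
]
rejected = [
    "women's engineering society engineering",
    "women's chess club captain",
]

def word_weights(hired, rejected):
    pos, neg = Counter(), Counter()
    for resume in hired:
        pos.update(resume.split())
    for resume in rejected:
        neg.update(resume.split())
    vocab = set(pos) | set(neg)
    # weight = freq(word | hired) - freq(word | rejected)
    return {w: pos[w] / len(hired) - neg[w] / len(rejected) for w in vocab}

weights = word_weights(hired, rejected)

def score(resume):
    return sum(weights.get(w, 0.0) for w in resume.split())

# Two otherwise identical CVs -- one extra word, and the score drops.
print(score("engineering captain"))          # about -0.17
print(score("women's engineering captain"))  # about -1.17
```

The code is “correct” in the narrow sense – it faithfully learns the statistics of its training data. The discrimination comes entirely from what that data encodes.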

UBER and transgender drivers

Last but not least is Uber’s case. The ridesharing giant has been using state-of-the-art facial recognition software to ensure that drivers are not cheating and are using their own Uber driver accounts. Drivers are occasionally asked to take a selfie to check whether it matches the photo Uber has in its database (a security feature called “Real-Time ID Check”). Unfortunately, transgender drivers ran into a problem: at different stages of their transition, Uber’s security feature was no longer able to recognise a driver’s face, and their accounts would be suspended. To reverse the suspension, trans drivers had to travel to an Uber support centre to verify their identity and upload new photos. However, there were no guarantees it would not happen again in a few months. Due to this oversight in Uber’s software, transgender drivers were unfairly prevented from working. [3]

To make a long story short, AI mistakes generally stem either from imperfect algorithms or from faulty or incomplete datasets used for machine learning. AI cannot avoid human errors when its creator makes them – on the contrary, a human error is magnified, as the same mistake is repeated thousands upon thousands of times.

AI mistakes can be corrected by technical means – improving the code, the algorithms and the datasets.
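“Improving the dataset” can be shown in miniature. The sketch below (toy data, a naive word-frequency scorer – purely illustrative, not any vendor’s real pipeline) trains the same crude model twice: once on a biased history where a keyword appears only among rejections, and once on a rebalanced history where it appears on both sides. The keyword’s learned weight goes from strongly negative to zero:

```python
# Toy sketch: fixing a biased model by fixing its training data.
from collections import Counter

def word_weights(hired, rejected):
    """Crude word score: frequency among hired minus among rejected."""
    pos, neg = Counter(), Counter()
    for resume in hired:
        pos.update(resume.split())
    for resume in rejected:
        neg.update(resume.split())
    return {w: pos[w] / len(hired) - neg[w] / len(rejected)
            for w in set(pos) | set(neg)}

# Biased history: the keyword shows up only among rejections.
biased = word_weights(
    hired=["engineering lead", "engineering intern"],
    rejected=["women's engineering society"],
)

# Rebalanced history: the keyword appears equally on both sides.
balanced = word_weights(
    hired=["women's engineering society lead", "engineering intern"],
    rejected=["women's chess club", "chess club"],
)

print(biased["women's"])    # -1.0: strongly penalised
print(balanced["women's"])  # 0.0: no longer predictive either way
```

The algorithm is untouched between the two runs; only the data changed. That is why dataset curation is as much a part of “fixing the AI” as fixing the code.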

AI may be used for bad

The use of AI for unethical purposes is, in my opinion, a far more serious issue.

AI is extremely powerful – if anyone with enough capital and malicious intent gets their hands on an AI that can, let’s say, rather accurately guess a person’s sexual preferences [4], the consequences may be dire. As an even more disturbing example, some governments already use AI-powered facial recognition to control their citizens via social score (credit) systems [5]. AI can also be used to create very convincing “deep fake” videos of celebrities and politicians [6]. Thus, by putting too much trust into it, we run the risk of ending up in a kind of “Black Mirror” world.

New technologies such as quantum computing will dwarf all the machines that have so far run AI/ML algorithms – clearly one of the most spectacular leaps in humankind’s computing and processing capacity. Just imagine: to train an ML-driven autonomous car to navigate a particular type of street, it took years to collect the required dataset. With a quantum computer this will no longer be a problem – AI will be able to train itself almost instantly on synthetic data. Last October, Google allegedly reached quantum supremacy – the point at which a quantum computer performs a calculation that is practically impossible for a classical one [7].

AI and its use clearly raise issues of human rights, non-discrimination and equal opportunities, and this was understood as early as the 1940s, when Isaac Asimov’s Three Laws of Robotics[1] came to light.

The law still has a chance to intervene. Recently, the European Commission issued its Ethics Guidelines for Trustworthy Artificial Intelligence [8], and the US Congress introduced the Algorithmic Accountability Act [9]. These documents include principles such as human oversight, diversity, non-discrimination, privacy protection and accountability, intended to ensure that AI does not cause harm to humans. Even in China, where the development of AI has gone largely unregulated (allegedly to help launch China to the forefront of AI, deep learning and machine learning technologies [10]), the government has issued guiding principles for artificial intelligence research and applications [11].

AI has also changed the legal face of entire industries: fintech, adtech, social networking, media, transport and mobility, online platforms, online pricing algorithms, and more.

The law usually lags behind technology, but there are signs that legislators will manage to regulate AI without impeding its development. On the one hand, by over-regulating they risk hindering progress; on the other hand, with no regulation at all we might be approaching one of the doomsday scenarios.

Artificial intelligence is the future. The million (or, to be more accurate, trillion) dollar question is: how does one balance the colossal opportunities against threats that may be difficult to predict?

[1] Note: I am using AI and machine learning interchangeably for the sake of simplicity – I realise that they do not necessarily mean the same thing.

[1] First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
