07/06/2017

Is regulation of AI algorithms the way forward?

As more and more parts of our lives are made easier or enhanced by technology and artificial intelligence (AI), has anyone stopped to think about what goes on behind the scenes? Of course, when we see that a process has been automated and has become less time-consuming as a result, we are unlikely to consider how this was accomplished, or what complex web of data and programming lies behind it.

The popularization of Big Data has contributed to new regulations, such as the General Data Protection Regulation (GDPR) in Europe and the Personal Data (Privacy) Ordinance (PDPO) in Hong Kong. While these are advancements that help protect us as contributors or producers of data, what impact, if any, do these regulations have on the means of collecting and using these data, particularly by non-manned means such as algorithms and AI?

In this article, we will briefly touch on the pros and cons of regulating algorithms in general, and AI specifically. While no such regulation exists today, as adoption of AI grows, it will be interesting to see how the role of regulators plays out.

Existing Global Regulations

As mentioned, the GDPR and PDPO are two regulations governing the collection and use of data and the protection of the data’s producer. Other similar regulations are in place around Asia as well:

1. Personal Data Protection Act (PDPA) – Singapore
2. Measures for Punishments Against Infringements on Consumer Rights and Interests – China
3. Act on the Protection of Personal Information – Japan

Although these regulations apply to different countries and regions, they do share common principles:

1. Collection – data should be collected in a lawful way and only as necessary for its intended use.
2. Accuracy & Retention – steps should be taken to ensure collected data is accurate and is not kept longer than necessary.
3. Purpose & Use – collected data should only be used for the purpose for which it was originally collected, with explicit consent from the data subject.
4. Security & Management – steps should be taken to safeguard personal data from unauthorized manipulation, theft, etc.
5. Transparency – data users should make available information about how, and for what purpose, a subject’s data is being collected.

While the United States does not have a single regulator or blanket definition of how to regulate personally identifiable information (PII), personal data is governed by regulatory authorities and laws specific to each industry or sector. An example is the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which is administered by the Department of Health and Human Services.

Regardless of jurisdiction, the general principles tend to focus on the use or handling of personal data.  This often includes a “right to explanation”, as seen with GDPR, which carries the notion of accountability for AI algorithms, but the requirements are subject to interpretation. 

This means that producers or contributors of data have the right to request a description of the logic or an explanation of the decisions made by AI algorithms. However, since the exact obligations are not defined, companies can opt out of disclosure by citing trade secrecy. This leaves the algorithms a mystery and keeps end users at the mercy of the technology, forced to accept results because the company ‘said so’.

Why consider regulating AI?

There is no doubt that the use of AI algorithms is spreading rapidly and that people’s daily lives are increasingly affected. With this, the novelty begins to wear off and the population slowly becomes less tolerant of errors. When unsatisfactory outcomes are the result of an algorithm, people begin to ask: why did this happen? What other scenarios could cause, or result from, this? What other information is being collected about me? Is this safe?

Programmed Bias

As we rely on algorithms to make more and more autonomous decisions, are those decisions actually impartial? Programmers can claim that the logic they input produces factual and neutral outcomes, but as the machine learns over time, cases have been identified where the outcomes do exhibit bias. As explained by the Harvard Business Review1, machine learning can be equated to a parent’s control over what their child is exposed to – television programs, video games, the school environment, etc.
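To make this concrete, the toy Python sketch below shows how a model fit naively to historically biased decisions simply reproduces that bias. The groups, data and function names are all invented for illustration:

    # Hypothetical illustration: a screening "model" fit to past hiring
    # decisions. If the historical decisions were biased, the learned
    # acceptance rates encode that bias, and the model repeats it.

    historical_decisions = [
        # (applicant_group, qualified, accepted) -- invented data
        ("group_a", True, True), ("group_a", True, True),
        ("group_a", False, False), ("group_a", True, True),
        ("group_b", True, False), ("group_b", True, True),
        ("group_b", False, False), ("group_b", True, False),
    ]

    def learn_acceptance_rates(decisions):
        """Learn P(accepted | group, qualified) from past outcomes."""
        counts = {}
        for group, qualified, accepted in decisions:
            key = (group, qualified)
            seen, yes = counts.get(key, (0, 0))
            counts[key] = (seen + 1, yes + int(accepted))
        return {key: yes / seen for key, (seen, yes) in counts.items()}

    rates = learn_acceptance_rates(historical_decisions)
    # Equally qualified applicants end up scored differently by group:
    print(rates[("group_a", True)])  # 1.0
    print(rates[("group_b", True)])  # ~0.33

Note that the learning logic itself is neutral; the unequal scores for equally qualified applicants come entirely from the historical data the model was trained on.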


One example, noted by The Guardian in an article published in January 20172, points to an algorithm used to sort through medical school applications that tended to discriminate against students with non-European-looking names, as well as a pilot who was mistaken for an Irish Republican Army (IRA) leader and subsequently stopped at numerous airports on several occasions. Another instance can be seen in court systems that use COMPAS to help determine sentences and parole terms. Algorithms used by this system were reported to be biased, predicting a higher chance of repeat offenses for certain demographic groups.

Moral Concerns

Another point of apprehension is an algorithm’s ability to make decisions involving moral judgement. Can algorithms choose the “right” decision? One key product of new technology is the development of self-driving cars. On the surface, these vehicles give the impression of a safer ride, as they use sensors to observe the external environment, predict accidents and improve the vehicle’s efficiency. However, they also pose the ethical dilemma known as the “trolley problem”. The New York Times3 describes it as follows: “First introduced in 1967 by Philippa Foot, a British philosopher, the trolley problem is a simple if unpleasant ethical thought puzzle.” In the self-driving context, this arises when a collision is unavoidable and the vehicle must choose whom to endanger – for example, whether the algorithm controlling the car protects the driver or a group of pedestrians. Further, general safety regulations need to be considered: when should the car be more or less assertive, or even break the rules of the road? A recent example was reported in the United States when a self-driving Uber was involved in a car accident earlier this year4. The self-driving car, a Volvo SUV, drove through a yellow light (at a speed slightly below the speed limit) and collided with another, non-self-driving car. Should the car have recognized the yellow signal and stopped instead of trying to beat the light?

Technological Errors

Programming bias and moral dilemmas aside, algorithms are another piece of technology, and sometimes technology simply doesn’t work when or how the user wants it to. In the earlier example of the airline pilot mistaken for a person of interest, the outcome could have been due to a programming bias based on his physical appearance. It could also have been a simple technological error.

Such outcomes can have serious negative impacts on victims’ lives in both the short and the long term. As mentioned earlier, data protection laws generally allow impacted persons to challenge the result of an automated decision and ask for the logic or reasoning behind it. However, companies can easily hide behind the “trade secret” veil.

Favoring Regulation

Increased transparency of algorithms and their outcomes

As noted above, and further supported by Article 15 of the GDPR, the public and government authorities already have the right to request information about the algorithms that affect them. However, loopholes in what companies are obligated to share often leave inquiries insufficiently answered. Introducing regulations specific to the algorithms themselves may provide the additional clarity being requested. This could lift the “trade secret” veil enough to satisfy the inquirer without forcing the company in question to give up its competitive edge completely. For example, the standard minimum information companies are required to provide could include which specific data points are collected, along with a clear but high-level business explanation of how the data is used and for what purpose.
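As a thought experiment, such a minimum disclosure could even be captured in a standard, machine-readable form. The Python sketch below is purely illustrative – the field names, contact address and example values are all invented:

    # A hypothetical "minimum disclosure" record of the kind such a
    # standard might require -- every field name here is invented.

    algorithm_disclosure = {
        "name": "Property Value Estimator",  # plain-language identifier
        "purpose": "Provide a starting-point estimate of a home's value",
        "data_points_collected": [
            "public sale records",
            "property square footage",
            "neighborhood sale prices",
        ],
        "high_level_logic": (
            "Compares the property against recent sales of similar "
            "properties in the same area and adjusts for size and age."
        ),
        "is_professional_appraisal": False,  # stated limits of the output
        "point_of_contact": "privacy-office@example.com",  # for inquiries
    }

A record like this answers an inquirer’s basic questions – what is collected, why, and whom to contact – without exposing the proprietary model itself.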

Protecting the user

Imposing regulation on algorithms can also help protect end users. Simple requirements, such as clearly identifying and documenting a liability scheme (an insurance policy, for example) or establishing an external review board, would help instill a greater sense of responsibility on the development side of AI. Users or data contributors would benefit from a level of legal protection in the event of an error by the algorithm. It would also reinforce accountability on the part of AI providers, as the onus could no longer be deflected onto the AI itself.

Negative Impacts of Regulation

Tight regulations inhibit creative development

While some of the upsides of imposing regulation may be obvious from a user perspective, the same benefits may not be realized from a development standpoint. Additional reviews or assessments by an official oversight council would increase time to market, delaying the producing company’s realization of benefits from the AI. Development of the algorithms themselves could also become more complex if multiple regulations need to be taken into account during the build. Some would even argue that increased oversight hinders creativity and the progress of scientific research and development.

Impractical to explain every nuance of algorithms

In line with the increasing complexity of development, applying strict disclosure requirements regarding AI data collection, purpose, etc. may not be practical. The inherent nature of AI and algorithms is to allow technology to come to independent conclusions. One of the benefits of AI is continued learning and adaptation to incoming data. As the AI learns, it adjusts its statistical references and can make increasingly complex decisions. Attempting to explain every decision or outcome resulting from the use of algorithms may not be feasible. Developers may not know themselves why the algorithm behaved a certain way, especially if the behavior is a result of learning over time and was not anticipated. The effort and resources required to provide a satisfactory response may not be reasonable from a business perspective.
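A toy Python sketch (with invented data throughout) illustrates why: in an online learner, the weights – and therefore the “reason” for any single decision – shift with every example the model has ever seen:

    # Hypothetical illustration: an online perceptron updates its weights
    # with every example, so the "reason" for any given decision depends
    # on the entire history of data the model has encountered.

    def train_step(weights, features, label, lr=0.1):
        """One online update: nudge the weights whenever the model errs."""
        score = sum(w * x for w, x in zip(weights, features))
        prediction = 1 if score > 0 else 0
        if prediction != label:
            sign = 1 if label == 1 else -1
            weights = [w + sign * lr * x for w, x in zip(weights, features)]
        return weights

    weights = [0.0, 0.0]
    stream = [([1.0, 0.2], 1), ([0.3, 1.0], 0), ([0.9, 0.1], 1)]
    for features, label in stream:
        weights = train_step(weights, features, label)
        print(weights)  # the same input can be scored differently at each stage

After even this short stream, the final weights reflect the entire history of data rather than any single rule a developer wrote down, which is what makes a fixed, after-the-fact explanation so hard to produce.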

How to enforce any proposed regulations

Oversight

Companies should be held accountable for the algorithms they produce and publish from the beginning of the life cycle. Internally, algorithm developers can:

  • Establish forums for discussion and review of the technologies by all stakeholders, as well as flexibility in team dynamics to amend the logic as necessary
  • Implement clearly defined procedures to field any inquiries or urgent requests
  • Establish regular audits of the logic and its outputs to ensure consistent behavior and to allow for updates or corrections as the algorithm learns (e.g. adjusting for discrimination trends) – a minimal sketch of one such audit check follows this list
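
One form such an audit could take is a simple disparate-impact check across demographic groups. The Python sketch below (with invented groups and outcomes) applies the widely cited “four-fifths” rule of thumb, flagging any group whose favorable-outcome rate falls below 80% of the best-treated group’s rate:

    # Hypothetical audit check: compare an algorithm's favorable-outcome
    # rates across groups and flag any group whose rate falls below 80%
    # of the best-treated group's rate (the "four-fifths" rule of thumb).

    def disparate_impact_audit(outcomes, threshold=0.8):
        """outcomes: list of (group, favorable). Returns flagged groups."""
        totals, favorable = {}, {}
        for group, fav in outcomes:
            totals[group] = totals.get(group, 0) + 1
            favorable[group] = favorable.get(group, 0) + int(fav)
        rates = {g: favorable[g] / totals[g] for g in totals}
        best = max(rates.values())
        return {g: r for g, r in rates.items() if r < threshold * best}

    sample = ([("group_a", True)] * 8 + [("group_a", False)] * 2
              + [("group_b", True)] * 5 + [("group_b", False)] * 5)
    print(disparate_impact_audit(sample))  # {'group_b': 0.5} -- needs review

Run periodically against the algorithm’s actual outputs, a check like this would surface drifting discrimination trends early enough for developers to amend the logic.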

Transparency

Understanding that developers may be hesitant to reveal their trade secrets, companies whose algorithms affect the public should still provide a minimum level of transparency. A meaningful, plain-language description of the purpose of the algorithm and the types of data it collects or manipulates should be made easily available to impacted persons. In addition, a clear point of contact should be identifiable in the event of any inquiries or complaints.

Conclusion

The case can easily be made for, or against, imposing additional regulation on these technologies, depending on one’s perspective. While some may argue that AI is not yet mature enough to be regulated, the business benefits of AI (for example, Forbes cites unstructured data as over 80% of the data enterprises use to make decisions and ultimately gain a competitive edge) and the continued implementation of digital strategies across the board point to rapidly growing adoption of AI and the corresponding algorithms going forward.

This need for oversight can already be seen in a lawsuit filed against an online real estate marketplace in the United States. According to the Chicago Tribune5, earlier this year a homeowner sued the business over the underestimated “appraisal value” listed for her home on the website, which she says is hindering the sale of the home at what she believes is a fair (higher) price. The company claims the estimate is calculated by a proprietary algorithm that references public records, and that it does not endorse the figure as a professional appraisal, merely as a starting point for property valuation. However, the lawsuit argues that the website should “obtain the consent of the homeowner before posting [it] online for everyone to see” and, even more importantly, should have a proper license to conduct such valuations in the first place.

Based on this trajectory, the need for regulation could become a real issue in the next year or so. As long as algorithms remain shrouded behind the proprietary shield, instances like this – and others with even greater impact – will continue to arise. The decision will need to be made whether current data protection rules are sufficient, or whether further controls need to be placed at the collection point, the output point, or some combination of both.


1. https://hbr.org/2016/11/teaching-an-algorithm-to-understand-right-and-wrong
2. https://www.theguardian.com/technology/2017/jan/27/ai-artificial-intelli...
3. https://www.nytimes.com/2016/06/24/technology/should-your-driverless-car...
4. http://www.businessinsider.com/uber-self-driving-car-accident-arizona-po...
5. http://www.chicagotribune.com/classified/realestate/ct-re-0514-kenneth-h...
