Are the Robots Coming? MBIE seeks submissions in relation to use of AI by IPONZ


The Ministry of Business, Innovation and Employment (MBIE) has called for submissions regarding the use of Artificial Intelligence (AI) by IPONZ, including use to make "complex discretionary decisions". In principle, this could include an AI system deciding whether to grant IP rights on New Zealand patent, trade mark, or design applications.

Background

Artificial intelligence systems have undergone rapid development in the last few years. In May 2018, WIPO held a meeting on the use of AI by Intellectual Property Offices (IPOs), and presentations given by delegates to the meeting are available online. Currently, IPOs are using AI systems for classifying patents and trade marks, assisting with prior art searches for patents, searching for figurative trade marks, and machine translation of patent specifications.

Thus, AI systems are currently used by IPOs as tools, but no IPO is using an AI system to make "complex discretionary decisions", such as whether to register or grant IP rights. At present, New Zealand law allows the Commissioner of Patents, Trade Marks and Designs to delegate decision-making to patent, trade mark and design examiners, but the Commissioner does not currently have the power to delegate decision-making to an AI system.

In Australia, the Intellectual Property Laws Amendment (Productivity Commission Response Part 1 and Other Measures) Act 2018 has given the Australian Commissioner of Patents, Designs, Plant Breeder's Rights and Trade Marks carte blanche to use "computerised decision-making". MBIE suggests that a more cautious approach could be taken to amending New Zealand's IP laws.

What are "AI systems"?

Defining artificial intelligence is difficult. The following schematic illustrates a commonly used breakdown of AI technologies:

[Schematic: Artificial Intelligence technologies]

Some examples of machine learning technologies currently in use by the New Zealand Government include the ROC*ROI tool, which generates a score expressing the probability that an offender will be reconvicted and re-imprisoned for new offending; a School Transport Route Optimiser; and an elective surgery prioritization tool.
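
To illustrate the general shape of such a scoring tool, the following sketch combines weighted risk factors into a probability using a logistic function. It is a deliberately simplified, hypothetical example; the feature names, weights and threshold are invented for illustration and bear no relation to the actual ROC*ROI model.

    import math

    # Hypothetical, simplified illustration of a risk-scoring tool.
    # The feature names and weights are invented for this example;
    # they are NOT the actual ROC*ROI model or its coefficients.
    WEIGHTS = {
        "prior_convictions": 0.30,
        "age_at_first_offence": -0.05,
        "months_since_release": -0.02,
    }
    INTERCEPT = -1.0

    def reconviction_score(offender: dict) -> float:
        """Return a probability-like score between 0 and 1."""
        linear = INTERCEPT + sum(
            WEIGHTS[name] * offender.get(name, 0.0) for name in WEIGHTS
        )
        return 1.0 / (1.0 + math.exp(-linear))  # logistic function

    example = {
        "prior_convictions": 4,
        "age_at_first_offence": 19,
        "months_since_release": 6,
    }
    print(f"Estimated risk score: {reconviction_score(example):.2f}")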

Neural networks exhibit more complex behaviour and can be capable of generating new outputs based on a large set of input (training) data. Examples of experimental neural network projects include Skyknit, a neural network which, after being trained on knitting patterns, generates its own patterns with intriguing results, and a neural network trained to name paint colours. Neural network technology is in a nascent stage, but shows the potential to take on complex tasks in the future.
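
To give a flavour of how a generative system produces new output from training examples, the sketch below uses a simple character-level Markov chain as a stand-in for the neural-network approach that projects like Skyknit actually use; the "training" lines are invented placeholders, not real knitting patterns.

    import random
    from collections import defaultdict

    # Toy character-level text generator: a Markov chain used here as a
    # simple stand-in for a generative neural network. The "training"
    # lines below are invented placeholders.
    TRAINING_LINES = [
        "knit 2, purl 2, repeat to end of row",
        "purl 1, knit 3, repeat to end of row",
        "knit 1, purl 1, knit 2 together, purl 2",
    ]
    ORDER = 3  # number of preceding characters used as context

    def build_model(lines):
        """Map each ORDER-character context to the characters observed after it."""
        model = defaultdict(list)
        for line in lines:
            for i in range(len(line) - ORDER):
                model[line[i:i + ORDER]].append(line[i + ORDER])
        return model

    def generate(model, seed, length=40):
        """Produce new text one character at a time from the learned contexts."""
        text = seed
        for _ in range(length):
            followers = model.get(text[-ORDER:])
            if not followers:
                break
            text += random.choice(followers)
        return text

    model = build_model(TRAINING_LINES)
    print(generate(model, "knit 1, "))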

The questions posed by MBIE

The Discussion Paper does not give a definition for AI but refers to "AI systems". The following questions are posed:

Question A1: What criteria should an AI system have to meet before IPONZ can delegate power to make discretionary decisions to it?

Question A2: Who should decide what discretionary decisions IPONZ can delegate to an AI system?

Question A3: Should there be a requirement for public consultation before discretionary decisions can be delegated to an AI system?

These questions are very general and, apart from question A1, do not address the technical nature of AI systems. This is important because question A3 relates to "public consultation", but most members of the public do not have the technical expertise to understand AI systems.

Use of AI by other Government agencies, and concerns raised

A 2018 survey and report gathered details of AI systems currently in use by NZ Government agencies. It concluded that algorithm use was not consistent across the agencies surveyed, and that there was no consistent approach to capturing and considering the views of key stakeholders, or to transparency.

The authors of a 2019 report commissioned by the New Zealand Law Foundation identified several key areas of concern relating to the New Zealand Government's use of AI, including the following areas which are relevant to discretionary decision-making by AI systems.

Improper delegation and fettering discretion

The report discusses the public law principles which govern statutory discretion. Improper delegation would occur where the Commissioner delegates decision-making to an AI system without authority from the legislation, a situation that MBIE wants to avoid.

However, as explained in the 2019 report, improper delegation also covers the situation where a decision-maker has simply rubber-stamped the advice of others, e.g. of an AI system, without coming to an independent position on the matter themselves. This could occur if an examiner reviewing decisions made by an AI system becomes complacent and places trust in the AI system rather than in their own training. Unthinking reliance on an AI system can also be described as an unlawful "fettering" of the exercise of a discretionary power.

Transparency

Transparency in Government is widely seen as desirable. For example, the stated purpose of the Official Information Act 1982 is to increase progressively the availability of official information to the people of New Zealand, in order to enhance respect for the law and to promote the good government of New Zealand. For an AI system, the details of its operation, including the data it has been trained on and the logic by which it operates, are technical details which should be transparent to the public. MBIE suggests:

  • public consultation or notification before a decision is delegated to an AI system,
  • publication by IPONZ of statistics relating to decisions made by an AI system, including how often decisions are challenged and whether those challenges are upheld,
  • publication of the tasks which an AI system is performing.

These suggestions are at such a level of generality that the AI system is treated as a "black box". MBIE does not suggest that there should be any transparency regarding the technical aspects of the AI system itself.

One problem with transparency is that the technical aspects of AI can be very difficult to understand, particularly for deep learning systems, which develop their own methods of decision-making. In addition, where proprietary code is used (for example, code protected by trade secrets), disclosure of the code may be a breach of IP rights.
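
To make the point concrete, even a trivially small trained model reduces to numeric parameters rather than human-readable rules. The hypothetical sketch below trains a single artificial neuron on an invented toy task; all that could be "published" afterwards is a handful of numbers, and a deep learning system multiplies this problem by millions of parameters.

    import random

    # Illustration of the "black box" problem: even a single trained
    # artificial neuron reduces to a few numeric weights, not
    # human-readable rules. The toy task and data below are invented.
    random.seed(0)

    # Toy examples: the "rule" to be learned is simply whether the two
    # feature values sum to more than 1.
    points = [[random.random(), random.random()] for _ in range(200)]
    data = [(x, 1 if x[0] + x[1] > 1.0 else 0) for x in points]

    weights, bias, lr = [0.0, 0.0], 0.0, 0.1
    for _ in range(20):  # a few passes of perceptron training
        for x, label in data:
            predicted = 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0
            error = label - predicted
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error

    # All that could be "published" about the trained model is this:
    print("learned weights:", weights, "bias:", bias)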

Algorithmic bias

With the increased use of AI systems in all areas of business, the issue of algorithmic bias has become a major concern. Two sources of algorithmic bias are:

  • intrinsic bias of the algorithm
  • extrinsic bias of the data.

Extrinsic bias can arise from poor-quality training data, including sampling bias (selection bias) and bias imported into the data by humans. A recent well-publicized example of algorithmic bias is a recruitment tool biased against women. Intrinsic bias arises from choices made when the algorithm itself is built.

An example of algorithmic bias would be an AI tool used to generate patent examination reports by interrogating the data available from a WIPO pamphlet. That data includes the inventors' names, which correlate with race and gender. Inventors' names can be used to form queries for prior art searches, but the AI tool has no need to include them in an assessment of the patent specification, and doing so may import extrinsic bias into the result. Without proper training in how to mitigate intrinsic and extrinsic bias, a programmer is at risk of building such bias into the system.
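
One practical mitigation suggested by this example is simply to exclude fields such as inventors' names when the tool builds its prior art queries. The sketch below is a hypothetical illustration of that idea only; the record structure and field names are invented and are not taken from any actual IPONZ or WIPO tool.

    # Hypothetical illustration: build a prior art search query from a
    # bibliographic record while excluding fields (such as inventor names)
    # that correlate with race or gender and are irrelevant to patentability.
    # The record structure and field names are invented for this example.

    RELEVANT_FIELDS = ("title", "abstract", "claims", "ipc_classes")
    # Listed for documentation only; these fields are never consulted below.
    EXCLUDED_FIELDS = ("inventors", "applicants")

    def build_prior_art_query(record: dict) -> str:
        """Concatenate only the technically relevant fields into a search query."""
        parts = []
        for field in RELEVANT_FIELDS:
            value = record.get(field, "")
            if isinstance(value, (list, tuple)):
                value = " ".join(value)
            if value:
                parts.append(value)
        return " ".join(parts)

    record = {
        "title": "Improved widget coupling",
        "abstract": "A coupling for widgets with reduced wear...",
        "ipc_classes": ["F16B 1/00"],
        "inventors": ["J. Smith"],  # deliberately never used in the query
    }
    print(build_prior_art_query(record))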

Bias is an issue for natural justice, and is clearly of concern to users of the IP system.

Addressing concerns

The use of AI systems by IPONZ is inevitable. The first uses are likely to involve tools already in use by other IPOs, for example the classification tools used by WIPO.

The 2018 report suggested that New Zealand Government agencies should draw on expertise across government. It also noted that systems with "humans in the loop" are perceived by the public as more transparent, and that as AI becomes more powerful and sophisticated it will be increasingly important to retain human oversight of significant decisions. Sharing best practice between agencies was recommended as a first step in raising the transparency and accountability of government algorithms.

The authors of the 2019 Law Foundation report recommend that the New Zealand Government create an independent regulatory agency to oversee the use of AI in Government. Such an agency could build expertise in AI-related issues arising across all areas of law, not just intellectual property regulation; for example, implications for privacy, data protection, and ethics. In terms of the use of AI by IPONZ, such an agency could function in a similar way to the Māori Advisory Committees, providing specialist advice to the Commissioner.

July 2019

