Tuesday, December 12, 2023

What is AI?

What is Artificial Intelligence? A Google search will return thousands of definitions, many of them convoluted and circular, namely defining intelligence as intelligence. As we have noted elsewhere, the lack of a clean and clear definition makes it impossible to craft sound laws, yet this never seems to stop Governments, resulting of course in endless litigation and confusion. Our intent herein is not to define AI per se, since we believe that at best such a definition is a work in progress and at worst the wrong words to begin with, but to present some paradigms and elements which may prove useful.

In a simplistic sense, AI takes some input that is to be examined and provides an output to the putative question posed by that input. It does so by relying on a massive amount of exogenous information that has been processed by an element such as a neural network (NN). The NN has been designed and trained so that any input aligned with the class of training data can, or should, produce an answer. Some answers can be presented simply as yes or no; others are more complex and delivered in text form using a natural language processing system as an adjunct.
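
To make this concrete, the fragment below is a minimal sketch, in Python, of the input/output paradigm just described: an input vector passes through a small neural network and emerges as a yes/no answer. The layer sizes and the weight matrices W1 and W2 are illustrative placeholders; in a real system they would be learned from the massive exogenous information set discussed above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder "trained" parameters; in practice these would be learned
    # from a massive exogenous data set.
    W1 = rng.normal(size=(4, 8))   # input features -> hidden layer
    W2 = rng.normal(size=(8, 1))   # hidden layer -> single yes/no score

    def answer(x: np.ndarray) -> str:
        """Forward pass: input -> hidden activations -> yes/no answer."""
        h = np.tanh(x @ W1)                       # hidden representation
        score = 1.0 / (1.0 + np.exp(-(h @ W2)))   # probability-like output
        return "yes" if score.item() > 0.5 else "no"

    # A putative question, encoded as a 4-dimensional feature vector.
    query = np.array([0.2, -1.0, 0.5, 0.3])
    print(answer(query))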

 

Another simple example is shown below. Here we take a pathology slide, without even identifying its organ of origin, and we seek to classify it by organ and by malignancy status. The input is an image and the output is a classification over N possible organs and M possible states. The system has been “trained” with potentially millions of identified images.
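
The following is a hedged sketch of that classification task: a flattened image is scored against N × M joint (organ, state) classes and the highest-scoring pair is returned. The values of N, M, the image size and the weight matrix W are assumptions made purely for illustration; in practice W would be learned from the millions of identified images mentioned above.

    import numpy as np

    N_ORGANS, M_STATES = 5, 3        # e.g. 5 candidate organs, 3 possible states
    IMG_PIXELS = 64 * 64             # a tiny flattened grayscale "slide"

    rng = np.random.default_rng(1)
    # Placeholder weights; a real system would learn these from labelled slides.
    W = rng.normal(scale=0.01, size=(IMG_PIXELS, N_ORGANS * M_STATES))

    def classify(image: np.ndarray) -> tuple[int, int]:
        """Return the (organ index, state index) pair with the highest joint score."""
        logits = image.reshape(-1) @ W            # one score per (organ, state) pair
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                      # softmax over the N*M joint classes
        k = int(np.argmax(probs))
        return divmod(k, M_STATES)                # decode flat index -> (organ, state)

    slide = rng.random((64, 64))                  # stand-in for a digitized slide
    organ, state = classify(slide)
    print(f"organ={organ}, state={state}")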


What AI systems have in common, however, is a form of “learning” from prior data sets, followed by the development of algorithms for handling new data demands so as to provide answers or actions. What we see is that AI is a concatenation of inputs, data sets, learning algorithms and output mechanisms. In the simplest sense, one can ask a question and receive an answer, provided the data set contains data adequate for learning.

We examine here the potential extensions of this set of constructs. AI can go from the simplest input/output paradigm to a fully autonomous entity that initiates interactions, gathers information, constructs mechanisms, and takes actions while continuously monitoring its own performance, seeking increased optimization.

The putative “danger” of an AI system lies in the realm of the autonomous AI entity (AAIE) embodiments, namely an AI entity totally independent of any human interaction. This raises the question: can an AI system become totally independent of any human agency? If so, then what limits can be placed upon its actions? What can be done to enforce such limits?

We have a clear, if small-scale, example of unenforced limits in COVID-19. A virus was released into society and its propagation was facilitated by an unprepared set of Governments, resulting in the deaths of millions and a near collapse of economies. Autonomous AI systems could be many orders of magnitude more deadly to humanity as a whole.

Our objective herein is to examine AI systems and specifically to consider canonical models demonstrating the putative progression to a fully autonomous AI entity, one capable of independent actions both computationally and physically. The latter model we call the Autonomous AI Entity, AAIE. This is an entity that operates independently of human interaction and makes judgements on its own. Further, it has the capability of using and assembling instruments as externalities to effect its intentions.

We often hear about fears of AI devoid of any specifics. To understand what the risks may be, one must understand what evolution can occur and what areas, if any, should be limited. In many ways it is akin to bio research on new organisms. We know that COVID is a classic example of bio-research gone wild.

Basically, the fundamental structure of AI as currently understood is some entity which relies on already available information that is used by some processing elements to perform actions. Now, in contrast to what we have argued here, there is the possibility that this exogenous information set, provided by humans, may become self-organizing in an autonomous entity. Namely, as we approach an autonomous mode, this set of information may be generated by the entity itself, no longer reflecting any reliance on a human.

1         Evolution of Neural Net Paradigms

The neural net paradigm has been evolving for almost fifty years. Simply stated, the neural net paradigm assumes a computer entity that takes a massive amount of exogenous information to train a network, so that when some input entity is presented, it can produce an output entity that correctly reflects the body of information available to the computer entity. To accomplish this one needs significant amounts of information, memory and processing. Thus, conceptually the structural constructs were in hand, yet it required the development and availability of memory and processing power to take the steps we see today. Thus, NNs are not new; they were merely constrained by technology.
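
As a minimal sketch of this train-then-answer loop, the fragment below fits a one-hidden-layer network by gradient descent on a small synthetic two-class data set and then answers a query on a new input. The data, layer sizes and learning rate are assumptions chosen only to make the example self-contained.

    import numpy as np

    rng = np.random.default_rng(2)

    # Exogenous "training information": two labelled clusters of points.
    X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
                   rng.normal(+2.0, 1.0, size=(100, 2))])
    y = np.concatenate([np.zeros(100), np.ones(100)])

    # A one-hidden-layer network with illustrative sizes.
    W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
    lr = 0.1

    for step in range(500):
        # Forward pass.
        h = np.tanh(X @ W1 + b1)
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        p = p.reshape(-1)
        # Backward pass: binary cross-entropy gradients.
        d_logit = (p - y).reshape(-1, 1) / len(y)
        dW2 = h.T @ d_logit
        db2 = d_logit.sum(axis=0)
        d_h = (d_logit @ W2.T) * (1.0 - h ** 2)
        dW1 = X.T @ d_h
        db1 = d_h.sum(axis=0)
        # Gradient descent update.
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    # A new input aligned with the class of trained data now yields an answer.
    x_new = np.array([1.5, 2.5])
    p_new = 1.0 / (1.0 + np.exp(-(np.tanh(x_new @ W1 + b1) @ W2 + b2)))
    print("class 1" if p_new.item() > 0.5 else "class 0")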

In addition, the nature of inputs and outputs is also an evolving area. For the output we may want a natural language processor, and for the input, say, the ability to gather and process images. Eventually the input must gather all types of entities: video, image, taste, smell, touch, voice, and so on. In fact, multimedia inputs and outputs will be essential[1].

We use the neural net construct as a placeholder. One suspects there may be significant evolutions in these elements; one need look no further than what we have seen in the past 40 years. The drivers for these evolutions will be processing complexity as well as computing complexity. One also suspects that there will be significant evolutions in memory for the learning data.

Also, paradigms of human neural processing may open avenues for new architectures. This is a challenging area of research. The biggest risk we face is the gimmick constructs that are currently driving the mad rush.

2         Risk of Autonomy

The risk of autonomy was perceived in broader terms by Wiener in his various writings. The development of autonomous entities (AEs) is the development of entities that can displace, if not annihilate, man. We see that AEs can restructure their own environment and that control of AEs may very well be out of the hands of their developers. In fact, the developer may not even be aware of when such an autonomous act occurs.

One has always considered the insights of Shannon and his Information Theory alongside the broader constructs of Wiener and Cybernetics. One suspects we are leaving the world of Shannon and entering that of Wiener.

3         Parallelism with human intelligence or NOT

If AEs are to be considered intelligent, then how would we compare that intelligence to human intelligence? Would an AE consider humans just an equivalent primordial slime, an equal, a superior, or merely some nuisance inferior species? Can we measure this, and is it even measurable?

4         Areas of Greatest Risk

The areas of greatest risk in AI are legion. They range from simple misinformation, to psychological profiling, to influencing and controlling large groups, and finally, as full autonomy is attained, to the ability to manipulate their environment.

Without some moral code or ethical framework, AEs can act in whatever manner they so choose, often taking their lead from input data that they may have gathered or created themselves.

There have been multiple lists of AI risks[2]. The problem is that those generally available lack any framework for such a listing. They generally make statements regarding privacy, transparency, misinformation, legal and regulatory matters, etc. These are for the most part content-free sops. One needs, indeed demands, the canonical evolution we have presented herein in order to understand what the long-term risks may be. With a construct to work with, policies may then evolve.

5         Stability of Autonomous Entities

Autonomous entities, AEs, can result in unstable constructs. The inherent feedback may result in the AE cycling in erratic ways that are fundamentally unstable. This again is a concern that Wiener expressed. Stability of an AE may be impossible. They may be driven by the construct of "on the one hand, but on the other hand." This is a construct without a moral fabric, without an underlying code of conduct[3].
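
A toy illustration of this instability concern is sketched below: a loop whose next state is simply a gain applied to its current state settles when the gain magnitude is below one and oscillates with growing amplitude when it exceeds one. This is of course a deliberate oversimplification of an AE's feedback behavior.

    def run_loop(gain: float, x0: float = 1.0, steps: int = 10) -> list[float]:
        """Iterate x_{t+1} = gain * x_t and return the trajectory."""
        traj = [x0]
        for _ in range(steps):
            traj.append(gain * traj[-1])
        return traj

    # |gain| < 1: the feedback settles; |gain| > 1: it grows without bound.
    print("stable   (gain =  0.5):", [round(v, 3) for v in run_loop(0.5)])
    print("unstable (gain = -1.2):", [round(v, 3) for v in run_loop(-1.2)])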

6         AI: Policy and Prevention

Isaac Asimov in his robot novels presents the three laws of robotics[4]. However, AI is much more than robotics. Robots, in the Asimovian world, were small self-contained anthropomorphic entities. In our construct the autonomous AI entity is an ever-expanding entity of essentially unlimited capabilities. Moreover, these autonomous entities can evolve and expand independent of human interaction or control. Thus, the key question is: what can be done to protect humanity, if not all earthly entities, from an overpowering and uncontrollable autonomous entity?[5]

First, one must admit the putative capacity for such an entity to exist. Second, one must recognize that the creation of these entities cannot be prevented, since an adversary may very well create one as a means of threat or control. Third, the creation of such entities may very well be in the hands of technologists who lack any moral foundation and will do so simply because they can. Thus, it is nearly impossible to control such an entity a priori.

Therefore, at best one can control such entities a posteriori. This requires advanced surveillance and trans-governmental control mechanisms. Namely, it may be possible to sense the existence and development of such systems via various distributed network sensing mechanisms. When such a system is detected, there must be prohibitive actions in place, immediately executable in a trans-border manner.

7         Is an AI Entity the same as a Robot?

The Asimovian Robot is an anthropomorphic entity. In Asimov’s world the robot was a stand-alone creature, one of many, with capabilities limited by its singularity. Robots were just what they were and no more. An AI Entity is a dynamically extensible entity capable of unlimited extension, akin to a slime mold, a never-ending extension of the organism. The AI Entity may morph and add to itself whatever it internally sees a need for, and take actions that are solely of its own intent. Thus, there is a dramatic difference between a Robot and an AI Entity. The challenge is that applying the three laws of robotics to an entity that controls its own morphing is impossible.

8         Complexity vs Externality

We have noted herein that the early developments of AI revolve around increased processing and interaction complexity. However, there comes a point when externalities become the dominant factor, namely the ability of the AI entity to interact with its external environment: first with the help of a human, then with existing external entities, and then with the ability to create and use its own externalities, as sketched below. This progression then leads to the AAIE, which, if not properly delimited, can result in harm.
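
The sketch below encodes that three-step progression as a simple enumeration; the stage names are our own shorthand for the progression described above, not established terms.

    from enum import IntEnum

    class ExternalityStage(IntEnum):
        """Stages of externality in the progression toward the AAIE (our shorthand)."""
        HUMAN_ASSISTED = 1              # interacts with the world only via a human
        EXISTING_EXTERNALITIES = 2      # uses external tools and systems that already exist
        SELF_CREATED_EXTERNALITIES = 3  # creates and uses its own externalities

    def is_aaie(stage: ExternalityStage) -> bool:
        """An entity at the final stage is what we have called the AAIE."""
        return stage is ExternalityStage.SELF_CREATED_EXTERNALITIES

    print(is_aaie(ExternalityStage.EXISTING_EXTERNALITIES))  # False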

9         What is the Use of Canonical Forms?

Canonical Forms have multiple uses. First, they provide structure. Second, they allow for defining issues and elements. Third, they are essential if any regulatory structure is to be imposed. We have seen this in Telecommunications Law, where elements and architecture are critical to regulation. However, as in Telecom and other areas, technology evolves and these Canonical Forms may evolve likewise. Thus, they are an essential starting point, subject to modification and evolution.

10      Sensory Conversions are Critical

As we have observed previously, the conversion of various sensory data to system-processable data is a critical step. The human and other animal sensory systems have evolved over a billion years to maximize the survival of the specific species. The specific systems available to AI are still primitive and may suffer significant deficiencies.
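
As a minimal sketch of such a conversion step, the fragment below maps raw 8-bit camera data into the normalized numeric form a processing element can consume; a real sensory front end would of course be far more elaborate.

    import numpy as np

    def to_system_input(raw_frame: np.ndarray) -> np.ndarray:
        """Map raw uint8 pixel data to a flat vector of floats in [0, 1]."""
        return (raw_frame.astype(np.float32) / 255.0).reshape(-1)

    # A stand-in for one frame from a primitive camera sensor.
    frame = np.random.default_rng(3).integers(0, 256, size=(32, 32), dtype=np.uint8)
    x = to_system_input(frame)
    print(x.shape, float(x.min()), float(x.max()))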

However, in an AAIE system, self-evolution may occur many orders of magnitude faster than the evolution of our own species. What direction that evolution takes is totally uncertain. The effects of that evolution will also determine what an AAIE does as it perceives its environment.

11      Regulatory Proposals in Progress?

A group at MIT has recently made a regulatory proposal for AI[6]. They recognize, albeit in a rather limited manner, that one must define something in order to regulate it. They thus note:

It is important (but difficult) to define what AI is, but often necessary in order to identify which systems would be subject to regulatory and liability regimes. The most effective approach may be defining AI systems based on what the technology does, such as “any technology for making decisions or recommendations, or for generating content (including text, images, video or audio).” This may create fewer problems than basing a definition on the characteristics of the technology, such as “human-like,” or on technical aspects such as “large language model” or “foundation model” – terms that are hard to define, or will likely change over time or become obsolete. Furthermore, approaches based on definitions of what the technology does are more likely to align with the approach of extending existing laws and rules to activities that include AI.

Needless to say, the definition is so broad that it could include a coffee maker or any home appliance. As we have argued herein, AI inherently contains an element whereby massive data sets are collected and processed by some means that permits a relationship between an input and an output to be posited. Also, and this is a key factor, that relationship between input and posited output is hypothesized via some abstraction of the data sets chosen by the designer and potentially modified by the system.

The MIT group then states:

Auditing regimes should be developed as part and parcel of the approach described above. To be effective, auditing needs to be based on principles that specify such aspects as the objectives of the auditing (i.e., what an audit is designed to learn about an AI system, for example, whether its results are biased in some manner, whether it generates misinformation, and/or whether it is open to use in unintended ways), and what information is to be used to achieve those objectives (i.e., what kinds of data will be used in an audit)

This is the rule of the select telling the masses what to believe! It seems academics just can’t get away from this control mechanism. They further note:

For oversight regarding AI that lies beyond the scope of currently regulated application domains, and that cannot be addressed through audit mechanisms and a system similar to that used for financial audits, the federal government may need to establish a new agency that would regulate such aspects of AI. The scope of any such regulatory agency should be as narrow as possible, given the broad applicability of AI, and the challenges of creating a single agency with broad scope. The agency could hire highly qualified technical staff who could also provide advice to existing regulatory agencies that are handling AI matters (pursuant to the bullets above). (Such a task might alternatively be assigned to an existing agency, but any existing agency selected should already have a regulatory mission and the prestige to attract the needed personnel, and it would have to be free of political and other controversies from existing missions that could complicate its oversight of AI.) A self-regulatory organization (like the Financial Industry Regulatory Authority, FINRA, in the financial world) might undertake much of the detailed work under federal oversight by developing standards and overseeing their implementation.

Again, another Federal entity, and as academics do, they assume a base of qualified staff, an oxymoron for any Government entity. As we have noted previously, if you can’t define it, you can’t regulate it. Also, as is all too well known, all regulations have “dark sides”.



[1] See https://www.researchgate.net/publication/344445284_Multimedia_Communications_Revised This is a copy of a draft book I wrote for a course in Multimedia Communications at MIT in 1989. The ideas therein should be integrated into an AI construct.

[3] See https://www.researchgate.net/publication/338298212_Natural_Rights_vs_Social_Justice_DRAFT We have examined this issue in the context of Natural Rights, a fundamental and perhaps biologically and genetically and evolutionarily programmed code of human conduct. Namely we assert that humans have evolved with a genetically programmed code of behavior displayed in what they believe are Natural Rights. These Natural Rights then become limits on unstable and extreme behavior. We further argue that these are evolutionary, not inherent in any creature. They are survival genetic expressions for the species. There is no reason to expect that an AE would in the near term ever assert such rights. Thus it is a basis for human annihilation.

[4] A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

[5] See Watson et al. This describes the work concerning Recombinant DNA. In a sense this is akin to the concerns regarding AI and its dangers. It discusses what Recombinant DNA is and how it can be controlled. The concern was that modified DNA could be set loose in the environment. In a sense, the work there mirrors what can be done with AI. The problem, however, is that with Recombinant DNA we had highly educated professionals on the research side, whereas in AI we have a collection of Silicon Valley entrepreneurs.