The White House issued a dictum regarding AI. It begins as follows:
As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.
However, nowhere is AI defined! How can you regulate something you cannot define? This is not like pornography, where you know it when you see it. Is pattern recognition AI? It has been around for half a century or more. How about speech recognition? How about cybernetics, and my old friend Norbert Wiener?
You cannot regulate something you cannot define. The Chevron doctrine will create a disaster. One wonders who put this document together. Take, as an example, the 1996 Telecom Act. It is filled with definitions, so that regulators know what to do. But alas, as technology moves on, the definitions are no longer valid.
Consider the first demand:
Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
Just what does this mean? Who are these developers, and what does "most powerful" mean? How can you ascertain who that is? What is safe? Safe for whom? The document runs on with one nonsensical statement after another. If this is made into law, which is the only legally enforceable way, then it will spend decades in court while technology runs circles around the lawyers.
What I consider even more severe is that these 15 large companies will take actions to block out other innovators, or simply buy them up!
They continue:
Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions: Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination...
Laws against discriminatory practices already exist. So what is different with AI, whatever that may be?
Perhaps Asimov was right with his three simple laws.
Now Nature discusses the "rules" to govern AI. It argues:
Other mainstays of regulation include registration, regular monitoring, reporting of incidents that could cause harm, and continuing education, for both users and regulators. Road safety offers lessons here. The car has transformed the lives of billions, but also causes harm. To mitigate risks, vehicle manufacturers need to comply with product safety standards; vehicles must be tested regularly; and there is compulsory driver training and licensing, along with an insurance-based legal framework to assess and apportion liability in the case of accidents. Regulation can even spur innovation. The introduction of emissions standards inspired the development of cleaner vehicles...Crucially, the safety of AI cannot be a matter for those working in computational disciplines to shoulder alone. Researchers who study ethics, equality and diversity in science, public engagement and technology policy all need to have a seat at the table. Social scientists from these areas should have been front and centre at the summit....Governments and corporations should not fear regulation. It enables technologies to develop and protects people from harm. And it need not come at the cost of innovation. In fact, setting firm boundaries could spur safer innovation within them.
Again, with this mass assembly on AI (again, whatever it is), one should just think of Sherlock Holmes and "The Red-Headed League," namely the assembly of red-headed men who just wanted to rob the bank! Just try to get two people to define AI and see if they can agree. If you cannot define and measure something, then it does not exist; at least it should not for regulators.