Sunday, March 15, 2026

Experts and Executives

I have recently seen CEOs whose profession was attorney, with no visible technical expertise, espouse the technical competence of their respective companies. Now in my opinion and my experience, attorneys are good at one thing, and generally one thing only: defending their clients' interests. They are for the most part, in my opinion and my experience, incompetent as to the technical details of their company. My only exception is my late boss and partner, Gus Hauser, who could deconstruct any technology and repackage it to be infinitely better. But alas, that is just one person.

Now my current examples come from the autonomous vehicle craze and electrical power distribution. In the vehicle case I would be terrified to ride in one, but the lawyer CEO asserts I should have no fear. The facts, however, may contradict that. Second, another lawyer asserts that AI will optimize electrical power distribution networks. Here I know a bit more, and engineering such networks demands two things. First, a set of system requirements, most of which are absent from the data sets used by AI; some data sets are even contradictory. Second, advanced engineering constructs, which again may be lacking in past data sets.

Thus can one trust the pontifications of these attorneys? In my opinion and my experience, highly doubtful! 

Beware the Ides of March


As has been noted:

Caesar. Who is it in the press that calls on me?
I hear a tongue, shriller than all the music,
Cry 'Caesar!' Speak; Caesar is turn'd to hear.

Soothsayer. Beware the ides of March.

Caesar. What man is that?

Brutus. A soothsayer bids you beware the ides of March. 

Caesar. Set him before me; let me see his face.

Cassius. Fellow, come from the throng; look upon Caesar.

Caesar. What say'st thou to me now? speak once again.

Soothsayer. Beware the ides of March.

Caesar. He is a dreamer; let us leave him: pass.

 

War Plan Orange


In the 1930s the US Navy developed a series of War Plans. War Plan Orange was for a war with Japan. However, it assumed an attack on the Philippines, not on Pearl Harbor. The problem with War Plans is that one rips them up once the war starts! One assumes that the objectives of the current war in Iran are:

1. Neutralize Iran's means and methods of external attack

2. Neutralize Iran's means and methods for nuclear weapons development and delivery

3. Establish an environment for Iran's people to seek an improved non-religious Government

If this is correct, which seems to be the current US Government position, albeit poorly articulated, then one should be able to measure progress.

One can assert for each:

1. External Attack Neutralization: Reasonable success but not finished

2. Nuclear Abandonment: Uncertain but critical

3. Government Adjustment: Uncertain, but neutralized many of the incumbents

Now the US Government should clearly articulate these goals and objectives and report the progress made on each. The Nuclear Abandonment goal is the most critical.

Saturday, March 7, 2026

Collapse of Amazon?

I just noticed that 85% of my orders have been lost, permanently delayed, or sent elsewhere! Clearly, in my opinion and my experience, the incompetent head of distribution needs a job at Burger King washing toilets! Imagine an 85% failure rate. And these Bozos want to get into health care! That would reflect an 85% mortality rate, worse than Mao's Great Leap Forward! Sell short!

Saturday, February 21, 2026

Engels and Housing

For those of the Marxist bent, the housing problem was "solved" by Engels, the capitalist protégé of Marx. In his work The Housing Question, he argued that the housing problem could be solved only by the total elimination of the capitalists and a society controlled by the proletariat. It is truly worth a read, especially if you ever have an interest in the decline and fall of New York City.

EU Nonsense!

 Science published a paper by EU "Experts" regarding the oversight and control of AI. The authors note:

 A global challenge in artificial intelligence (AI) regulation lies in achieving effective risk management without compromising innovation and technical progress. The European Union (EU) Artificial Intelligence Act represents the first regulatory attempt worldwide to navigate this tension in the form of a binding, risk-based framework. In August 2025, obligations for providers of general-purpose AI (GPAI) models under the EU AI Act entered into application. They require providers of the most advanced GPAI models to evaluate possible systemic risks stemming from their models. This raises the regulatory challenge of ensuring that the evaluations provide meaningful risk information without imposing excessive burden on providers. The principle of proportionality, a binding requirement under EU law, requires the regulator to calibrate its actions to their intended objectives. The application of proportionality to model evaluations for AI risk opens opportunities to develop scientific methods that operationalize such calibration within concrete evaluation practices.

Nowhere is AI defined! Furthermore, the terms they use lack specific definitions. The document is nearly incomprehensible. This is now classic EU blather. The Luddites were more advanced!