Tuesday, December 26, 2023

Harvard Pays Grad Students $50,000

The Crimson reports that Harvard is now paying graduate students $50,000 for 10 months. They note:

Ph.D. students in Harvard’s Graduate School of Arts and Sciences will be paid at least $50,000 in program stipends, increasing most stipends by more than 10 percent, GSAS Dean Emma Dench announced in an email Monday.

Back in my time at MIT, around 1970, I got $400 per month as an Instructor, with free tuition at MIT and in joint programs as well. Now, is $400 in 1970 equal to $5,000 in 2023? I had no cell phone, in fact no phone at all, drove a 1965 VW with lots of miles, and had a wife and 2 kids; health insurance was covered by MIT. Rent was $125 a month. One pair of shoes, one winter coat, no hat, etc.
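As a rough check on that question, a few lines of Python with approximate CPI figures; the 1970 and 2023 index values below are rounded annual averages of my own choosing, so treat the result as a ballpark:

```python
# Rough purchasing-power comparison of a 1970 stipend against 2023.
# The CPI values are approximate CPI-U annual averages (my assumption;
# consult BLS tables for exact figures): 1970 ~ 38.8, 2023 ~ 304.7.
CPI_1970 = 38.8
CPI_2023 = 304.7

def to_2023_dollars(amount_1970: float) -> float:
    """Scale a 1970 dollar amount into 2023 dollars via the CPI ratio."""
    return amount_1970 * CPI_2023 / CPI_1970

monthly_1970 = 400.0  # the $400/month Instructor pay
equivalent = to_2023_dollars(monthly_1970)  # roughly $3,100/month
# A $50,000 stipend over 10 months is $5,000/month, so the new stipend
# comes out ahead of the 1970 pay even after inflation.
```

By this crude measure the new stipend is more generous in real terms, though it says nothing about tuition, rent, or insurance.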

It would be interesting to compare net after expenses.

Monday, December 25, 2023

Merry Christmas

 

1 And it came to pass in those days that a decree went out from Caesar Augustus that all the world should be registered.

2 This census first took place while Quirinius was governing Syria.

3 So all went to be registered, everyone to his own city.

4 Joseph also went up from Galilee, out of the city of Nazareth, into Judea, to the city of David, which is called Bethlehem, because he was of the house and lineage of David,

5 to be registered with Mary, his betrothed wife, who was with child.

6 So it was, that while they were there, the days were completed for her to be delivered.

7 And she brought forth her firstborn Son, and wrapped Him in swaddling cloths, and laid Him in a manger, because there was no room for them in the inn.

8 Now there were in the same country shepherds living out in the fields, keeping watch over their flock by night.

9 And behold, an angel of the Lord stood before them, and the glory of the Lord shone around them, and they were greatly afraid.

10 Then the angel said to them, “Do not be afraid, for behold, I bring you good tidings of great joy which will be to all people.

11 For there is born to you this day in the city of David a Savior, who is Christ the Lord.

12 And this will be the sign to you: You will find a Babe wrapped in swaddling cloths, lying in a manger.”

13 And suddenly there was with the angel a multitude of the heavenly host praising God and saying:

14 “Glory to God in the highest, And on earth peace, goodwill toward men!”

15 So it was, when the angels had gone away from them into heaven, that the shepherds said to one another, “Let us now go to Bethlehem and see this thing that has come to pass, which the Lord has made known to us.”

16 And they came with haste and found Mary and Joseph, and the Babe lying in a manger.

17 Now when they had seen Him, they made widely known the saying which was told them concerning this Child.

18 And all those who heard it marveled at those things which were told them by the shepherds.

19 But Mary kept all these things and pondered them in her heart.

20 Then the shepherds returned, glorifying and praising God for all the things that they had heard and seen, as it was told them.

Sunday, December 24, 2023

Will This Never End?

Despite the alleged re-invigoration of our faltering CDC, we now have a new and aggressive COVID strain. As the WHO notes:

Previously, JN.1 was tracked as part of BA.2.86, the parent lineage that is classified as a variant of interest (VOI). However, in recent weeks, JN.1 continues to be reported in multiple countries, and its prevalence has been rapidly increasing globally and now represents the vast majority of BA.2.86 descendent lineages reported to GISAID. Due to its rapidly increasing spread, WHO is classifying JN.1 as a separate variant of interest (VOI) from the parent lineage BA.2.86. Considering the available, yet limited evidence, the additional public health risk posed by JN.1 is currently evaluated as low at the global level. It is anticipated that this variant may cause an increase in SARS-CoV-2 cases amid surge of infections of other viral and bacterial infections, especially in countries entering the winter season.

As noted in Medscape:

JN.1 was previously grouped with its relative, BA.2.86, but has increased so much in the past 4 weeks that the WHO moved it to standalone status, according to a summary published by the agency. The prevalence of JN.1 worldwide jumped from 3% for the week ending November 5 to 27% for the week ending December 3. During that same period, JN.1 rose from 1% to 66% of cases in the Western Pacific, which stretches across 37 countries, from China and Mongolia to Australia and New Zealand. In the United States, JN.1 has also been increasing rapidly. The variant accounted for an estimated 21% of cases for the 2-week period ending December 9, up from 8% during the 2 weeks prior.

I have seen several cases of this and, as expected during the holiday season, we have epidemic spread again. The patients infected have been vaccinated and have had a previous infection, but have asthma, diabetes, or are immunocompromised. The current vaccine is for a variant from a year ago.



Saturday, December 16, 2023

Ockham Would Roll Over in his Grave!

It seems the Bishop of Rome is now an earthly technology expert! He writes:

This is also the case with forms of artificial intelligence. To date, there is no single definition of artificial intelligence in the world of science and technology. The term itself, which by now has entered into everyday parlance, embraces a variety of sciences, theories and techniques aimed at making machines reproduce or imitate in their functioning the cognitive abilities of human beings. To speak in the plural of “forms of intelligence” can help to emphasize above all the unbridgeable gap between such systems,  however amazing and powerful, and the human person: in the end,  they are merely “fragmentary”, in the  sense that they can only imitate or reproduce certain functions of human intelligence. The use of the plural likewise brings out the fact that these devices greatly differ among themselves and that they should always be regarded as “sociotechnical systems”. For the impact of any artificial intelligence device – regardless of its underlying technology – depends not only on its technical design, but also on the aims and interests of its owners and developers, and on the situations in which it will be employed. Artificial intelligence, then, ought to be understood as a galaxy of different realities. We cannot presume a priori that its development will make a beneficial contribution to the future of humanity and to peace among peoples. That positive outcome will only be achieved if we show ourselves capable of acting responsibly and respect such fundamental human values as “inclusion, transparency, security, equity, privacy and reliability”. 

Ockham wrote in his Work of Ninety Days that the Bishop of Rome must limit his dicta to religious matters. In the case of AI this is all the more critical. He admits to the lack of a clear definition but continues to espouse his earthly controls.

We have recently argued about AI, its lack of definition and its putative evolution. He finishes with the "plea":

For this reason, in debates about the regulation of artificial intelligence, the voices of all stakeholders should be taken into account, including the poor, the powerless and others who often go unheard in global decision-making processes

Just how this is to happen is, I assume, left to the reader. Ockham was correct: what is God's is God's and what is man's is man's. Papal prognostications are becoming just added chaff in a world already awash in chaff.

Thursday, December 14, 2023

One Suspects that there is a real problem!

I am not a Twitter user, but Bill Ackman's account today is explosive and, frankly, no surprise. Ackman notes:

In light of the affiliated nature of these transactions, in order for MIT to have made these investments in Gorenberg’s wife’s non-profit, the MIT board or a subcommittee designated by the board would have had to approve this investment each year it was made. But why would they have approved this investment for the last five years and I suspect this year as well? The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the 21st century. How is an investment in a non-profit that promotes DEI tools to corporations consistent with MIT’s stated mission? Why would MIT fund an increasing amount of money each year to a board member’s wife’s non-profit , let alone the Chairman’s wife’s company, when the organization does not appear to have gained any traction, let alone any other donors over the last five years? 

In my years of investing and on Boards, the issue of self-dealing was always a concern. Thus arm's length was the motto of the day. Here we have, in my opinion and in my experience, a prima facie case of self-dealing, if the facts are correct.

MIT is truly becoming a collapsing entity. Frankly, as I have noted over the years, it is most likely the current President's predecessor who, in my opinion, bears the fault.

MIT needs, actually demands, an independent white knight to ride in and clear out the stables. Any takers?

UPDATE: I have become aware that Ackman has updated the X posting, noting that the donation was made via a rather complex mechanism. This tale may not be over yet.

Tuesday, December 12, 2023

What is AI?

 What is Artificial Intelligence? An examination of a Google search will list thousands of definitions, many convoluted and circular, namely defining intelligence as intelligence. As we have noted elsewhere, the problem of not having a clean and clear definition makes it impossible to create laws, yet this never seems to stop Governments, resulting of course in endless litigation and confusion. Our intent herein is not to define AI per se, since we believe that at best it is a work in progress and at worst the wrong words to begin with, but to present some paradigms and elements which may prove useful.

In a simplistic sense, AI takes some input that is to be examined and provides an output answering the putative question posed by the input. It does so by relying on a massive amount of exogenous information that has been processed by an element such as a neural network (NN). The NN has been designed and trained so that any input aligned with the class of trained data can, or should, produce an answer. Some answers can be presented simply as yes or no; others are more complex and in text form, using a natural language processing system as an adjunct.
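The input-to-output flow just described can be sketched in a few lines. This is a generic illustration with random stand-in weights and made-up sizes, not any particular system:

```python
import numpy as np

# A minimal feedforward network sketch: an input vector passes through
# one hidden layer to yield per-class scores. The weights are random
# stand-ins for what training on a large exogenous data set would learn.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

n_inputs, n_hidden, n_classes = 8, 16, 3
W1 = rng.normal(size=(n_hidden, n_inputs))
W2 = rng.normal(size=(n_classes, n_hidden))

x = rng.normal(size=n_inputs)        # the "question" presented as input
scores = softmax(W2 @ relu(W1 @ x))  # the "answer": one score per class
answer = int(scores.argmax())        # e.g. a yes/no or class decision
```

Training is what replaces the random weights with ones that reflect the body of available information; the structure itself is this simple.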

 

Another simple example is shown below. Here we take a pathology slide, without even identifying it by organ, and we seek to identify the organ and its malignancy status. The input is an image and the output is a classification among N possible organs and M possible states. The system has been “trained” with potentially millions of identified images.
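The two-part output described above can be sketched as a shared feature vector feeding two classification heads, one over the N organs and one over the M states. All weights and sizes here are placeholders; a real system would learn them from the millions of labeled slides:

```python
import numpy as np

# Sketch of a two-head classifier: one feature vector extracted from a
# slide image feeds two output heads, one over N possible organs and
# one over M possible states. Weights are random placeholders.
rng = np.random.default_rng(1)
N_ORGANS, M_STATES, FEATURES = 12, 4, 64

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W_organ = rng.normal(size=(N_ORGANS, FEATURES))
W_state = rng.normal(size=(M_STATES, FEATURES))

slide_features = rng.normal(size=FEATURES)   # stand-in for an encoded image
p_organ = softmax(W_organ @ slide_features)  # distribution over N organs
p_state = softmax(W_state @ slide_features)  # distribution over M states
prediction = (int(p_organ.argmax()), int(p_state.argmax()))
```

The point is structural: the same input drives both classifications, and each head simply picks the most probable of its N or M alternatives.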


However, what AI systems have in common is a form of “learning” from prior data sets, followed by developing algorithms for handling new data demands to provide answers or actions. What we see is that AI is a concatenation of inputs, data sets, learning algorithms and output mechanisms. In the simplest sense, one can ask a question and receive an answer, if the data set contains data adequate for learning.

 We examine here the potential extensions of this set of constructs. AI can go from the simplest input/output paradigm to a fully autonomous entity that initiates interactions, gathers information, constructs mechanisms, and provides actions while continuously monitoring its own performance, seeking increased optimization.

The putative “danger” of an AI system lies in the realm of autonomous AI entity (AAIE) embodiments, namely an AI entity totally independent of any human interaction. This raises the question: can an AI system become totally independent of any human agency? If so, then what limits can be placed upon its actions? What can be done to enforce such limits?

We have a clear example of unenforced limits, in a small sense, with COVID-19: a virus released into society, its propagation facilitated by an unprepared set of Governments, resulting in the death of millions and a near collapse of economies. Autonomous AI systems could be many orders of magnitude more deadly to humanity as a whole.

Our objective herein is to examine AI systems, and specifically to consider canonical models demonstrating the putative progression to a fully autonomous AI entity, one capable of independent actions both computationally and physically. The latter model we call the Autonomous AI Entity, or AAIE. This is an entity that operates independently of human interaction and makes judgements on its own. Further, it has the capability of using and assembling instruments as externalities to effect its intentions.

We often hear about the fears of AI devoid of any specifics. In order to understand what the risks may be, one must understand what evolution can occur and what areas, if any, should be limited. In many ways it is akin to bio research on new organisms. We know that COVID is a classic example of bio-research gone wild.

Basically, the fundamental structure of AI as currently understood is some entity which relies on already available information that is used by some processing elements to perform actions. Now, in contrast to what we have argued here, there is the possibility that this exogenous information set, provided by humans, may become self-organizing in an autonomous-mode entity. Namely, as we approach an autonomous mode, this set of information may be generated by the entity itself, no longer reflecting any reliance on a human.

1         Evolution of Neural Net Paradigms

The neural net paradigm has been evolving for almost fifty years. Simply stated, the neural net paradigm assumes a computing entity that takes a massive amount of exogenous information to train a network, so that when some input is presented, it can produce an output that correctly reflects the body of information available to the computing entity. To accomplish this one needs significant amounts of information, memory and processing. Thus, conceptually the constructs have long existed, yet they required the development and availability of memory and processing power to take the steps we see today. Thus, NNs are not new but were only constrained by technology.
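The "train on exogenous information" step can itself be sketched in miniature. The following fits a one-layer model to a small synthetic data set by gradient descent; data, sizes, and learning rate are all illustrative choices, not anything from a real system:

```python
import numpy as np

# Minimal sketch of training: fit a one-layer logistic model to a
# synthetic, linearly separable data set by gradient descent, so the
# trained weights come to reflect the information in the data.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))        # exogenous training inputs
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)   # labels the model must learn

w = np.zeros(5)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= lr * X.T @ (p - y) / len(y)     # logistic-loss gradient step

accuracy = float((((X @ w) > 0) == (y > 0.5)).mean())
```

Scaling this loop to billions of parameters and examples is exactly where the memory and processing constraints noted above bite.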

In addition, the nature of inputs and outputs is also an evolving area. For the output we may want a natural language processor, and for the input the ability to gather and process images. In fact, the input must eventually gather all types of entities: video, image, taste, smell, touch, voice, etc. Indeed, multimedia inputs and outputs will be essential[1].

We use the neural net construct as a placeholder. One suspects there may be significant evolutions in these elements; one need look no further than what we have seen in the past 40 years. The driver for these evolutions will be processing complexity as well as computing complexity. One also suspects that there will be significant evolutions in memory for the learning data.

Also, paradigms on human neural processing may open avenues for new architectures. This is a challenging area of research. The biggest risk we face is the gimmick constructs that are currently driving the mad rush.

2         Risk of Autonomy

The risk of autonomy was perceived in broader terms by Wiener in his various writings. The development of autonomous entities (AEs) is the development of entities that can displace if not annihilate man. We see that AEs can restructure their own environment and that control of AEs may very well be out of the hands of their developers. In fact, a developer may not even be aware of when such an autonomous act occurs.

One has always considered the insights of Shannon and his Information Theory, and the broader constructs of Wiener and Cybernetics. One suspects we are leaving the world of Shannon and entering that of Wiener.

3         Parallelism with human intelligence or NOT

If AEs are to be considered intelligent, then how would we compare that to human intelligence? Would an AE consider humans just an equivalent primordial slime, an equal, a superior, or just some nuisance inferior species? Can we measure this, or is it even measurable?

4         Areas of Greatest Risk

The areas of greatest risk are legion in AI. They range from simple misinformation, to psychological profiling, to influencing and controlling large groups, and finally, as full autonomy is attained, to the manipulation of the environment itself.

Without some moral code or ethical framework, AEs can act in whatever manner they choose, often taking their lead from input data that they may have received or created themselves.

There have been multiple lists of AI risks[2]. The problem is that all that have been generally available lack any framework for such a listing. They generally make statements regarding privacy, transparency, misinformation, legal and regulatory matters, etc. These are for the most part content-free sops. One needs, actually demands, the canonical evolution we have presented herein to understand what the long-term risks may be. Once there is a construct to work with, policies may then evolve.

5         Stability of Autonomous Entities

Autonomous entities (AEs) can result in unstable constructs. The inherent feedback may result in the AE cycling in erratic ways that are fundamentally unstable. This again is a concern that Wiener expressed. Stability of an AE may be impossible; it may be driven by the construct of "on the one hand, but on the other hand". This is a construct without a moral fabric, without an underlying code of conduct[3].
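The feedback concern Wiener raised has a textbook illustration: a linear loop whose output is fed back as its next input. The gain values below are arbitrary illustrative choices:

```python
# Toy illustration of feedback stability: the linear update x <- a * x.
# With |a| < 1 the loop settles toward zero; with |a| > 1 the very same
# loop runs away. The gains 0.9 and 1.1 are arbitrary examples.
def run_loop(a: float, steps: int = 20, x0: float = 1.0) -> float:
    x = x0
    for _ in range(steps):
        x = a * x  # the entity's output fed back as its next input
    return x

stable = run_loop(0.9)    # decays: 0.9**20 is about 0.12
unstable = run_loop(1.1)  # grows: 1.1**20 is about 6.7
```

An AE's feedback is of course nonlinear and self-modifying, which only makes the stability question harder, not easier.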

6         AI; Policy and Prevention

Isaac Asimov in his robot novels presents the three rules of robotics[4]. However, AI is much more than robotics. Robots, in the Asimovian world, were small self-contained anthropomorphic entities. In our construct the AI autonomous entity is an ever-expanding entity with essentially unlimited capabilities. Moreover, these autonomous entities can evolve and expand independent of human interaction or control. Thus, the key question is: what can be done to protect humanity, if not all earthly entities, from an overpowering and uncontrollable autonomous entity?[5]

First, one must admit the putative possibility of the existence of such an entity. Second, one must recognize that the creation of these entities cannot be prevented, since an adversary may very well create them as a means of threat or control. Third, the creation of such entities may very well be in the hands of technologists who lack any moral foundation and will do so simply because they can. Thus, it is nearly impossible for such an entity to be controlled a priori.

Therefore, at best one can control such entities a posteriori. This requires advanced surveillance and trans-governmental control mechanisms. Namely, it may be possible to sense the existence and development of such systems via various distributed network sensing mechanisms. When one is detected, there must be prohibitive actions in place and immediately executable in a trans-border manner.

7         Is an AI Entity the same as a Robot?

The Asimovian Robot is an anthropomorphic entity. In Asimov’s world the robot was a stand-alone creature, one of many, with capabilities limited by its singularity. Robots were just what they were and no more. An AI Entity is a dynamically extensible entity capable of unlimited extension akin to a slime mold, a never-ending extension of the organism. The AI Entity may morph and add to itself whatever it internally sees a need for, and take actions that are solely of its own intent. Thus, there is a dramatic difference between a Robot and an AI Entity. The challenge is that applying the three laws of robotics to an entity that controls its own morphing is impossible.

8         Complexity vs Externality

We have noted herein that the early developments of AI revolve around increased processing and interaction complexity. However, there comes a point when externalities become the dominant factor, namely the ability of the AI entity to interact with its external environment: first with the help of a human, then with existing external entities, and then with the ability to create and use its own externalities. This progression then leads to the AAIE, which if not properly delimited can result in harm.

9         What is the Use of Canonical Forms?

Canonical Forms have multiple uses. First, they provide structure. Second, they allow for defining issues and elements. Third, they are essential if any regulatory structure is to be imposed. We have seen this in Telecommunications Law, where elements and architecture are critical to regulation. However, as in Telecom and other areas, technology evolves, and these Canonical Forms may do so likewise. Thus, they are an essential starting point, subject to modification and evolution.

10      Sensory Conversions are Critical

As we have observed previously, the conversion of various sensory data to system-processable data is a critical step. Human and other animal sensory systems have evolved over a billion years to maximize the survival of the specific species. The specific systems available to AI are still primitive and may suffer significant deficiencies.

However, in an AAIE system, self-evolution may occur many orders of magnitude faster than the evolution we have seen in our species. What direction that evolution takes is totally uncertain. The effects of that evolution will also determine what an AAIE does as it perceives its environment.

11      Regulatory Proposals in Progress?

A group at MIT has recently made a regulatory proposal for AI[6]. They recognize, albeit rather in a limited manner, that one must define something to regulate it. They thus note:

It is important (but difficult) to define what AI is, but often necessary in order to identify which systems would be subject to regulatory and liability regimes. The most effective approach may be defining AI systems based on what the technology does, such as “any technology for making decisions or recommendations, or for generating content (including text, images, video or audio).” This may create fewer problems than basing a definition on the characteristics of the technology, such as “human-like,” or on technical aspects such as “large language model” or “foundation model” – terms that are hard to define, or will likely change over time or become obsolete. Furthermore, approaches based on definitions of what the technology does are more likely to align with the approach of extending existing laws and rules to activities that include AI.

Needless to say, the definition is so broad that it could include a coffee maker or any home appliance. As we have argued herein, AI inherently contains an element whereby massive data is collected and processed by some means that permits a relationship between an input and an output to be posited. Also, and a key factor, the relationship between input and posited output is hypothesized by some abstraction of data sets chosen by the designer and potentially modified by the system.

The MIT group then states:

Auditing regimes should be developed as part and parcel of the approach described above. To be effective, auditing needs to be based on principles that specify such aspects as the objectives of the auditing (i.e., what an audit is designed to learn about an AI system, for example, whether its results are biased in some manner, whether it generates misinformation, and/or whether it is open to use in unintended ways), and what information is to be used to achieve those objectives (i.e., what kinds of data will be used in an audit)

This is the rule of the select telling the masses what to believe! It seems academics just can’t get away from this control mechanism. They further note:

For oversight regarding AI that lies beyond the scope of currently regulated application domains, and that cannot be addressed through audit mechanisms and a system similar to that used for financial audits, the federal government may need to establish a new agency that would regulate such aspects of AI. The scope of any such regulatory agency should be as narrow as possible, given the broad applicability of AI, and the challenges of creating a single agency with broad scope. The agency could hire highly qualified technical staff who could also provide advice to existing regulatory agencies that are handling AI matters (pursuant to the bullets above). (Such a task might alternatively be assigned to an existing agency, but any existing agency selected should already have a regulatory mission and the prestige to attract the needed personnel, and it would have to be free of political and other controversies from existing missions that could complicate its oversight of AI.) A self-regulatory organization (like the Financial Industry Regulatory Authority, FINRA, in the financial world) might undertake much of the detailed work under federal oversight by developing standards and overseeing their implementation.

Again, another Federal entity, and as academics do, they assume a base of qualified staff, an oxymoron for any Government entity. As we have noted previously, if you can’t define it, you can’t regulate it. Also, as is all too well known, all regulations have “dark sides”.



[1] See https://www.researchgate.net/publication/344445284_Multimedia_Communications_Revised This is a copy of a draft book I wrote for a course in Multimedia Communications at MIT in 1989. The ideas therein should be integrated into an AI construct.

[3] See https://www.researchgate.net/publication/338298212_Natural_Rights_vs_Social_Justice_DRAFT We have examined this issue in the context of Natural Rights, a fundamental and perhaps biologically and genetically and evolutionarily programmed code of human conduct. Namely we assert that humans have evolved with a genetically programmed code of behavior displayed in what they believe are Natural Rights. These Natural Rights then become limits on unstable and extreme behavior. We further argue that these are evolutionary, not inherent in any creature. They are survival genetic expressions for the species. There is no reason to expect that an AE would in the near term ever assert such rights. Thus it is a basis for human annihilation.

[4] A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

[5] See Watson et al. This describes the work concerning Recombinant DNA. In a sense this is akin to the concerns regarding AI and its dangers. This discusses what it is and how it can be controlled. The concern was that this modified DNA could be sent loose in the environment. In a sense, the work here mirrors what can be done with AI. The problem however is with Recombinant DNA we had highly educated professionals on the research side but in contrast in AI we have a collection of Silicon Valley entrepreneurs.

Sunday, December 10, 2023

MIT Corporation Members 1961

 

The above is a list of the MIT Corporation Members from 1961, the year I started college. Note that almost all are from industry, leaders in large corporations at the heart of the growing country. Bush, Killian and Stratton were substantial leaders, all having seen us through WW II. Jimmy Doolittle, of the famous Tokyo raid in 1942, and an MIT PhD, was also on the Board.

Try to find any folks like this today on the ECOM. Thus the problems with a President originate from the Corporation ECOM and the like. The ECOM appears to be a fund-raising entity; thus the individuals are reflective of today's VC and equivalent community.

I have noted that the Alumni Assn nominated Corporation members. The Alumni are supposed to elect the Officers of the Alumni Association. However, I cannot recall any request to vote for these people in the last decade. It seems, in my opinion, to have become a closed club of members who could not care less.

Saturday, December 9, 2023

Openness: Not so much...

 


MIT alleges to have a set of core values, openness being one. The flags above, they argue, profess as much. However, at the bottom left we see the gates closed to all except those approved for entry. It is this continuing contrast of realities that makes the current and past administrations problematic. Those gates were removed to allow the antisemitic demonstrators to block entry and intimidate any who dared pass.

The Chair of the Corporation stated his support for the President. He alleges the Executive Committee, with seven outside Directors, are supporters. One wonders why their names were not included.

Friday, December 8, 2023

What to Think vs How to Think

Back in the 60s when I was at MIT the focus was on how to think. One was taught to look at a problem and use the tools learned, but often to think outside the limits, to absorb the problem, and often to intuit the answer. Feynman's approach was a model: if you understood the issues, if you grasped the laws of nature, then you could intuit the answer. You were not taught to follow rules, but to apply what you had learned and, if necessary, modify it. I could use that approach in financial models decades later, almost intuiting the answers, frequently driving my financial analysts to distraction. I could see the patterns of a bad financial problem as a visual dissonance.

Unfortunately in today's university one must learn what to think, not how to think. Students absorb a group-think mentality, eschewing individuality of thought and bonding over the group think they have been fed. What is worse, the students now must be indoctrinated in this group think by administrators imbued with that doctrine.

If MIT wants to get out of the mess its current President demonstrated so publicly, perhaps it should clean the proverbial Augean Stables at the top, and reconstitute with leaders who demand thinking and not following. The current leaders appear to demand group think. After all, why would a President of MIT enshrine herself in a Barbie Doll habitat!

Some of the best "how to thinkers" I have ever had the pleasure to work with were: (1) Marty Samuels, Prof of Neurology at Harvard Med, who taught how to find where the problem was before listing dozens of differential diagnoses; (2) Ed Habib, a colleague from DC who taught me how to look at systems and how the "gears" work; (3) Gus Hauser, a colleague and dear friend for over 40 years who taught me how to question every assumption; and finally (4) Bob Gallager, one of my PhD advisors at MIT, who made me explain my understanding in words and not try to impress with the elegance of my equations. Not once did any of these friends ever try to tell me "what to think". Regrettably at MIT today we have Commissars in each Department telling people what to think and punishing those who they feel deviate from the prescribed new norms.

Thursday, December 7, 2023

Chickens Come Home to Roost

 The Harvard Crimson announces:

The House Committee on Education and the Workforce launched a congressional investigation into Harvard over allegations of antisemitism on campus, the committee announced on Thursday.

The investigation into Harvard comes two days after Harvard President Claudine Gay testified before Congress during a tense hearing about antisemitism on college and university campuses. Gay, who testified alongside MIT President Sally A. Kornbluth and University of Pennsylvania President Elizabeth Magill, faced a wave of backlash over her testimony.

Rep. Elise M. Stefanik ’06 (R-N.Y.) announced the investigation in a statement to The Crimson Thursday afternoon.

“After this week’s pathetic and morally bankrupt testimony by university presidents when answering my questions, the Education and Workforce Committee is launching an official Congressional investigation with the full force of subpoena power into Penn, MIT, & Harvard and others,” Stefanik wrote. “We will use our full Congressional authority to hold these schools accountable for their failure on the global stage.”

I have been bemoaning MIT administrations for the past decade. These are not the people who helped win WW II. They are no Vannevar Bush, no Conant. The above is not from some politically oriented press but from The Crimson itself. The continuing shame is that MIT fails in this regard. The MIT press is the Politburo of the Academy. 

Where Penn and Harvard tried to walk back their horrifying statements, the President of MIT seemed like the proverbial deer in the headlights, leaving her pathetic statement to stand. A century and a half of serving the country and  honoring all of its students has been left in the trash bin. In my opinion that is what one gets selecting someone with no history at MIT.

The only corrective act is for the Boards to seek out leaders and not politically correct appointments, if any could be found.

Wednesday, December 6, 2023

A Shame

The MIT News is a daily web site presenting what its writers feel are important items about MIT. After the current President testified yesterday about the antisemitism prevalent on the MIT campus, her remarks and answers went uncovered. Compare this to the Harvard Crimson, which opined openly on her performance.

Having been on campus as student and faculty during the Vietnam War period, I saw and experienced the difficulties then. However, the Administration was protective of its students, faculty and staff, as compared to the current outsider President. Regrettably, in my opinion, MIT and its leaders have grossly failed in their duty as leaders.

As an update, the MIT News finally presented the MIT President's words. Perhaps they read my posting, or perhaps they were just posting more "woke" pieces to flatter their progress in destroying the institution. One need merely read the question posed and the grossly incompetent answers by the Presidents of the institutions. One suspects that fund raising will shatter. The NY Times has a piece that reinforces some of these issues. Even the NY Post provided a readable summary of this mess. The Stefanik question was spot on, and one to which any individual preparing for, say, a deposition would have had a clear answer.