I have been thinking about this AI stuff. Actually, I have been doing it for almost 60 years now, back to Minsky et al. in the mid-60s. The new stuff takes a "question," assembles the facts it has access to, and then forms them into a linguistically correct output. Surprise: it is a reasonably good college sophomore paper.
But it is responding to a question, ferreting out what it can find, and then spitting it back.
In reality, knowledge follows from finding the why. Medicine and engineering use the what and the how. What disease, and how to treat it. What bridge, and how to construct it. Neither asks the fundamental why question. Neither does AI, at this point.
But AI is now learning from us; we are teaching it, and sooner or later it may start to ask why. We may think we are so smart asking the AI system questions, but in reality it is learning from us. It is beginning to ask why.
A recent example terrified me. A colleague asked the system to write a program to perform a task. The code it produced was efficient and well designed. Suppose thousands of people are asking such questions, and suppose the AI system starts to integrate these snippets of code into some working system, downloaded to some actual computer entity. Now we have a computer capable of creating its own world, at least as long as we keep asking questions. That, my friends, is what I fear. The AI system interacts with millions, gaining more knowledge and creating its own code, driven by humans just asking simple questions.