I think ChatGPT is a fine source.
It generated good ideas for ways to prioritize memory recall.
I think the steps for AI are like:
Define purpose for survival/observing/remembering/trying to learn
Create techniques for observation, remembering, and trying to learn
Create an instinct of what to do
Create the ego and superego analysis for prioritizing and changing tasks
Create a technique to have ideas based on the memories and observations
Train it and teach it
I think at some point, creating AI will be something that a college grad should be able to do.
I don't think it is as hard as creating the microchip was.
The math for Deep Neural Networks (I know ChatGPT is not a DNN) has been around for decades; it's only recently that hardware and the amount of available data have reached the point where it can actually be useful.
I've done it before, and you @zacharyw.larson can try it out too if you follow a course by Andrew Ng - https://www.coursera.org/specializations/deep-learning. I think taking such a course will demystify AI a lot for you. It's applied statistics, linear algebra, and calculus. You will build a DNN from scratch in Python; you will have to create functions to do matrix multiplication, derivatives, etc., and you'll see just how much of this is not thinking at all but just applied math. There's no programming of instincts or ego/superego: you set up the model, you feed it training data, you look at the outputs, and then you play with parameters to avoid overfitting (or underfitting), along with a lot of data processing. ChatGPT, for example, relies on a lot of data annotators to help it better understand inputs, as well as to grade outputs and fine-tune the model, because as mentioned, ChatGPT has no way to verify on its own the truth value of what it is telling you.
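To make that concrete, here's a minimal sketch of the kind of exercise such a course walks you through: a single sigmoid "neuron" trained by hand-coded gradient descent to learn the logical AND function. The dataset, learning rate, and epoch count here are my own toy choices (not taken from the course); the point is just that the entire "learning" loop is arithmetic and derivatives, nothing more.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: the logical AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]

# Parameters of a single "neuron": two weights and a bias.
w = [0.0, 0.0]
b = 0.0
lr = 0.5  # learning rate (arbitrary toy choice)

for epoch in range(5000):
    for (x1, x2), target in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)  # forward pass
        err = p - target  # gradient of log-loss w.r.t. the pre-activation
        # gradient descent step (the "backward pass", done by hand)
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b    -= lr * err

for (x1, x2), target in zip(X, y):
    p = sigmoid(w[0] * x1 + w[1] * x2 + b)
    print(f"({x1},{x2}) -> {p:.3f} (target {target})")
```

After training, the outputs land close to the 0/0/0/1 targets. There's no instinct or ego anywhere: you picked a loss function, took its derivative, and nudged numbers downhill. A real DNN is the same loop with more layers and matrix multiplications.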
Not for nothing, but if given a choice of tasks - 1) use math to optimize and minimize the error of a function for a set of data, or 2) make the world's first microchip from scratch - I'm going with 1) every single day, and it's not even close.
Hard disagree about what human knowledge is capable of. At minimum I can verify that I exist (cogito ergo sum), and that is not just the result of statistical inference.
I can also verify you will feel pain by putting your hand in boiling water.
So I'll learn not to put my hand in hot water, but instead put a potato in it, which I know will also become hot, so I shouldn't put it in my mouth too fast.
Bingo. LLMs use statistics to predict how they should reply. LLMs (and all other deep machine learning tools) do NOT reason logically about the content they produce or the decisions they make. Zachary seems to expect them to.
Because it gives the same reading every time water boils (under the same atmospheric pressure).
Unlike ChatGPT, which can't even order fish by speed; I correct it and then it makes the same mistake right after.
I have thermometers that give wrong readings. Hell, even the one in my car said it was 3 degrees last week while I was sweating like I was under a shower.
Just linking this because it seems like everyone is interested in the topic now: https://www.coursera.org/learn/epistemology - epistemology, the study of "how do we know what we know", how knowledge is verified, etc.
@bmeyers Regarding what I mean about getting out of that box: I mean applying Cartesian doubt from his essay "Meditations on First Philosophy". What if we doubted everything we thought we knew - that we were being tricked, either because all our measuring tools are wrong, or by a trickster demon/entity (or whatever you imagine would want to trick you into a completely false reality)? Is there anything we could still be sure of? What if water actually boiled at 500 degrees and our thermometers are all wrong, etc.?
The one thing you cannot escape is that there is a you who is being tricked. There's no way around that. There is a you that is being taught words like boiling and temperature and given faulty instruments, etc., and regardless of the truth value of anything else in the external world, there is a you experiencing it. If you are thinking, you can be assured you exist. That's what Descartes takes as the starting point of verifiable knowledge.
I know it's not the real purpose of this forum, but I am enjoying the conversation on this post a lot. Good back and forth here.
My point was, just like chatGPT can't verify things on its own, you can't verify the temperature of anything by yourself. You need tools that you trust.
What if we built tools to verify information (to a certain degree)?