If we can store data better, then we should.
Being wrong is still an opportunity to learn.
If we make an AI similar to our brains, we need to program it to not like coffee.
To add to this, I read an article the other day (might be this one or a similar one) implying that real neurons behave in much more complex ways than their traditionally modeled counterparts.
Edit: something a little less "pop-sci": https://www.sciencedirect.com/science/article/pii/S0896627321005018
It's not about being wrong. God knows the history of philosophy is about being wrong in new and creative ways.
My point is that if you assume the computational theory of mind (Computational theory of mind - Wikipedia), you should be aware of the baggage that assumption brings and what limitations it may have. If you are already looking at the brain from the viewpoint that it operates as a computer, you can easily miss the ways it may act that are not like a computer. The most obvious example of this is the experience of consciousness, which our brains do have and computers (let's put aside the argument that ChatGPT is conscious) in general do not.
Some people and philosophers do assume the computational theory of mind is generally true, and it leads them to conclusions like "consciousness is an illusion," because 1) computers aren't conscious, 2) our brains are like computers, 3) our consciousness is at best an illusion, a movie to watch in the theater of our mind that we have no control over. I don't personally agree with this, but if you assume the computational theory of mind, I can see how you get to that conclusion.
My only real point is that it's important to be aware of the implications of the assumptions we make about ourselves and the metaphors we use. I also think it's an interesting trend throughout human history to try to understand the brain by relating it to whatever the most advanced tech of the day is. It's especially interesting now, since we are inventing tech with the explicit purpose of working the way our minds do; if we keep looking at our minds through the lens of our most advanced tech, we're about to go down a hall of infinite mirrors.
Can't argue with that. There's a picture hanging on the wall in my study so I don't forget that "dendritic proliferation" can be unlearned if the brain is allowed to become idle.
Yeah, that should have been "dendritic proliferation and pruning". /:
My favorite tool for learning new material is Quizlet. I make the flashcards, then go through them, starring the ones I can't answer instantly. Then I go through the starred set, removing the star from cards that I can answer instantly, repeating this process until there are no starred cards left. This procedure is really just establishing and reinforcing neural pathways through the slow, arduous process of Hebbian plasticity. It's a simple, easy-to-understand procedure that always works, but it takes way too long and way too much effort.
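For what it's worth, the star/unstar loop is simple enough to sketch in a few lines of Python. Everything here (the function names, the toy "instant answer" simulation) is my own invention, just to make the procedure concrete:

```python
import random

def drill(cards, answered_instantly):
    """One study session: star every card you can't answer instantly,
    then keep re-drilling the starred set until every star is cleared."""
    starred = [c for c in cards if not answered_instantly(c)]
    while starred:
        starred = [c for c in starred if not answered_instantly(c)]

# Toy stand-in for the human: each exposure to a card raises the
# chance of an instant answer, so the loop always terminates.
strength = {}
def answered_instantly(card):
    strength[card] = strength.get(card, 0.0) + 0.2
    return random.random() < strength[card]

drill(["Hebbian plasticity", "dendrite", "axon"], answered_instantly)
```

The outer loop is doing the same job as the stars: it's just a worklist of whatever hasn't been reinforced enough yet.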
What I want is a Matrix-style interface where I can learn anything instantly by simply downloading it into my brain, which I imagine would be awesome until the "other party" slips me some information that changes my voting preferences. lol
No matter what our party affiliations are, we all know they would find a way. Ugh, and marketers would also be developing clever ways to instantly manipulate our purchase preferences. In retrospect, perhaps Mother Nature knew what she was doing with the slower and more deliberate process, and I would be wise to keep the AI separated from my brain.
I want an AI that will write my JSON and code for me, and I just check to see that it is right, maybe prompt it a little for corrections haha.
I don't want any neural connection unless it is a passive wireless device worn externally, like on my ear.
I was with you until you started talking about downloading stuff into your brain. No thanks! Index cards and time are good enough for me.
"It's awesome that I know kung fu now, but why do I have this constant nagging urge to look into my car's extended warranty?"
How do you know what you are reading and writing about?
In The Beginning of Infinity, David Deutsch argues that knowledge is created by conjecture (and criticism), and that humans are universal explainers, which means they can create knowledge by conjecture.
As far as I know, ChatGPT doesn't conjecture new ideas, so ChatGPT isn't a universal explainer. I am further guessing that ChatGPT isn't set up to do this but probably could be, at which point I believe ChatGPT would be "conscious", although it wouldn't understand any human qualia, just like I will never know what it is like to get a million requests an hour for an advanced SQL query.
That is one interpretation of current LLMs and the Chinese room thought experiment: is our thinking really so different from just guessing? But I would argue we have semantics in addition to syntax, as opposed to the man in the Chinese room, who only follows rules to produce an output.
If someone said all the knowledge you have is just your brain doing statistical inference on historical data and coming up with the best answer based on some probabilities, I think that's fair for certain things. If you were trying to come up with a punch line to a joke, having a conversation, playing chess, or just doing something creative, I think there is some of that going on, and I wouldn't push back as long as the claim is not that all human knowledge falls under statistical inference.
With other things, though, it seems wrong prima facie. I don't "know" the sky is blue, it's just my brain's best guess based on probabilities? I think most people would say that's wrong, and I would agree; I think there is some knowledge that is self-evident via the qualia humans have and computers don't. Not to mention that if you posit all knowledge is just statistical inference, then how do we really know the truth value of anything, if our checking mechanism is limited to more statistical inference?
I'll concede I don't know what it's like to get a million requests an hour for advanced SQL help, but my coworkers are helping me get there.
According to Mr. Deutsch problems are inevitable but there are always solutions with the right knowledge. I think humanity will figure it all out eventually.
Living beings don't just rely on memories to know things; they can also observe and experiment to learn new things, while a computer will never be able to do that.
A computer will never learn how to make fire by experimenting with dry wood.
Sure, if you program in some "facts" (physics), it could deduce things we haven't thought of yet (this is used a lot in medicine, where AI discovers clever new uses for chemicals), but it can never be sure until we test it out in the real world. Because our facts about the world are incomplete, a computer will never be able to calculate everything.
Not being able to know things for sure means it can have weird imaginations (like inventing stuff that never happened).
This seems like it could be a pessimistic view of the problem.
Never is a very long time, for starters.
Is fire even useful to a computer that has teams of humans running the power grid and paying to keep the air conditioner on?
Really, though?
Genetic algorithms look awfully like experimentation.
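To make that concrete, here's a toy genetic algorithm (entirely my own construction, with an arbitrary target string). Blind mutation plays the role of conjecture and the fitness test plays the role of criticism, and the population "discovers" the answer by pure trial and error:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "make fire"          # an arbitrary goal for the demo

def fitness(guess):
    # "Criticism": score a conjecture against reality (the target).
    return sum(a == b for a, b in zip(guess, TARGET))

def mutate(guess):
    # "Conjecture": a blind random variation of an existing guess.
    i = random.randrange(len(guess))
    return guess[:i] + random.choice(ALPHABET) + guess[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(100)]

while max(map(fitness, population)) < len(TARGET):
    # Keep the best half, refill with mutants of survivors: trial and error.
    population.sort(key=fitness, reverse=True)
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(50)]

print(max(population, key=fitness))   # eventually prints "make fire"
```

Nothing in that loop "understands" fire, of course, but the experiment/keep-what-works cycle is the same shape as the one being described.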
Well, yes, if a super-super-supercomputer knew the position of every atom in the world (at one point in time), and somehow worked faster than light, then it could know and predict everything.
But otherwise, no, I don't think mechanical computers will ever reach the height of true sentience.
Of course, synthetic robots with cameras and sensors are something else and could do it.
A computer just doesn't work like our brains, so I don't know why we expect it to become like us. There is nothing wrong with it being different (it's probably better this way). We don't have anything close in AI that can understand feelings, and robots will never need to do things like get food or earn money. So why would they start thinking for themselves and have a free mind when they don't need to in order to survive? Just like they won't need fire, so why would they try to understand why we need it?
There is another book (yes, I love recommending books) called Life 3.0 by Max Tegmark. If you have the chance to read it or listen to the audiobook, I highly recommend it. He talks about a lot of these topics.
A poor summary of the book is that we are trying to create sentience, but it will not have human-like emotions, and creating human-like minds is probably a waste of our time anyway.
It's not a question of chance, but of time :X
There is a general video game playing project where machines are given tools to learn with, so they can adapt to different games using reinforcement learning techniques (there's a toy sketch of the idea below). It only started in 2013.
General game playing - Wikipedia
I would think that you might be right in that it would be unnecessary for machines to experiment with wood though. Seems like they could just read about it.
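Here's a minimal sketch of the "learn by trial, error, and reward" loop behind that kind of project: tabular Q-learning on a toy 5-state corridor. Everything in it (the environment, the constants, the reward) is invented for the demo and is far simpler than what the actual general-game-playing systems use:

```python
import random

N, ACTIONS = 5, (-1, +1)                       # states 0..4, move left/right
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2          # learning rate, discount, exploration

def best_action(s):
    # Greedy action with random tie-breaking.
    top = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == top])

for episode in range(200):
    s = 0
    while s != N - 1:                          # state 4 is the goal
        # Occasionally explore at random; otherwise exploit what's learned.
        a = random.choice(ACTIONS) if random.random() < epsilon else best_action(s)
        s2 = min(max(s + a, 0), N - 1)
        reward = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + best future value.
        q[(s, a)] += alpha * (reward + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

print({s: best_action(s) for s in range(N - 1)})   # learned policy: always +1
```

The agent is never told the rules of the "game"; it just gets a reward signal, which is what lets the same loop adapt to different games.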
I think we have the pieces to make real AI today, but we still have to work out the way that memories are formed, stored, and recalled.
I think part of that is that the memories we recall are driven partly by survival instincts, which computers are not usually programmed to have.
I don't think we are more than 10,000 lines of code away from real AI at this point.
Did you ask ChatGPT that? That's not a good source for this. xd