Anyone testing ChatGPT's ability to rewrite JSON to create content?

I think ChatGPT is a fine source.
It generated good ideas for ways to prioritize memory recall.

I think the steps for AI are like:

Define purpose for survival/observing/remembering/trying to learn
Create techniques for observation, remembering, and trying to learn
Create an instinct of what to do
Create the ego and superego analysis for prioritizing and changing tasks
Create a technique to have ideas based on the memories and observations
Train it and teach it
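The steps above could be sketched as a toy agent loop. Everything here is invented for illustration (the class name, the importance scores, the recall rule); it is not how real systems are built, just a way to make the observe/remember/prioritize idea concrete:

```python
class ToyAgent:
    def __init__(self):
        self.memory = []  # list of (observation, importance) pairs

    def observe(self, event, importance):
        """Store an observation with an importance score ('remembering')."""
        self.memory.append((event, importance))

    def recall(self, top_k=3):
        """Prioritize memory recall: return the most important memories first."""
        return sorted(self.memory, key=lambda m: -m[1])[:top_k]

    def act(self):
        """A stand-in 'instinct': react to the single most important memory."""
        recalled = self.recall(top_k=1)
        return f"reacting to: {recalled[0][0]}" if recalled else "idle"

agent = ToyAgent()
agent.observe("loud noise", importance=0.9)
agent.observe("wall is beige", importance=0.1)
print(agent.act())  # -> reacting to: loud noise
```

Of course, the hard part is everything this sketch hand-waves away: where the importance scores come from, and what "act" actually means.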

I think at some point, creating AI will be something that a college grad should be able to do.
I don't think it is as hard as creating the microchip was.

Please show us how it is done. I'm skeptical, but you might be right. Anyways, talk is cheap.

2 Likes

It gets basic things wrong, so why would you think it's a good source for this?

Setting up a basic AI is not too hard; making it good enough to be useful, though, is a whole other level.

3 Likes

The math for Deep Neural Networks (I know ChatGPT is not a DNN) has been around for decades; it's only that hardware and the amount of data have finally advanced to the point where it can actually be useful.

I've done it before, and you @zacharyw.larson can try it out too if you follow a course by Andrew Ng - https://www.coursera.org/specializations/deep-learning. I think taking such a course will demystify AI a lot for you. It's applied statistics, linear algebra, and calculus.

You will build a DNN from scratch in Python: you will have to create functions to do matrix multiplication, derivatives, etc., and you'll see just how much of this is not thinking at all but just applied math. There's no programming of instincts or ego/superego. You set up the model, you feed it training data, you see how the outputs look, and then you play with parameters to avoid overfitting (or underfitting), along with a lot of data processing.

ChatGPT, for example, relies on a lot of data annotators to help it better understand inputs as well as grade outputs to help fine-tune the model, because as mentioned, ChatGPT has no way to verify on its own the truth value of what it is telling you.
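To make "it's just applied math" concrete, here is a minimal sketch in the spirit of those course exercises: a one-hidden-layer network learning XOR by gradient descent. NumPy stands in for the hand-written matrix routines, and the layer sizes, learning rate, and step count are arbitrary choices, not anything from the course:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2 -> 8 -> 1 network.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: matrix multiplications plus nonlinearities.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: hand-derived gradients of the cross-entropy loss.
    dp = p - y                      # gradient w.r.t. the pre-sigmoid output
    dW2 = h.T @ dp; db2 = dp.sum(0, keepdims=True)
    dh = (dp @ W2.T) * (1 - h**2)   # back through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0, keepdims=True)
    # Gradient descent: nudge every parameter to reduce the error.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))  # trained predictions for the four XOR inputs
```

After training, the four outputs should be near 0, 1, 1, 0. There is no instinct or ego anywhere in there, just minimizing an error function.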

Not for nothing, but if given a choice of tasks - 1) use math to optimize and minimize the error of a function for a set of data, or 2) make the world's first microchip from scratch - I'm going with 1) every single day, and it's not even close.

Edit: Meant to reply to @zacharyw.larson

4 Likes

ChatGPT has no way to verify on its own the truth value of what it is telling you.

Not to keep poking the bear but neither do you (or me or anyone).

Hard disagree about what human knowledge is capable of. At minimum I can verify that I exist (cogito ergo sum), and that is not just the result of statistical inference.

You conjecture you exist; I conjecture you are a figment of my imagination :slight_smile:

We can go down this philosophical hole if you want.

I can verify that water boils at around 100 degrees Celsius.

1 Like

And who is it doing the conjecturing? You can't get out of that box. Descartes settled this one centuries ago imo.

How?

:thermometer:
I can also verify you will feel pain by putting your hand in boiling water.
So ill learn to not put my hand in hot water, but instead put a potato in it. which i know will also become hot, so not to put it in my mouth to fast.

2 Likes

I'm not sure I need to get out of that box.

That's not the right answer.
The right answer is: boil water and declare its temperature is 100 degrees.

Also, why would you trust a thermometer?

Bingo. LLMs use statistics to predict how they should reply. LLMs (and all other deep machine learning tools) do NOT reason logically about the content they produce or the decisions they make. Zachary seems to expect them to.
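A crude way to see "statistics to predict the reply" in action is a bigram model that just returns the most frequent next word from its training text. Real LLMs are vastly more sophisticated, but the underlying principle is the same: predict likely continuations, not verify truth. The training text here is made up for the example:

```python
from collections import Counter, defaultdict

training_text = ("water boils at 100 degrees . water boils at sea level . "
                 "water freezes at 0 degrees .")

# Count, for each word, which words follow it and how often.
counts = defaultdict(Counter)
words = training_text.split()
for w, nxt in zip(words, words[1:]):
    counts[w][nxt] += 1

def predict_next(word):
    """Return the statistically most common next word; no reasoning involved."""
    return counts[word].most_common(1)[0][0]

print(predict_next("water"))  # -> boils (seen twice, vs. freezes once)
```

The model has no idea whether water boils; it has only counted that "boils" usually follows "water" in what it has seen.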

6 Likes

Because it gives the same readings every time when water boils (under the same atmospheric pressure).
Unlike ChatGPT, which cannot even order fish by speed; I correct it, and then it makes the same mistake right after.

Can a thermometer order fish?

I have thermometers that give wrong readings. Hell, even the one in my car said it was 3 degrees last week, while I was sweating like I was under a shower.

I never claimed a liquid in a glass jar is intelligent.
The one in your car is probably electronic

Just linking because it seems like everyone is interested in this topic now: https://www.coursera.org/learn/epistemology - epistemology, the study of "how do we know what we know", how knowledge is verified, etc.

@bmeyers Regarding what I mean about getting out of that box: I mean applying Cartesian doubt from his essay "Meditations on First Philosophy". What if we doubted everything we thought we knew - what if we were tricked, either because all our measuring tools are wrong, or by a trickster demon/entity (or whatever you imagine would want to trick you into a completely false reality)? Is there anything we could still be sure of? What if water actually boiled at 500 degrees and our thermometers were all wrong, etc.?

The one thing you cannot escape is that there is a you who is being tricked. There's no way around that. There is a you that is being taught words like boiling and temperature and given faulty instruments, etc., and regardless of the truth value of anything else in the external world, there is a you experiencing it. If you are thinking, you can be assured you exist. That's what Descartes takes as the starting point of verifiable knowledge.

I know it's not the real purpose of this forum, but I am enjoying the conversation on this post a lot. Good back and forth here :slight_smile:

3 Likes

My point was, just like ChatGPT can't verify things on its own, you can't verify the temperature of anything by yourself. You need tools that you trust.

What if we built tools to verify information (to a certain degree)?

1 Like

I can still verify by myself that steaming hot water hurts.

But yes, I've said here before that if we made a synthetic robot with sensors, then we could make it truly intelligent.