Anyone testing ChatGPT's ability to rewrite JSON to create content?

Do we believe that the decision makers know the difference?

nope, everyone seems to think ChatGPT is peak AI xd

I shudder at the idea of letting ChatGPT drive my car.

"As a large language model, I cannot allow you to leave the driveway as that could result in harm to you or other people in the case of an accident."

5 Likes

The more general the AI, the more this can happen.

That is, until we find a very good "sorter AI" which can delegate the general question to a specific AI that knows the subject
(if it detects math it will ask the math AI, if it detects code it will ask the code AI)
and a good "parser AI" which can then turn the answer back into some nice general text; something like the sketch just below.
Or even better, an AI which can Google the internet and filter out the answers. (But giving AI internet access is scary (and expensive) xd)
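
A minimal sketch of what such a "sorter AI" could look like, assuming hypothetical handler functions (ask_math_ai, ask_code_ai, ask_general_ai are placeholders, not real APIs) and with crude keyword matching standing in for the real detection step:

import re

# Hypothetical specialist handlers; stand-ins for whatever specific AIs exist.
def ask_math_ai(question):
    return "math AI answer for: " + question

def ask_code_ai(question):
    return "code AI answer for: " + question

def ask_general_ai(question):
    return "general AI answer for: " + question

def sorter_ai(question):
    # Delegate the general question to the specialist most likely to know it.
    lowered = question.lower()
    if re.search(r"\d\s*[-+*/^=]\s*\d", lowered) or "integral" in lowered:
        return ask_math_ai(question)
    if any(kw in lowered for kw in ("python", "function", "compile", "bug")):
        return ask_code_ai(question)
    return ask_general_ai(question)

print(sorter_ai("Why does my Python function return None?"))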

The more specific the AI, the easier it is to test and control new versions.

The ChatGPT devs right now just don't care about your math/code problems; they want it to become SFW because "woke" (or whatever it's called) is very important atm for some reason

2 Likes

An interesting philosophical thought experiment lays out an issue that I don't know if it will ever get past: the issue of syntax vs. semantics, as laid out in the Chinese room - Wikipedia (or, if you've got 30 minutes to kill, listen to a podcast about the difference: Episode #183 ... Is ChatGPT really intelligent? - YouTube)

ChatGPT is a very good implementation of statistical inference, but will it ever be able to make the jump from following a set of rules to produce text output based on text input to knowing what it's reading and writing about? I don't think so; at least, not in its current form. Which means that though the outputs may be good and can be correct, ChatGPT itself won't ever be able to verify that directly.

2 Likes

Nope, it will not ever happen. Programs must always fall back on a fundamental set of rules provided by the programmer(s). Those rules cannot be broken, in much the same way that the laws of physics cannot be broken.

1 Like

Programs follow the rules they were written with, but who says that something that approximates "understanding" can't be built on top of those rules? Maybe we can't make AI that is able to experience things, but why can't we build one that is able to examine its own "thought" process? It wouldn't be breaking any rules.

2 Likes

i think there was once a test done like this, but it started optimizing and "learning its own language", so they could no longer understand what the AI was thinking and it was terminated.

(not that they really understand the black box of AI now anyway)

2 Likes

Oh, you can, but it has to be told how to do that, which again falls back to some fundamental set of rules. Eventually everything has to resolve to a yes/no question (does this decision fit rule no. 1? If yes, continue; if no, exit).
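
A toy illustration of that idea (the rules and the decide function here are made up for the example, not from any real system): every step reduces to a yes/no check against a programmer-supplied rule.

def decide(decision, rules):
    # Walk a fixed, programmer-supplied rule list; every step is a yes/no check.
    for rule in rules:
        if not rule(decision):
            return "exit"      # rule not satisfied, so stop
    return "continue"          # every rule answered "yes"

rules = [
    lambda d: d.get("safe", False),     # rule no. 1: must be safe
    lambda d: d.get("allowed", False),  # rule no. 2: must be allowed
]

print(decide({"safe": True, "allowed": True}, rules))   # continue
print(decide({"safe": True, "allowed": False}, rules))  # exit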

If a brain can do a task, I think a device can be programmed to do it.

I think there is a fallacy in Searle's thought experiment.
When brains are taught to match memories and sounds, the same programming he gives as an example applies.
I know a sentence in Chinese because someone taught it to me.
I know English because someone taught it to me.
I think his thought experiment is just splitting hairs over how much Chinese the AI knows before it counts as knowing Chinese, as opposed to a subset of Chinese.

I came to this conclusion after reading the Wikipedia page on "Comprehension of Idioms" and a little on psycholinguistics, which I didn't know existed, or had forgotten it did.

I think the thought experiment is more about whether the AI has the experience of knowing Chinese. If you teach a person Chinese, the language influences their thought process, and when they translate "that car is red", there's more going on than the raw translation. The person is likely going to be having an experience based on what they're translating. Maybe they'll think of the Porsche they always wanted.

The person following instructions in the room isn't going to see the big picture or understand that the instructions they're following are about a red car. In the case of the AI, there's nothing there to "see" a red car or have an emotional reaction to one, it's just a computer following its instructions.

2 Likes

Someone says something about a red car.

That is very interesting, the idea that in our brain's filing cabinet we pull up our memory of a red car and it includes emotions to some degree. Supposing we think of a car we liked, perhaps that implies the memories might be prioritized in our brains by emotion.

The "Comprehension of Idioms" page indicated that it isn't known whether brains pull up the literal meaning and then the idiomatic meaning sequentially, both simultaneously, or just the idiom. Idioms could be stored in more than one way.

If the computer is instructed to perceive the red car one way all the time, it will perceive the car that way. If the computer develops a history with receiving the term "red car", it could be predisposed to evaluate it more in terms of racing than fixing or purchasing, for example.

Seems like somewhere in that flexibility and prioritizing might be the development of emotion.

I could see AI giving off the appearance of emotion, but I don't see how it's possible for it to feel it. I could write a script right now that says:

if userInput == "that car is red":
	currentThoughts = "racing"
	system.gui.messageBox("I want to go racing in a red car.")

It's giving a human-like output that describes the "thoughts" it's having, but it's clear that the computer doesn't actually want anything. It's just a string that I gave it. I don't see how a more complicated AI version of my script that can tell a story about the car would change that. But I agree that it could evolve more convincing and interesting string outputs.

I think we as humans are programmed in how our brains work by nature and nurture.
We receive a stimulus, pull up a memory, and have a response, I think.
The memory, I think, has been prioritized such that we pull the memory with the strongest response.
In that is emotion, and that feeling is not what I meant. The way I meant to talk about it, right or wrong, is that feeling is our body reacting to the emotion.


Suppose the way that humans develop emotions is a kind of biological programming.
Humans receive the stimulus, have a response, and then brains and bodies act according to the programming.

I hear "spider", I feel a small amount of fear that I will be bitten and get sick.
For me, I pulled up a memory of how one looks and its potential harm.
I have an emotion in that I feel some fear. I have biological responses to the memory I pull up when I hear "spider", which I then feel.
I don't fully understand how that all works.
At some level, though, I fear the spider because when I pulled up a memory, the first memory had this information associated with it.

Suppose a PC's instruction no. 1 were something like "preserve yourself."
Suppose someone said "virus", and the PC pulls up its data for virus to process it.

Given the way the computer stored and prioritizes pulling its memories, of all the memories in its filing cabinet for "virus", it pulls up virus as a kind of threat, a stimulus with a response that is like fear.
Of all the memories it could have pulled up, it pulls up this kind because of the embedded core goal.

The computer pulls up a negative view of "virus" because, in the priority of the stored memories, self-preservation came first, so the term "virus", in the form the PC knows it, has been evaluated as a threat at some level. "Virus" is a term, and the definition includes some scenarios the computer has memorized.

I don't mean that the computer feels anything, but that it has an emotion, or at least a response based on the memory it pulled up by priority.
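
A minimal sketch of that kind of priority-based recall, with made-up memories and weights (none of this is a real AI API; the numbers just stand in for how strongly an association is tied to the core goal):

# Each term maps to stored associations with a priority weight.
# Self-preservation-related associations are weighted highest on purpose.
memories = {
    "virus": [
        ("threat to self-preservation", 0.9),
        ("biology lecture notes", 0.4),
        ("antivirus purchase receipt", 0.2),
    ],
}

def recall(term):
    # Pull up the association with the strongest weight, like the memory
    # with the strongest response coming up first.
    associations = memories.get(term, [])
    if not associations:
        return None
    return max(associations, key=lambda item: item[1])

print(recall("virus"))  # ('threat to self-preservation', 0.9)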


I think my overall point is that if we analyze how our brains work, then, because brains are similar to computers, computers can be programmed to work like brains.

And some people might say, well, that is just a copy, not the real thing. I think the definition determines what is real: if we define how the brain works well enough, the "copies" we make will be real.

I agree somewhat, but living beings will always have something computers don't: instincts, needs (food, water), pain. Robots won't feel a virus burning them, so they can never feel the same as we do.
Their hearts won't pump faster when they are scared, which causes a lot of things for us. We have reflexes (moving our hand away from something hot even before we can think about it).

No, they are not. Computers are all 1s and 0s; our brains are not.
We also don't really want them to be like our brains, because then we won't have control over them.

Don't get me wrong though, in my posts I'm sounding quite "negative" about AI, as if they are not smart or something. But I'm sure they will become much, much smarter and do things a lot faster than we ever can (they already do that for many things).

But even then there is still a big step between living brains and computed "brains".

2 Likes

Neurons work quite like a NOR gate, and the voltage is fairly constant. I think intelligence comes from a huge number of threads: all neurons working in sync and pulsating. I think that when the number of hardware cores reaches 10 million, the right program can produce real intelligence.

1 Like

I'm with Victor on this one, in the sense that there is no CPU in a brain. The various lobes and limbic systems all operate simultaneously and somewhat independently of each other. That said, I do draw direct parallels between the way learning models are weighted and Hebbian theory, so I do see AI functionality as being similar to brain functionality. If this is what was meant by "brains are like computers," I agree with you.

I took Human A&P I and II in college, so I can't help but comment on this. Neurons require a specific change in polarity to depolarize, and words like "fire" or "signal" are often used interchangeably with depolarization. The firing of a typical neuron in a synaptic chain is the result of neurotransmitters being released by neighboring neurons into the synaptic clefts on the affected neuron's dendritic side, and this can only happen if the neighboring neurons have themselves fired. Not to be too pedantic, but this seems more like the behavior of an AND gate to me. Nevertheless, I see your point; individual neurons could be viewed as zeros and ones depending upon their synaptic state.
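
As a toy illustration of the gate analogy (a made-up threshold model, not a claim about real neuron physiology): the same threshold unit can behave like an AND gate or a NOR gate depending on how its inputs are weighted.

def threshold_neuron(inputs, weights, threshold):
    # "Fires" (returns 1) only if the weighted input sum crosses the threshold,
    # loosely analogous to a neuron depolarizing.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND-like behavior: fires only when both upstream neurons have fired.
for a in (0, 1):
    for b in (0, 1):
        print("AND", a, b, threshold_neuron([a, b], [1.0, 1.0], threshold=2.0))

# NOR-like behavior: inhibitory weights, fires only when no input has fired.
for a in (0, 1):
    for b in (0, 1):
        print("NOR", a, b, threshold_neuron([a, b], [-1.0, -1.0], threshold=0.0))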

2 Likes

Oh yes, we'll have something that (we think) is really intelligent.
But that will still be quite a bit further off than "copies of our brains".

I think it's important to avoid this sort of projection onto our own brains.

2 Likes

Well, maybe not. Individual neurons "fire" when levels of neurotransmitters reach thresholds at the dendrites, with the levels "learned" by dendritic proliferation. Very analog, indeed.

4 Likes