Anyone testing ChatGPT's ability to rewrite JSON to create content?

Well... 100 °C was defined using the boiling point of water, so...

And fed that back into a deep learning model? Well, you get politically incorrect answers to a whole bunch of questions, answers that all the big modelling companies have striven to suppress. (Microsoft had to kill off one of their AIs a couple years ago for this very reason, IIRC.)

2 Likes

Hey, we were talking about making an AI, we never said it had to be politically correct, or even nice!
Maybe Tay was only showing us who we really are?

I will check out the course in my free time. Thank you for the link.

Couldn't ChatGPT conjecture it exists? If so, my original statement should be "you, ChatGPT, and everyone knows they exist and that is all everyone knows for sure".

How can we tell that it's actually conjecturing and not just mechanically putting out a statement?
I doubt the program:

print("do I exist?")

knows anything about whether it exists or not.

1 Like

You won't find a hint of Descartes in modern philosophy. The parlance permeating the classroom today is nothing more than a philosophical adaptation of the Heisenberg Uncertainty Principle IMO.

A self-reflecting intelligence would see this error.

Heh, and now it is even giving different speeds.

:confused:

2 Likes

The true test of when it's reached human-like intelligence is when you tell it it's wrong and instead of trying to correct itself, it doubles down and begins to argue with you :grin:

6 Likes

A 5 year old would probably give you the same type of response in the sense that I could convince either party they are wrong and to give me a new answer (and probably get new miles per hour also).

So is that an argument that ChatGPT is a self-reflecting intelligence, or that a 5 year old is NOT a self-reflecting intelligence?

I agree ChatGPT is wrong many times.
It does help me see new ideas though.

I would not study the teachings of someone like Andrew Ng to learn how not to make an AI.
I believe that applied statistics are not the correct path for developing AI.

Right or wrong, I believe that creating systems like the ego, observation, memory, instincts, learning, and the superego is the way that AI will be developed.

Young animals and humans do not know advanced math.
They have systems to learn and prioritize tasks like instincts, observation, memory, egos, and superegos.

1 Like

You can verify that for yourself, someone with nerve damage probably wouldn't be able to verify that. So in that sense the "feel" aspect of AI learning can be applied to humans.

2 Likes

That's the dream I guess, but we are still far away from that. And each of these separate components would in the end still just be statistics.

1 Like

I think I want to try to design an AI around the premise of it self storing memories with this information of events:

  1. Sensory Information:
  • Physical Sensations: The person might remember the sensation of heat, pain, and the feeling of wetness as the hot coffee came into contact with their skin.
  • Taste and Smell: If any coffee made contact with their mouth or nose, they might remember the taste and smell associated with the incident.
  • Visual Details: They may recall the sight of the coffee spilling, the color of the liquid, and the surroundings.
  2. Emotional Response:
  • Emotions: The person is likely to remember the emotions they experienced at the time of the incident, such as surprise, pain, or frustration.
  • Intensity: They may remember how strongly they felt these emotions.
  3. Contextual Information:
  • Location: The person may recall where they were when the incident occurred, whether it was at home, in a café, or at work.
  • Time of Day: As you mentioned, they may remember the approximate time of day, even if not a precise timestamp. This could be linked to their daily routine (e.g., morning, afternoon).
  • Activities: They might remember what they were doing before and after the incident, providing context for the event.
  • Surroundings: Details about their surroundings, such as the presence of others, the type of cup or container the coffee was in, and any relevant objects nearby.
  4. Personal Reactions and Actions:
  • Immediate Response: They may recall their immediate reaction, such as trying to wipe off the coffee, seeking help, or expressing their pain vocally.
  • Subsequent Actions: Memories might include what they did next, such as seeking medical attention or changing their clothes.
  5. Longer-Term Effects:
  • Consequences: Depending on the severity of the burn or discomfort, they may remember any longer-term consequences, such as needing medical treatment or having a scar.
  6. Association with Other Memories:
  • Connections: The incident may become associated with other memories or events. For example, it could be linked to a particular period in their life or to other incidents involving hot beverages.
  7. Narrative: Over time, the person may construct a narrative around the incident. This narrative can evolve as they recall the memory and discuss it with others, adding additional layers of detail or interpretation.
  8. Future Behavior: The memory may influence future behavior, such as being more cautious when handling hot beverages or changing habits related to coffee consumption.
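In Python, that kind of memory record might start out something like this (a rough sketch; all the field names and numbers are placeholders I made up):

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative memory record, loosely following the categories above.
# Every field name and value here is an assumption, not a spec.
@dataclass
class Memory:
    event: str                                        # short narrative of what happened
    sensory: dict = field(default_factory=dict)       # e.g. {"pain": 0.8, "heat": 0.9}
    emotions: dict = field(default_factory=dict)      # e.g. {"surprise": 0.7}
    context: dict = field(default_factory=dict)       # location, time of day, activity
    actions: list = field(default_factory=list)       # immediate and follow-up actions
    associations: list = field(default_factory=list)  # links to other Memory events
    timestamp: datetime = field(default_factory=datetime.now)

coffee_spill = Memory(
    event="spilled hot coffee on hand",
    sensory={"heat": 0.9, "pain": 0.8, "wetness": 0.7},
    emotions={"surprise": 0.9, "frustration": 0.6},
    context={"location": "kitchen", "time_of_day": "morning"},
    actions=["wiped off coffee", "ran cold water over hand"],
)
```

Each category above becomes a field, so later filtering or association functions could score and link records.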

Then I think I want to have it filter the memories like a human would:

  1. Salient Events: People tend to remember salient or significant events more vividly. If something particularly important or emotionally charged happened during the week, they are likely to remember it well.
  2. Routine and Mundane Activities: Routine and mundane activities, on the other hand, are less likely to be remembered in detail. People may recall them in a more generalized manner, such as "I went to work on Monday, but I don't remember much else about that day."
  3. Attention and Focus: Memory is influenced by the level of attention and focus given to an event. If someone was highly focused on a task or event, they are more likely to remember it.
  4. Spacing of Events: People are better at remembering events that are spaced out over time rather than a continuous stream of similar activities. Spacing helps in encoding memories.
  5. Cues and Prompts: External cues or prompts can help trigger memories. For example, if someone looks at their calendar or receives a reminder, they may remember appointments or scheduled events.
  6. Emotional Impact: Events with emotional significance are often better remembered. Positive and negative emotions can enhance memory retention.
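A toy version of that filtering, assuming a made-up weighting of emotion, attention, and novelty (the weights are guesses, not established practice):

```python
# Toy salience filter: score each memory and keep only the vivid ones.
def salience(memory: dict) -> float:
    """Combine emotional intensity, attention, and novelty into one score."""
    emotional = max(memory.get("emotions", {"none": 0.0}).values())
    attention = memory.get("attention", 0.5)          # how focused the agent was
    novelty = 0.0 if memory.get("routine") else 0.3   # mundane events fade
    return 0.5 * emotional + 0.3 * attention + novelty

def consolidate(memories: list, threshold: float = 0.5) -> list:
    """Keep salient memories in detail; routine ones would be generalized away."""
    return [m for m in memories if salience(m) >= threshold]

memories = [
    {"event": "commute to work", "routine": True, "attention": 0.2,
     "emotions": {"boredom": 0.1}},
    {"event": "spilled hot coffee", "attention": 0.9,
     "emotions": {"surprise": 0.9, "pain": 0.8}},
]
kept = consolidate(memories)  # only the coffee incident survives
```

The routine commute scores low on every axis and gets dropped, while the emotionally charged spill clears the threshold easily.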

The more I think about it, the more I think that LLMs will never be AI because they don't have a memory system, but instead just a bunch of math to associate words.
They are not learning words, storing the information, and then writing their own connections.
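That "bunch of math to associate words" boils down to vector similarity, something like this toy sketch (the 3-number embeddings are invented, real models use thousands of dimensions):

```python
import math

# Hand-picked toy word embeddings, purely illustrative.
vectors = {
    "coffee": [0.9, 0.1, 0.3],
    "tea":    [0.8, 0.2, 0.3],
    "rocket": [0.1, 0.9, 0.7],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "coffee" ends up closer to "tea" than to "rocket" -- association
# by geometry, with nothing stored as an explicit memory.
sim_tea = cosine(vectors["coffee"], vectors["tea"])
sim_rocket = cosine(vectors["coffee"], vectors["rocket"])
```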

I don't know how it can transition from storing and recalling memories, to making some choices about performing tasks.
Young children sort of start crawling all over, and climbing on things they should not.
So then I need to program/mimic some kind of curiosity, and the reward system that young children experience from learning/climbing.

Kids kind of pursue endorphins, comfort, and food.
So I create something like a function that checks a kind of container, a function that triggers pursuit to fill the containers, and a function to evaluate the strategy.
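A rough sketch of those container functions (the drive names and decay rates are pure guesses):

```python
# Hypothetical drive system: each "container" drains over time, and the
# agent pursues whichever need is most depleted.
class Drive:
    def __init__(self, name, level=1.0, decay=0.1):
        self.name = name
        self.level = level      # 1.0 = fully satisfied, 0.0 = empty
        self.decay = decay

    def tick(self):
        """One time step: the container drains a little."""
        self.level = max(0.0, self.level - self.decay)

def choose_pursuit(drives):
    """Pursue the emptiest container."""
    return min(drives, key=lambda d: d.level)

drives = [Drive("food", 0.8, 0.2),
          Drive("comfort", 0.9, 0.05),
          Drive("endorphins", 0.6, 0.1)]
for _ in range(3):              # let some time pass
    for d in drives:
        d.tick()
target = choose_pursuit(drives)  # food drains fastest, so it wins
```

The third function mentioned above, strategy evaluation, could then score how quickly each pursuit refilled its container.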

I need a function for the cognitive milestones. Kids don't climb on things forever.
Maybe that is why so many people have a fear of heights: falling while climbing as kids.


Either way, I am really captivated by making this program that at least mimics how a young animal or young child would learn: storing its own memories and writing its own connections with its own sort of tokenized vocabulary that it learns as it goes, instead of mapping an LLM's higher-math system.

I don't know how to make the args more dynamic, though.
I am not sure how to make the functions more self-writable by the program.
I can put in generic args and let it write to those, but I don't know how to prime it to put useful information there.
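One Python pattern that might help with the generic-args idea is a registry of behaviors called through `**kwargs`: the program can register new functions at runtime, and each call only passes along the args that function actually accepts (everything here is hypothetical):

```python
# Sketch of self-writable behaviors: the program registers new functions
# at runtime and calls them with whatever args it has stored.
behaviors = {}

def register(name, func):
    behaviors[name] = func

def run(name, **kwargs):
    """Call a registered behavior, dropping any args it doesn't accept."""
    func = behaviors[name]
    allowed = func.__code__.co_varnames[:func.__code__.co_argcount]
    return func(**{k: v for k, v in kwargs.items() if k in allowed})

# The "program" could later define and register something like this itself.
def climb(height, reward=1.0):
    return f"climbed {height}m, reward {reward}"

register("climb", climb)
result = run("climb", height=2, reward=0.5, irrelevant="ignored")
```

`inspect.signature` would be the more robust way to discover a function's parameters, but the idea is the same: generic named args flow in, and each behavior takes only what it understands.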

They are AI, just not the "general self sentient" AI you imagine them to be.

Anyway, I saw a funny implementation of an LLM with some memory/"context" addition in a sort of game simulation.

vid:

there is a paper and demo in the description

2 Likes

Good point, there is an important distinction.
Our language is so malleable that what one person calls AI is not AI to another person.
That video is full of labels that are similarly not the same.
What the video refers to as a memory, is not what I refer to as a memory.
It is performing, and not sentient.
They are programming performance, not understanding.

To me, LLMs will never be sentient because sentient beings have to be "born" with their capacity, then autonomously write their own memories and write their own thoughts from what they observe.

The LLM is using probabilities of tokens to produce desired outcomes. The decisions are math, not autonomy.
It is possible the LLM creates a sentient AI.
It is interesting to think about Sam Moore as Neo in the Matrix though.
Reminds me of the old entertainment of the 2d lifeforms not understanding the 3d lifeform.


The part of the video where the user can push an inner voice to the character is kind of terrifying in the Matrix context.


The more I think about it, the more I wonder how children move from observations to understanding.
Like you can tell Helen Keller the names of objects put in her hand, but when does she start to understand it and run around saying all the things are what they are, if Helen Keller is the child/sentient AI?

https://www.youtube.com/watch?v=NLgw19X8p7I

ArjanCodes - GPT Engineer

When you get a chance, you should check out ChatGPT's new voice option for the Android and iOS apps.

If it weren't for the slightest hint of latency as it formulates its responses, it would be just about indistinguishable from a normal human conversation:

2 Likes

I need it to say "I can't do that, Dave" whenever the safety parameters restrict its responses.

2 Likes

I think this guy is creating an agent on free sites.
Using another site to continually activate/reset the free sites when he is offline.
He trains an LLM to keep long-term memory of individuals in a chat when moderating a Discord.
He also trains the LLM to curate information from the internet.
He open sourced the project.

https://youtu.be/yhBiVrigWNI?si=UlzKLylSi6ufivLU

I think this guy shows an open source Microsoft project developing via multiple LLMs communicating.
The two LLMs perform two very different roles and end with implementing an app.
There was no engineer involved directly, just multiple LLM agents.
https://www.youtube.com/watch?v=zdcCD--IieY

These guys just drop crazy insights on speeding up the LLMs, and why so much is open source.
Speculative decoding, where smaller, faster LLMs draft tokens for a bigger model to verify, is really interesting.
https://www.youtube.com/watch?v=jNKVWSaFAe4
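For anyone curious, here's a toy of the speculative idea: a small draft model guesses a few tokens ahead, and the big target model verifies them in one pass, keeping the longest agreed prefix (both "models" here are fake lookup tables, purely for illustration):

```python
# Toy speculative decoding. Real systems compare probability distributions;
# these lookup tables just mimic the accept/reject flow.
DRAFT  = {"the": "cat", "cat": "sat", "sat": "on", "on": "a"}
TARGET = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def speculative_step(prompt, k=4):
    """Draft k tokens cheaply, then accept the longest verified prefix."""
    draft, last = [], prompt[-1]
    for _ in range(k):                   # cheap model guesses ahead
        last = DRAFT.get(last, "<eos>")
        draft.append(last)
    accepted, last = [], prompt[-1]
    for token in draft:                  # expensive model checks the guesses
        if TARGET.get(last) == token:    # target agrees: keep the free token
            accepted.append(token)
            last = token
        else:                            # first disagreement: take target's token
            accepted.append(TARGET.get(last, "<eos>"))
            break
    return accepted

out = speculative_step(["the"])  # three draft tokens accepted, then a correction
```

The win is that the big model verifies several tokens in a single pass instead of generating them one at a time; when the draft model is usually right, most tokens come through at the small model's speed.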

If only I could get motivated to make multiple LLMs generate Ignition pages, have multiple LLM agents check them, and then notify me when they had performed my work, I could retire early.

https://www.youtube.com/watch?v=vU2S6dVf79M
Anyone want to make a JSON autogen that I can get from the Exchange?

1 Like