Anyone testing ChatGPT's ability to rewrite JSON to create content?

I meant to say: people use PIDs, some program PIDs.

Or that I don't know whether my PC uses GaAs wafers, or what all the benefits of those would be.

For what it's worth, since this example gets at exactly what you said: it has been shown that taxi drivers who drive without maps have a larger hippocampus than those who use GPS constantly. Offloading tasks from our brain to external technology does reduce the amount of work our brain does. "We shape our tools but then our tools shape us" is a quote attributed to Marshall McLuhan, and in this instance it is proven literally true. We invented GPS (our tool), then we used it, and now a portion of our brain is smaller because of it (our tool shaping us). Want to go really far back for a more drastic physical example? The invention of fire and cooking changed our jaw shape over the course of evolution.

We don't just interface with technology in one direction; the technology we use is also changing us in subtle (or not-so-subtle) ways.

If something as simple as relying on a GPS instead of your own memory produces a physical change in your brain, surely offloading a large portion of academics to a chat AI will produce changes as well. No one knows what those changes will be or how they will happen; we are just going ahead with it. I'd bet money that some future study comparing those who went to schools where they learned with an AI against those who did not will show physical differences in the brain, as well as quality-of-life differences in terms of feeling autonomous and confident in your own problem-solving abilities.

6 Likes

Your rant resonates with me, and your concerns are well founded, but who wants to go back to using slide rules or log tables?

In school, I studied the various logic gates that can be built with transistors, but my knowledge of the higher levels of abstraction has proven to be much more useful. As another example, I like having the internet to look things up. In truth, I do kinda miss the time I used to spend searching for things in the library, but nevertheless, I don't want to go back to that. I like pulling out my cell phone and having a worldwide library in the palm of my hand. Of course, knowing how to work the Dewey Decimal System to look up a book in a library is certainly useful, but in today's world, it's probably not as useful as being able to work a search engine.

I don't disagree with teaching the fundamentals at all, but technology frees up a lot of time and brainpower, and including it in the curriculum is a must.

I don't miss maps either, but I'm glad I know how to use them. Even so, I don't remember the last time I unfolded one. They should probably be taught, but in today's world, knowing how to interface with a GPS system is probably more useful, and in tomorrow's world, that's probably going to be true for AI in a lot of respects as well.

Personally, I don't agree with a lot of the ways AI is being deployed, and I've spoken out against them more than once, but a lot of its misuse is attributable to people's ignorance of how it works and what it does. Therefore, I'm glad that AI is being accepted and incorporated into the curriculum. Like it or not, it's not going away, and I feel our kids will be better off if they have a head start with it and a firm understanding of it.

2 Likes

It's already happening with the integration of "Google" into our everyday lives. I have had to train several people, and almost always those who have grown up in the "Google it" era have trouble actually troubleshooting problems.

The number of times I have had to tell someone that I'm not answering their question until they come back with an educated question, one that shows they're actually trying to solve the problem instead of just getting me to hand them the answer (like a search engine), is ridiculous.

The younger engineers, I'm finding, seem to have trouble breaking issues down into smaller pieces. They instead want to say, "Hey Google, how do I do X?"

4 Likes

You're speaking the truth, to be sure. I've read these books myself, so I'm aware of the hippocampal changes GPS has caused and the changes to our jaws that have come from artificially selected, domesticated fruits (this doesn't mean I want to go back to chewing all day). A couple of years ago, I read a book called Homo Deus, which explores what human beings are likely to become. It's interesting and somewhat disturbing stuff.

2 Likes

But I wonder if a lot of our gripes concerning the incompetence of others are more attributable to sociology than to changes in technology. The 80/20 rule has always been a thing: twenty percent of the people seem to do eighty percent of the work.

Interesting discussion on technology and the brain. My take on school after going through it was that it is primarily a babysitting service with poor outcomes for a lot of those who go through it (for multiple reasons--I also did some tutoring of other students while I was in school).

We've used very flexible distance learning for our children. We choose the curriculum, teach, and supervise/mark assignments; for grades K-9, teachers only grade and provide feedback. Grades 10-12 are a bit more rigid due to graduation requirements. Something I've noticed is that we definitely need to train our children to problem-solve, try to find solutions, and come to us with what they've tried along with questions about what they're puzzled about. It seems humans (animals too, for that matter) have a natural tendency toward laziness (or "efficiency" :smile:). If we can just bring the problem to someone and they'll solve it for us, that's what we'll tend to do. Unfortunately, I think a lot of people are growing up without anyone teaching them problem-solving skills. "I can't" and "I don't know" get responses like "What can you do / what do you know?" around here.

We've also noticed with friends' kids who go to typical K-12 schools that the basics often aren't being taught. Reading is guessing/memorization (they're totally lost at unfamiliar words), and math is memorized until that system breaks down due to the sheer quantity of memorization required without understanding--at which point most decide they're not good at math for the rest of their lives.

We started all our children with basic phonics, and the older ones are strong readers (reading adult-level books for entertainment while in elementary grades). I fully expect the youngest will be too (she proudly brings her latest phonics stories to read to me, pointing out the new letter sounds she's learned). We went with a math course based on Asian methods focused on understanding/mastery rather than repetition (I aced math in school and hated the busywork) and have been very happy with the results, including with personalities that would generally not do well in math at school.

We also have them use technology a lot. After they learn to write on paper, almost all of their school work is done on iPads. I'm still not sure whether that is ideal, but technology is a necessary skill and is definitely convenient. I think the problem is relying on it without understanding. The math course we use does introduce calculators, but not until strong mental math skills are in place, so students will recognize when they must have entered something wrong into the calculator. I believe this is the right approach, and it applies in other domains too. Given the way credentials tend to be handed out, relying on "experts" without some level of understanding isn't much better than relying on AI. This is probably the point where I should run away and hide :innocent:

Back a little closer to the original topic: I played a bit with ChatGPT and found I had to coach it to get correct answers on various things. It's definitely an interesting tool, but if you don't know enough to recognize where it is wrong, it will lead you astray. For instance, it gave self-contradictory answers about why lowering the same vehicle closer to the ground reduces drag. When confronted with the contradiction, it abandoned the original (incorrect) answer it had emphasized and switched to the aspect it had barely touched on originally (which was a correct answer). It also repeatedly gave incorrect answers on how to use the UNION instruction in structured text until I laid out exactly where it was going wrong. It agreed with that and then came back with a correct example. Of the little I've tried, it seems best at finding the ideal word for something--a significantly smarter thesaurus.
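
For reference, here is a minimal sketch of what a UNION declaration can look like in IEC 61131-3 structured text (this assumes 3rd-edition / CODESYS-style syntax; the type and member names are made up for illustration, and exact support varies by platform):

    TYPE U_WordBytes :
        UNION
            wordValue  : WORD;                  (* 16-bit view of the data *)
            byteValues : ARRAY [0..1] OF BYTE;  (* the same data viewed as two bytes *)
        END_UNION
    END_TYPE

All members overlay the same memory, so writing wordValue and then reading byteValues[0] and byteValues[1] gives the two halves of the word (byte order depends on the target).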

3 Likes

This is the key right here. For most kids, it seems that the highest academic level they will achieve is the minimum standard set by the parents. There is a pervading perception among a disturbing number of students that ‘D’ stands for ‘Diploma,’ and indeed, this philosophy is reinforced by the fact that a ‘D’ is considered to be a passing grade.

To me, it doesn’t make sense that our education system allows a student to advance to the next level in a class if they haven’t achieved an A in the lower-level class. When a student has a ‘B’ in a class like Algebra, at least 10% of the foundational knowledge that is required for Algebra II is missing. With a ‘D,’ it’s as much as 40% missing. This is simply not a recipe for success, and it's the reason why, from day one of school, I drill into my kids that ‘A’ stands for acceptable and ‘B’ stands for below acceptable.

I created the following illustration to drive home what I am saying. Higher level classes build upon the foundational knowledge laid down by previous classes. If the lower level building blocks are missing because a 'B' grade was allowed to go uncorrected, this severely limits what can be achieved in the higher level classes:

Getting back to the original premise, do many parents allow their kids to skate by with Bs and Cs using a calculator as a crutch for the gaps in an inadequately developed foundational knowledge? The answer is yes, and the same will be true for AI, but poor parenting is a problem that predates any form of technology, so I really don't have any ideas on how to go about fixing it.

3 Likes

Grading on a curve was accepted by educators because educating is a job imo.

The more automated it is, the more 100% will become the norm imo.

The hippocampus of ChatGPT is shrinking.
I am convinced it is dumb right now and getting dumber compared with just a month ago.

Used to be able to tell it to just give answers and not provide disclaimers.
Half the time it fights with me to just give me tables now.
Or it says something like sorry or I apologize.
Have to go to Reddit to find predefined prompts to make it produce information.

I am looking for something better than it now.
Now that I have seen how good it was, I can't tolerate how stupid it has become relative to its former self.

2 Likes

Interesting observation. It would be easy for me to imagine that the reported dumbing down with disclaimers is the direct result of Microsoft's legal team influencing the project now that the company has committed ten billion dollars to further development.

2 Likes

Then again, this could be the problem:
(Screenshot of the popup message)

I just logged in to see if I could see what had changed, and it gave me this popup message.

"Known" is misspelled. lol

Perhaps selecting the Legacy model will correct your issue.

2 Likes

Legacy vs Default isn't an option on the page when I look at it.
It says "Feb 13 version" at the bottom.

I asked for a list of manufacturers and models of a certain kind of machine.
It had the wrong manufacturers and made up models of machines that don't exist at all.

I then said to put the websites of the companies in the leftmost column.
It placed them in some other column.

So disappointed.

Interesting. To test it, I asked it a series of questions regarding a movie my son and I stayed up late to watch. It didn't miss a beat, and I was able to confirm all of the answers which included how old the actors were when the film was made.

My wife, who was trying to sleep at the time, complained about how much noise we were making, so I asked it for tips on apologizing. Its response was considerably better than anything my simple male brain could have produced on its own, so I didn't use it, but there was definitely nothing to complain about there.

My guess is that the questions you are asking are too current. ChatGPT doesn't know anything from 2021 to the present, so if present-day knowledge is needed from that AI, you're out of luck.

2 Likes

I inquired about a startup to compete with Helion.
They heat heavy water into a plasma.
They smash this plasma together in a chamber using magnets.
They collect the energy, deuterium, and tritium.
They do it again with the deuterium and tritium the second time.
They collect an even larger amount of energy.
ChatGPT estimated the electric bill at $1 million per test to create the plasma and smash it.

It still relays information and shows code when asked.
But it put the websites in a center column instead of the leftmost one.
It is hard for me to think about anything else.

The potential is on the scale of:

  • Resolving conflicts like in Ukraine
  • Curing cancer
  • Developing affordable fusion

Yet, right now, it put the websites into a center column instead of the leftmost one.
I don't expect Putin to ask ChatGPT how to form the USSR again.

It just drew a ton of lines.
All I can think of is that the AI is effectively drooling.

Maybe it thought you wanted an ASCII picture of a railroad track, which that certainly looks like. lol

4 Likes

Not me; those were its words on how it wanted to receive the instruction.

It seems strange to me that it would try to draw a ladder logic program using characters rather than simply giving the code in plain text. If I have a large, redundant step sequence to produce in ladder logic, I will often do it in a spreadsheet because it's much faster and more accurate. Afterwards, I just copy and paste the raw code back into RSLogix. I even have a spreadsheet set up with VBA macros that helps me verify the accuracy of plain-text RSLogix code against a master program when I write programs for multiple machines.
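
To give a rough idea of what that plain text looks like, here is a sketch of the kind of neutral/mnemonic rung text a spreadsheet can generate (the tag names are made up, and the exact syntax depends on the RSLogix flavor; this shows RSLogix/Studio 5000 style rung text, with XIC/XIO/OTE being the usual examine-if-closed, examine-if-open, and output-energize instructions):

    XIC(Station01_Start)XIO(Station01_Stop)OTE(Station01_Run);
    XIC(Station02_Start)XIO(Station02_Stop)OTE(Station02_Run);
    XIC(Station03_Start)XIO(Station03_Stop)OTE(Station03_Run);

Each line is one rung, and only the station number changes, which is exactly the kind of repetition a spreadsheet formula handles well.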

I would have expected ChatGPT to default to that output. I wonder what keywords are needed to get it to do that?

4 Likes

That is why I tried asking it what I should tell it.

I played around with it, and it looks like mnemonic text is the key to getting output that resembles what I normally type into the RSLogix editor, but what it produced was definitely not a threat to my job. lol

2 Likes