Anyone testing ChatGPT's ability to rewrite JSON to create content?

The more I read about this stuff, the more I share this sentiment. We could have some fun writing a dystopian novel using the times we're living in as a basis for our source material.

Example:

In the 2020s big tech companies united in policing the narrative to protect the public from misinformation, but despite their best efforts to block the misleading content and to remove the offensive contributors, election after election revealed that around half the population remained hopelessly misinformed. Ultimately, the problem was attributable to the keyboard interface. With a keyboard inputting the data, there was no way to prescreen the information for accuracy. Sure, diligent users or bots could flag the material after the fact, and a moderator could later review the materials and take the necessary actions, but in the interim, many innocent people were exposed and perhaps even affected by what they read or watched.

Finally, researchers developed a suitable replacement for the antiquated mechanical input devices. Using powerful AI that was trained using carefully screened MRI scans, the semantic decoder was able to translate brain patterns directly into content, but most importantly, it could detect and intercept misinformation while it was still forming in the user's brain...

3 Likes

I still haven't seen an AI that will write code for Rockwell Automation ladder logic, or read it.
The formatting seems like a giant hurdle.
I hope these AIs cause all PLCs to switch to structured text.

I've thought about this too. Surely there is a way to train a model on ladder logic, though.

Given that Rockwell PLC ladder logic is just text under the hood, I'm sure it can be done, and I think you can find people on YouTube demonstrating basic ladder generation from ChatGPT.
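To illustrate the "just text under the hood" point: Rockwell rungs can be written as neutral-text mnemonics (e.g. `XIC Start XIO Stop OTE Motor`), which is plain text an LLM could read or emit. Below is a hypothetical Python sketch, not any real Rockwell tooling, that splits such a rung into (instruction, operand) pairs; the instruction set here is deliberately tiny.

```python
# Instructions that take a single tag operand in this toy grammar.
# Real Logix rungs have many more instructions and branch tokens.
KNOWN_INSTRUCTIONS = {"XIC", "XIO", "OTE", "OTL", "OTU"}

def parse_rung(rung_text):
    """Split a mnemonic rung into (instruction, operand) pairs."""
    tokens = rung_text.split()
    pairs = []
    i = 0
    while i < len(tokens):
        instr = tokens[i].upper()
        if instr not in KNOWN_INSTRUCTIONS:
            raise ValueError(f"unknown instruction: {tokens[i]}")
        pairs.append((instr, tokens[i + 1]))
        i += 2
    return pairs

print(parse_rung("XIC Start XIO Stop OTE Motor"))
# [('XIC', 'Start'), ('XIO', 'Stop'), ('OTE', 'Motor')]
```

Anything a program can tokenize like this, a text model can in principle be trained on; the hurdle is the surrounding project structure, not the rung text itself.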

2 Likes
BST XIC 2B NXB XIO 2B BND OTE The_Question

I think it would probably take me longer to ask ChatGPT to write the logic, proof it, and copy it than to just write the logic to begin with.

3 Likes

I didn't know it was text under the hood.
I will have to check for those tutorials, thanks.

I think there are some things in ladder logic that I don't precisely understand, like when an instruction takes a CONTROL tag or other semi-generic names for very specific parts.

Write some logic in an offline program, select the rung and press the 'Enter' key.

1 Like

I think the idea is to have it write whole programs, not just a one-liner.

I actually think that the better approach would be to have it produce boilerplate. There is just too much variation in a typical process for it to really be effective.

In order for the program to be functional you would have to provide I/O configurations and other system variables that are infinitely variable from process to process.

My statement still stands, I think I could write a program manually faster.

I don't know about JSON, but it's been pretty useful for making and combining SQL queries.
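As a minimal sketch of what "combining" queries can mean: merging two sources with a `UNION ALL` and aggregating the result. The table and column names below are made up for illustration, and `sqlite3` is used only so the snippet runs standalone.

```python
import sqlite3

# In-memory database with two hypothetical order tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE online_orders (customer TEXT, total REAL);
    CREATE TABLE store_orders  (customer TEXT, total REAL);
    INSERT INTO online_orders VALUES ('alice', 20.0), ('bob', 35.0);
    INSERT INTO store_orders  VALUES ('alice', 15.0);
""")

# Combine both sources, then aggregate per customer.
combined = """
    SELECT customer, SUM(total) AS total_spent
    FROM (
        SELECT customer, total FROM online_orders
        UNION ALL
        SELECT customer, total FROM store_orders
    )
    GROUP BY customer
    ORDER BY customer
"""
print(conn.execute(combined).fetchall())
# [('alice', 35.0), ('bob', 35.0)]
```

This is the kind of query-merging chore where a model can save typing, and where the result is easy to verify by running it.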

Now that ChatGPT has links to questions, it might be nice to have a link to a predefined subset of questions that lead to good JSON development.

Or alternatively, a good prompt.

1 Like

Today, ai.com redirected me to x.ai with Elon Musk declared as leader of the team.
Could someone tell me what is going on, and which bot is smarter or if they are the same?


I think Musk just bought the domain; I found an article that might explain it.
https://analyticsindiamag.com/musk-buys-ai-com-from-openai/

This week, I came across this article that provides a possible explanation for the common perception that ChatGPT is getting dumber:

I observed a steep decline from January to February.
It seems odd to me that the model got smarter from December to January, and declined ever since.

WSJ had a video saying that improving one ability degrades other abilities the model had.

Maintaining model abilities before new releases seems critical.
Particularly with self-driving vehicles, losing abilities in order to gain upgrades could be terrible.

Thinking that a self-driving vehicle would be using a large language model is not sane.

2 Likes

Sorry, should have specified that WSJ video was talking about AI models.
I try really hard to get the specifics. I spent a bunch of time on the post trying to be specific.
Seems I always miss something.
Found the video I think:
https://www.wsj.com/video/series/tech-news-briefing/chatgpt-is-getting-dumber-at-math-what-does-it-mean-for-ais-future/B9DE3A5B-A55F-40B9-92BE-CABE3C19AFD0

I think they meant more than language models.
I would link it, but I can't seem to open the link; I think it's because of the WSJ paywall.

A general language model is quite different from a task-specific AI.
But yes, even in those this can happen, though for important AIs they would have tests in place to make sure all the scenarios still work.

For ChatGPT they don't have (as much?) testing on these things, because the AI is incomplete and very general. Also, right now they are doing all sorts of things to make the AI "safe for work" and prevent it from answering "racist" questions and such, so math and coding are on the back burner. They want a higher percentage of their safety tests to pass and are ignoring defects in other fields, so the overall score still goes up, but they are weighting safety higher at the moment...

Anyway, this AI was never intended to be used for math, coding, or anything scientific or factual.
It doesn't know whether anything is correct or wrong. It just spits out pretty text.

6 Likes

I think all AI have to be concerned with data drift.
https://www.fiddler.ai/blog/drift-in-machine-learning-how-to-identify-issues-before-you-have-a-problem
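One common drift check from that space is comparing the distribution the model was trained on against what it sees in production. Here is a stdlib-only sketch of a two-sample Kolmogorov-Smirnov statistic; the data and the threshold are illustrative choices, not universal rules.

```python
def ks_statistic(sample_a, sample_b):
    """Max gap between the two samples' empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    values = sorted(set(a) | set(b))

    def ecdf(sample, x):
        # Fraction of the sample at or below x.
        return sum(v <= x for v in sample) / len(sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in values)

train = [1, 2, 2, 3, 3, 3, 4, 5]   # what the model saw in training
live  = [4, 5, 5, 6, 6, 7, 8, 9]   # what it sees in production
stat = ks_statistic(train, live)
print(f"KS statistic: {stat:.2f}")  # 0.75: the distributions barely overlap
if stat > 0.3:                      # illustrative alert threshold
    print("distributions have drifted")
```

In practice you would run a proper test (e.g. `scipy.stats.ks_2samp`) per feature on a schedule and alert when the statistic crosses a calibrated threshold.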

It would seem odd to me if language models didn't become a part of AI devices in the same way that processors have ALUs.
I think right now the language models are elaborate lookup tables, but I also think that of the other AI we have today.

There is still a big difference between a general text AI and one that would be used for specific tasks.

Making a chat bot to sell you something on your webpage is a lot easier than making one that answers anything.
The one on your webpage will look for specific words like "buy, cola, fries," but a general one does not do that.
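The webpage-bot approach described above can be sketched in a few lines. Everything here (keywords, replies) is made up for illustration, and real keyword bots usually add stemming and fallback handling on top of this.

```python
# Canned reply for each keyword the toy bot knows about.
REPLIES = {
    "cola": "One cola coming up! Anything else?",
    "fries": "Fries added to your order.",
    "buy": "Sure, what would you like to buy?",
}

def sales_bot(message):
    """Return the reply for the first known keyword found, else a fallback."""
    words = message.lower().split()
    for keyword, reply in REPLIES.items():
        if keyword in words:
            return reply
    return "Sorry, I can only help with orders."

print(sales_bot("I want a cola"))
# One cola coming up! Anything else?
```

A general assistant can't be built this way because there is no finite keyword list to match against, which is the gap the poster is pointing at.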

1 Like

I think I see what you are saying.
I re-watched the WSJ video, and they are talking about the LLM.

I was reading into it that this AI drift would apply to all AI.
I think you are saying that it is particularly worse for a general LLM than it is for AI with specific tasks.