The more I read about this stuff, the more I share this sentiment. We could have some fun writing a dystopian novel using the times we're living in as a basis for our source material.
Example:
In the 2020s big tech companies united in policing the narrative to protect the public from misinformation, but despite their best efforts to block the misleading content and to remove the offensive contributors, election after election revealed that around half the population remained hopelessly misinformed. Ultimately, the problem was attributable to the keyboard interface. With a keyboard inputting the data, there was no way to prescreen the information for accuracy. Sure, diligent users or bots could flag the material after the fact, and a moderator could later review the materials and take the necessary actions, but in the interim, many innocent people were exposed and perhaps even affected by what they read or watched.
Finally, researchers developed a suitable replacement for the antiquated mechanical input devices. Using powerful AI trained on carefully screened MRI scans, the semantic decoder was able to translate brain patterns directly into content, but most importantly, it could detect and intercept misinformation while it was still forming in the user's brain...
I still haven't seen an AI that will write code for Rockwell Automation ladder logic, or read it.
The formatting seems like a giant hurdle.
I hope these AIs cause all PLCs to switch to structured text.
Given that Rockwell PLC ladder logic is just text under the hood, I'm sure it can be done, and I think you can find people on YouTube demonstrating basic ladder generation from ChatGPT.
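For anyone curious what "text under the hood" means: a single Logix rung exported as neutral text is just a string of instruction mnemonics. Here's a rough sketch of that idea in Python; the tag names are made up and the exact export format can vary from project to project:

```python
# Rough sketch: a ladder rung written out in Rockwell-style mnemonic text.
# XIC = examine if closed, XIO = examine if open, OTE = output energize.
# Tag names (Start_PB, Stop_PB, Motor_Run) are hypothetical.
rung = "XIC(Start_PB)XIO(Stop_PB)OTE(Motor_Run);"
print(rung)  # this string is what a text-generating model would have to emit
```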
I didn't know it was text under the hood.
I will have to check for those tutorials, thanks.
I think there are some things in ladder logic that I don't precisely understand, like when a function takes a "control" or other semi-generic names for very specific parts.
I actually think that the better approach would be to have it produce boilerplate. There is just too much variation in a typical process for it to really be effective.
In order for the program to be functional, you would have to provide the I/O configuration and other system variables, which vary endlessly from process to process.
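To make that concrete, here's a hedged sketch in Python of what "produce boilerplate from an I/O configuration" could look like. The I/O map, tag names, and mnemonic rung format are all invented for illustration; a real project would pull tags from the controller configuration instead:

```python
# Hypothetical sketch: emit boilerplate interlock rungs from an I/O map.
# Tag names and the rung text format are illustrative only.
io_map = {
    "Conveyor_1": {"permissive": "Start_PB_1", "fault": "Fault_1"},
    "Conveyor_2": {"permissive": "Start_PB_2", "fault": "Fault_2"},
}

def interlock_rung(output, permissive, fault):
    """Series rung: output is on while the permissive is on and no fault is present."""
    return f"XIC({permissive})XIO({fault})OTE({output});"

for output, io in io_map.items():
    print(interlock_rung(output, io["permissive"], io["fault"]))
```

Even then, everything process-specific (interlock order, timers, scaling) still has to be filled in by hand, which is exactly the variation problem.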
My statement still stands: I think I could write a program manually faster.
Today, ai.com redirected me to x.ai, with Elon Musk listed as leader of the team. Could someone tell me what is going on, and which bot is smarter, or whether they are the same?
I observed a steep decline from January to February.
It seems odd to me that the model got smarter from December to January, and has declined ever since.
WSJ had a video saying that improving one ability degrades other abilities the model had.
Maintaining existing model abilities across new releases seems critical.
Particularly with self-driving vehicles, losing abilities in order to gain upgrades could be terrible.
A general language model is quite different from a task-specific AI.
But yes, even in those this can happen; for important AIs, though, they would have tests in place to make sure all the scenarios still work.
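For what it's worth, that kind of "make sure all the scenarios still work" check is basically a regression suite run before each release. A minimal sketch, assuming you have some model(prompt) -> str hook and a hand-picked list of known-good cases (both made up here):

```python
# Minimal ability-regression sketch. `model` is a placeholder callable,
# and the cases below are invented examples, not a real test suite.
REGRESSION_SUITE = [
    {"prompt": "What is 17 * 23?", "must_contain": "391"},
    {"prompt": "Reverse a Python list in one expression.", "must_contain": "[::-1]"},
]

def run_regression(model):
    """Return the prompts whose answers no longer contain the expected text."""
    failures = []
    for case in REGRESSION_SUITE:
        answer = model(case["prompt"])
        if case["must_contain"] not in answer:
            failures.append(case["prompt"])
    return failures  # a non-empty list would block the release
```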
For ChatGPT they don't have (as much?) testing on these things, because the AI is incomplete and very general. Also, right now they are doing all sorts of things to make the AI "SFW" and prevent it from answering "racist" questions and such, so math and coding are in the background. They want a higher percentage of their SFW tests to pass and are ignoring defects in other fields, so the overall score still goes up, but they are weighting SFW higher at the moment...
Anyway, this AI was never intended to be used for math or coding or anything scientific or factual.
It doesn't know whether anything is correct or wrong. It just spits out pretty text.
It would seem odd to me if language models didn't become a part of AI devices in the same way that processors have ALUs.
I think that right now language models are elaborate lookup tables, but I think the same of the other AI we have today.
There is still a big difference between a general text AI and one that is used for specific tasks.
Making a chatbot to sell you something on your webpage is a lot easier than making one that can answer anything.
The one on your webpage will look for specific words like "buy", "cola", or "fries", but a general one does not do that.
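A toy version of that webpage bot, just to show how shallow the keyword approach is (the word list and replies are made up):

```python
# Keyword-matching "sales" bot: it only reacts to a fixed vocabulary,
# unlike a general language model. Keywords and replies are invented.
RESPONSES = {
    "buy": "Great, adding that to your cart.",
    "cola": "One cola coming up. Anything else?",
    "fries": "Fries are on special today.",
}

def sales_bot(message):
    for keyword, reply in RESPONSES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I can only help with orders."

print(sales_bot("I'd like to buy some fries"))  # matches "buy" first
```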
I think I see what you are saying.
I re-watched the WSJ video, and they are talking about the LLM.
I was reading into it that this AI drift would apply to all AI.
I think you are saying that it is much worse for a general LLM than it is for an AI with specific tasks.