Anyone testing ChatGPT's ability to rewrite JSON to create content?

I want to see it write some JSON, but earlier I fed it a simple word-processing task that it couldn't complete.

I gave it some redundant lines of code and asked it to extract strings. It could not. Supposedly, ChatGPT writes papers, but I haven't seen that work well either recently. I am not sure if they dumbed it down due to the worldwide usage or what.

One skill I predict will become a thing to put on resumes (to my disgust) is the ability to talk to these pretrained AIs "the right way" to get good text answers/images out of them. I try to use short, direct sentences. Sometimes it's also better to work iteratively, since you can talk to it over the course of a conversation: write a function that does x; now modify it to do x and y. Sometimes this produces better results.

It still really sucks at math though.

1 Like

Resumes
Resumes are pointless to me, at least in my own case, because I always know someone who gets me in, someone who says, "This person does X to Y degree."
Their only apparent use now is to verify, as a formality, that candidates were treated relatively equally when compared.

Getting ChatGPT to give me results
If I say to ChatGPT, "I am giving you a list of rows and I want you to give me back a list of the strings after the 'as' in each row," followed by the list, that is my bar for whether the AI is useful with 70 rows.
Though, if I had to do more than the 70 rows, I might have spent more time on it.
Or maybe Left() in Excel and find & replace in an editor.
Everyone has different expectations of the AI coming out though.
ChatGPT might eventually have given me the data if I had spent more time.
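For what it's worth, that extraction is only a few lines in Python. This is a sketch with made-up sample rows, assuming each row contains a standalone "as" once:

```python
import re

rows = [
    "SELECT price AS unit_price",
    "SELECT qty AS quantity",
]

def after_as(row):
    """Return everything after the first standalone 'as' (case-insensitive)."""
    match = re.search(r"\bas\b\s+(.*)", row, flags=re.IGNORECASE)
    return match.group(1) if match else None

strings = [after_as(r) for r in rows]
print(strings)  # ['unit_price', 'quantity']
```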

ChatGPT math
ChatGPT did extract and do some calculations for me the other day over a time period with two sets of data that I didn't supply it.
It took 5 tries to get there as it kept trying to grab the datasets and failing.
In that case though, I was telling it to get data instead of providing any.
It did the math correctly, eventually.

I asked it to perform a query and tell me how long it took once though.
That was when I realized it won't do that yet.

The other day I asked ChatGPT for a list of songs from a Billboard list. I think it failed at that and I gave up before investigating much. I really think it was dumbed down due to the recent viral usage. I don't know though, just guessing.

OpenAI's biggest disappointment for me, though, is that I can't get DALL·E 2 to draw characters and locations consistently enough to construct a comic book, for example. Continuity just isn't something they even considered, it seems. So it's like a single clipart or image generator for me so far, while Photor has been better. But none of them will put in the text I want, and that seemed like it might have been the easiest thing.

If a person's resume doesn't give me adequate information on why they are qualified for the position they are applying for, I pass.

Then, if it does and they make it to an in-person interview/video call, they had best be able to expound upon the things they claimed in their resume. It's very much a tool that I use to separate those who actually know what they're doing from those who did a quick search for buzzwords.

Writing a good resume is becoming a lost art, sadly.

Perhaps, I'm just old school.

7 Likes

You probably receive more applicants. I didn't know. I am definitely not an authority on resumes.
I have only hired people without applications, either for small amounts of work or through piecework sites.
My resume is just from a template from a highly rated book.
I think when some people were quiet quitting, I was trying to quietly work, learn a lot, and do a lot, hoping to gain from it.
I don't think that is something a resume can say.

I plan to test the JSON writing abilities tomorrow.
I have some Perspective pages that are designed with left-to-right scrolling,
and some with up-and-down scrolling.
I'm hoping it can do fast conversions if I can find the words to get it to do that in JSON.
Alternatively, copying and pasting wouldn't take too long either.

If I can get it to make some charts in JSON, though, that could be awesome.
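If the scroll direction turns out to live in a single property, a short script could do the conversion without ChatGPT at all. This is only a sketch: the `scroll`/`children` keys here are made up for illustration, not real Perspective view JSON keys.

```python
import json

# Hypothetical page JSON with an invented "scroll" property
page = json.loads('{"root": {"props": {"scroll": "horizontal"}, "children": []}}')

def flip_scroll(node):
    """Recursively swap a hypothetical 'scroll' prop from horizontal to vertical."""
    props = node.get("props", {})
    if props.get("scroll") == "horizontal":
        props["scroll"] = "vertical"
    for child in node.get("children", []):
        flip_scroll(child)

flip_scroll(page["root"])
print(json.dumps(page))
```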

They talk about JSON conversions in this article:
β€’ ChatGPT Cheat Sheet

Here are a couple of other things I read this week on ChatGPT:
β€’ An interesting comparison of Claude and ChatGPT
β€’ It's official: Microsoft is giving ChatGPT 10 Billion Dollars: Direct quote from the article: "News of the investment comes less than a week after Microsoft said it’s laying off 10,000 workers."

3 Likes

Thanks for sharing. I need to test the JSON generation soon.


Is IA developing a way to scale up compute for larger datasets inside Ignition?

Is spending on Snowflake/Databricks/similar services counterproductive?

Is Azure with OpenAI's talent pool likely to be best for scaling up compute for larger datasets?

Is compute/billing the best metric to determine the best cloud data processing option?

If you are dealing with massive datasets, you should be doing the work in chunks for a number of reasons. No matter what software you're using, your hardware can only hold so much data in memory before you run into an issue. That's not an Ignition-specific issue, just a physical limitation.
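To make the chunking point concrete, here's one generic way to walk a large file without ever holding the whole thing in memory (nothing Ignition-specific, just a sketch):

```python
def read_in_chunks(path, chunk_size=10_000):
    """Yield lists of at most chunk_size lines so memory use stays bounded."""
    chunk = []
    with open(path) as f:
        for line in f:
            chunk.append(line)
            if len(chunk) >= chunk_size:
                yield chunk
                chunk = []
    if chunk:  # don't drop the final partial chunk
        yield chunk

# Each chunk can be processed and discarded before the next one is read:
# for chunk in read_in_chunks("huge_export.csv"):
#     process(chunk)
```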

If you're talking about displaying datasets in tables, then besides the same hardware limitation, I don't think it's good UI/UX to show the user a million table rows, for instance.

What situations have you run into where you needed larger datasets than Ignition was capable of handling?

Situations Snowflake/Databricks are typically used for. I don't have specifics at the moment.

I think the primary example on Snowflake's website was joining weather data with employee performance data.

For me personally, I have been enjoying the performance of SQL and Ignition.

You're welcome. In case anybody following this could use it, I also ran across the following during my reading this week:
ChatGPT Quick Start Guide

The information contained in the periodic IA newsletter and this forum is all I know about what IA is up to. From my reading, I haven't seen any information about this, and I wasn't even aware that this was a problem that needed to be addressed. I have noticed that power tables don't seem to be able to handle nearly as much information as the datagridviews I put into apps using Visual Studio, but my brain can come up with a lot of rational explanations for why that is, so I've never made an issue of it. Let's face it: just because I can get away with dumping 50,000 rows into a datagridview doesn't mean that I should. Chunking and calling data as needed is a better approach anyway.

I've never done that, so I have no idea.

As much as possible, I keep the information in a database, and I only pull in information as needed. In Ignition, I usually limit data pulls to a few thousand rows. If I were displaying the information, there wouldn't be a need to pull in more than that unless somebody bothered to scroll down that far, and I typically add things to the gui that allow users to filter the data, so they don't have to scroll to find things.
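The "pull only what's needed" pattern boils down to something like LIMIT/OFFSET pagination. A minimal sketch, using sqlite3 only so the example is self-contained (the table, page size, and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO readings (value) VALUES (?)",
                 [(i * 0.5,) for i in range(10)])

PAGE_SIZE = 4

def fetch_page(page):
    """Pull one page of rows instead of the whole table."""
    cur = conn.execute(
        "SELECT id, value FROM readings ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, page * PAGE_SIZE),
    )
    return cur.fetchall()

print(fetch_page(0))  # first 4 rows
print(fetch_page(2))  # last 2 rows
```

The same idea works with any database; the GUI only requests the next page when the user scrolls or filters.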

This is not something I have ever contemplated, so I'm not in a position to advise you on this.

2 Likes

I asked it what is going on behind the scenes:

When you submit a question to me, a number of processing steps occur "behind the scenes" to generate an answer.

  1. Input preprocessing: The text of the question is cleaned and normalized, for example by converting all the characters to lowercase and removing any punctuation.
  2. Tokenization: The text is then divided into individual tokens, such as words and punctuation, which are easier for the model to process.
  3. Input encoding: The tokens are then transformed into numerical representations, known as embeddings, which can be fed into the model.
  4. Model processing: The encoded inputs are then passed through several layers of a neural network to produce a probability distribution over the possible answers.
  5. Decoding: Finally, the most likely answer is selected and decoded back into text, which is then returned to you as the output.

These steps are executed in real-time, so when you submit a question to me, you receive an answer almost immediately.
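The first three steps it describes can be illustrated in a few lines. This is a toy, not what GPT actually uses (real models use learned subword tokenizers and large embedding tables; the vocabulary here is made up):

```python
import string

# Invented toy vocabulary mapping tokens to integer ids
vocab = {"hello": 0, "i": 1, "am": 2, "chatgpt": 3, "<unk>": 4}

def encode(text):
    # 1. Preprocess: lowercase and strip punctuation
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    # 2. Tokenize: split on whitespace
    tokens = cleaned.split()
    # 3. Encode: map each token to an integer id, unknowns to <unk>
    return [vocab.get(t, vocab["<unk>"]) for t in tokens]

print(encode("Hello, I am ChatGPT."))  # [0, 1, 2, 3]
```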

me: explain the tokenization and provide a short sentence example

Tokenization is the process of breaking down a sentence, paragraph, or document into smaller units called tokens. Tokens are typically individual words, but can also be punctuation marks, numbers, or symbols.

For example, consider the sentence "Hello, I am ChatGPT." After tokenization, this sentence would be broken down into the following tokens: "Hello,", "I", "am", "ChatGPT.".

Tokenization is a crucial step in many NLP (Natural Language Processing) tasks, as it allows the model to work with individual elements of the text, rather than trying to process the entire text as a single unit. By breaking down the text into smaller, more manageable pieces, the model can more easily analyze and understand the relationships between the individual words and phrases.

me: during model processing, when the probability distribution is assigned to answers, what are typical values for 5 low probability answers and the top highest 5

The probabilities assigned to the answers during model processing are represented as a probability distribution, which sums up to 1.0. The values assigned to each answer reflect the model's confidence that each answer is correct.

The top 5 highest probability answers typically have values close to 1.0, while the values for the 5 lowest probability answers are close to 0.0. The exact values will depend on the specific model and the input data, but in general, a high-probability answer will have a value that is much larger than a low-probability answer.

For example, if the model assigns probabilities of 0.9, 0.8, 0.7, 0.6, and 0.5 to the top 5 answers, these would be considered high-probability answers and are likely to be selected as the final output. On the other hand, if the model assigns probabilities of 0.1, 0.2, 0.3, 0.4, and 0.5 to the 5 lowest probability answers, these would be considered low-probability answers and are unlikely to be selected as the final output.

It's worth noting that the specific values of the probabilities can vary widely depending on the context, the specific model being used, and the input data. The most important thing is the relative ranking of the probabilities, rather than the exact values themselves.
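One nit on its own example: the values in a single distribution must sum to 1.0, so a top five of 0.9, 0.8, 0.7, 0.6, 0.5 can't coexist. In practice the raw scores go through a softmax, which forces them into a valid distribution. A minimal sketch:

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([4.0, 2.0, 1.0, 0.5, -1.0])
print([round(p, 3) for p in probs])  # highest logit gets most of the mass
print(sum(probs))  # 1.0 (up to floating-point rounding)
```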

me: how is the length of the response determined?

The length of the response is determined by a combination of factors, including the input data, the specific task the model is performing, and the design of the model itself.

  1. Input data: In some cases, the length of the response is directly influenced by the length of the input data. For example, in question-answering tasks, the length of the response may be limited to the relevant information in the input text.
  2. Task: Different NLP tasks may have different requirements for the length of the response. For example, in text classification, the response is typically a single label or class, while in machine translation, the length of the response will depend on the length of the source text being translated.
  3. Model design: The design of the model can also impact the length of the response. Some models are designed to generate short, concise answers, while others are designed to generate more detailed, multi-sentence responses. For example, a generative model designed for text completion may generate a longer response than a model designed for text classification.

Ultimately, the length of the response is determined by a combination of these factors, and may also be influenced by additional factors such as the specific training data used to train the model and the optimization criteria used during training.

2 Likes

Pretty interesting. If you want to make your own ChatGPT (very) lite to get a basic understanding of what is happening you can always look into Markov Chains.

I did this a few years back: I scraped a bunch of data from a prolific Twitter poster, and then I could start a sentence and it would finish it "in their voice," so to speak.

It's much simpler than the DNN/RNN they use, I'm sure, and your dataset will be much smaller than theirs. You'll find that the amount of quality data an AI draws from is the biggest factor in whether it's useful or not. But it's a pretty cool thing to build for yourself to get a top-level, surface understanding of what is going on.
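A word-level Markov chain like that fits in a couple dozen lines. Here's a minimal sketch with a made-up training sentence standing in for the scraped tweets:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain from a start word, picking a random observed successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the", length=6, seed=42))
```

Words that appeared more often as successors get picked more often, which is what gives the output that "in their voice" flavor.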

2 Likes

I wasn't sure if I would post this, but I laughed out loud yesterday during lunch when I asked ChatGPT to tell five jokes in the style of a few comedians.

Of the 25 it wrote me, at least 4 were great.
Becoming a standup comedian or writer has never been easier imo.

Markov Chains | Brilliant Math & Science Wiki.
Thanks @bkarabinchak.psi
I think I saw it once before, but the only application I knew was AI chess.
Though the website where I looked into it ignored the most popular beach.

I saw this a little while back, and it had me rolling:

4 Likes

My son's high school impressed me this week when they released an official statement regarding the use of AI. Surprisingly, they do not consider using ChatGPT or other forms of AI to be cheating, and they are presently working on incorporating AI into the curriculum to ensure graduates enter higher education or the workforce with a proper understanding of how to use it. They compared AI to the calculator, which was once considered cheating but is now a standard part of day-to-day life both in school and beyond it.

4 Likes

Regarding cheating, there was a professor I heard about who asked ChatGPT to write a report.

Then he asked it to grade it, haha.

1 Like

If only ChatGPT could automatically teach me Ignition to Gold Cert and Cert me for free haha.

That is disheartening. It's not like the prevalence of the calculator has turned us all into math geniuses; I would argue the opposite. Why learn it if you always have something on you that can do it for you? While the calculator may let you do arithmetic perfectly, I do think it inhibits people's ability to interpret math or see problems in a mathematical way, because teaching has become rote and just teaches toward the calculator.

Now we have an All-Purpose Calculator in ChatGPT.

Sort of related, but I've noticed my younger coworkers (25 or under) are very used to intuitive, nicely designed UIs and are not actually used to working with computer file systems directly, because they grew up using apps that are pretty mature in terms of UI/UX. In a way, I find they're kind of worse at technology because they've only used the kind that holds your hand, and I think this will make that worse.

I'm a math tutor on the side, and it is upsetting to see people who don't understand math but know, almost from muscle memory, what order of buttons to push on a calculator without understanding the why. I think that sort of surface-level knowledge is now going to be the norm: why know or remember anything if ChatGPT can just give me the answer? OK, luddite rant over.

8 Likes

Not a luddite at all. That would be a refusal to learn/use new things. This problem is the refusal to learn/use foundational "old" things. Things that are still true and necessary, but people's eyes glaze over when they come up.

7 Likes

Debilitating for many, but I think people get good at whatever they want to get good at.

When MapQuest came out, many stopped connecting pages in those map books the DOT puts out.
When smartphones came out, many people stopped reading pages of directions.
When self-driving cars come out, we might not look at the GPS.
People who know where they are going don't use GPS.

1 Like