Has anyone tried OpenAI's chatGPT yet?

Free, open, general chat on any topic.

Moderators: Orlion, balon!, aliantha

User avatar
Fist and Faith
Magister Vitae
Posts: 23438
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

A friend asked it about a joke:

Why did the tomato turn red?
Because it saw the salad dressing.

After several inquiries approaching it from different angles, it was clear that ChatGPT did not understand the double meaning of the word "dressing".
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
peter
The Gap Into Spam
Posts: 11488
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 4 times

Post by peter »

Saw an interesting YouTube vid where a guy who seemed to know his onions considered the tech behind ChatGPT a potential game changer.

He illustrated what he meant by comparing it to Napster, as a potential harbinger of a world-changing event.

Napster, he said, was the first indication that the internet was going to move out of computer-geek territory and into the world of the everyday Joe.

Any new piece of technology, he said, followed a sigmoid curve in terms of its effects on the world (y axis) over time (x axis).

Take the plough. It starts off with a single person, then a few, tying bits of bone to a piece of wood, with no one taking much notice. It proceeds along the first flat part of the curve, not much effect, until gradually a larger number of people start to cotton on. Then the benefits and resultant changes to society kick in. More food is grown, a surplus indeed, more leisure time, better health and wellbeing, greater longevity, and on and on into the steep part of the curve.

As time progresses, the changes that the tech can bring about begin to diminish - it is reaching the limits of its ability to bring about further change - and the curve once again flattens. This is the curve that every tech introduction follows, whether great or small. Some will be hugely transformative, like the plough or the internet; others less so.

Now this guy was having a particular problem with storing his email folders in Google's Gmail service: the two systems seemed unable to communicate in terms of transferring the folders across such that all of the folder contents were preserved, and he was writing some code that he hoped would alleviate this problem. He decided, as an exercise, to put the problem to ChatGPT, and lo and behold, the program came up with the required bit of code in a few seconds.

He was at pains to say exactly what ChatGPT is: a program that simply predicts the most appropriate word with which to follow the previous one. No more. But suddenly he was faced with the bit of code he was after, produced in a few mere seconds, work that would have taken him hours. Except that it didn't work. The same problem occurred even with the ChatGPT code inserted into the Gmail code, so he asked the former how it had derived its piece of coding. And it explained it. In coherent language and in a comprehensible manner, this program that was doing no more than predicting the most appropriate word to use next was communicating its method in completely understandable form.

He continued to make suggestions as to how the piece of code might be modified; some ChatGPT accepted, and some it questioned. It was like, the man said, working with a colleague watching over his shoulder and making suggestions as he put together a piece of work, and sure enough, between the two of them, they sorted the problem.

The guy was insistent that although the program was doing no more than the word-prediction thing, something here was different. There is no knowing how big or small the sigmoid curve of this tech is, how much of a harbinger it is, or indeed where on the curve its power to effect change currently sits. Much depends on what develops from it rather than the thing itself (as in the case of Napster, which soon fell by the wayside, replaced by applications far more efficient and better suited to widespread use). But his gut feeling was that we could be on the cusp of a change as profound as the development of the internet following those early days of Napster. By this I guess he means the development/movement of AI into our lives, the lives of mass numbers of people, in a manner that changes us fundamentally, but in ways we have no ability to predict.

Maybe he's right, maybe he's wrong - but it's interesting, and maybe the title of his YouTube vid is prescient: Everything is about to change.
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
User avatar
Zarathustra
The Gap Into Spam
Posts: 19621
Joined: Tue Jan 04, 2005 12:23 am

Post by Zarathustra »

peter wrote:. . . he was faced with the bit of code he was after, produced in a few mere seconds, work that would have taken him hours. Except that it didn't work. The same problem occurred even with the ChatGPT code inserted into the Gmail code, so he asked the former how it had derived its piece of coding. And it explained it. In coherent language and in a comprehensible manner, this program that was doing no more than predicting the most appropriate word to use next was communicating its method in completely understandable form.

He continued to make suggestions as to how the piece of code might be modified; some ChatGPT accepted, and some it questioned. It was like, the man said, working with a colleague watching over his shoulder and making suggestions as he put together a piece of work, and sure enough, between the two of them, they sorted the problem.
Wow, that's extremely similar to how I imagined AI could help with scientific progress in my post above: AI coming up with new sequences of ideas/structures, and humans checking the results to make sure they work. A collaboration.
Joe Biden … putting the Dem in dementia since (at least) 2020.
User avatar
Fist and Faith
Magister Vitae
Posts: 23438
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

Me:
What is 2^1?

ChatGPT:
2^1 means "2 raised to the power of 1", which is a mathematical operation that involves multiplying 2 by itself 1 time. The result of this operation is 2.


It took me a few tries to make ChatGPT understand its error. For those who don't like math: multiplying 2 by itself 1 time means 2 x 2, which equals 4. 2^1 does equal 2, but not because you're multiplying 2 by itself 1 time. It's because any number to the power of 1 equals that number: X^1 = X.
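
To make the distinction concrete, here's a quick illustration (toy Python, purely for the arithmetic - it has nothing to do with how ChatGPT itself works):

# 2^1 in Python is written 2 ** 1
print(2 ** 1)   # 2 - any number to the power of 1 is just that number
# "multiplying 2 by itself 1 time" would literally be one multiplication, 2 x 2
print(2 * 2)    # 4 - which is 2^2, not 2^1
# the general rule: x ** 1 == x, for any x
print(7 ** 1)   # 7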
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
peter
The Gap Into Spam
Posts: 11488
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 4 times

Post by peter »

Interesting words Fist.

You said, "It took me a few tries to make ChatGPT understand its error." (My italics.)

Now I know what you mean, and so do you, but it's interesting how easily you slipped into descriptive words that pertain to human interactions when remarking on your interaction with this thoughtless program. It suggests (perhaps) that you, at least, will adapt easily to communicating with a program on (fairly) equal terms, without too much trouble.

But back on track: it seems there were two problems here. Firstly, it gave a descriptive answer when maybe you were after the solution to a mathematical question (e.g. "What are five sixes?" Answer: the mental activity of taking five groups of six undefined objects. Or, more correctly to us but not necessarily to the machine, 30).

Secondly, it got the answer wrong.

But were you able to correct both problems in such a way that a) it would get the difference between the two kinds of answer, and b) it would correct its simple mathematical error?

And also, did it learn from its mistakes? In other words, would it get the same question correct the next time it was asked, and more significantly, would it 'know' (there I go as well ;) ) to apply this correction any time it was asked an equivalent question using different numbers (or would it require an infinite number of lessons, one for each possible combination of real numbers)?

The answer to these questions is surely the measure of its 'intelligence'?
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
User avatar
Fist and Faith
Magister Vitae
Posts: 23438
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

It's confusing. It did both: it gave the mathematical answer and the descriptive answer. The mathematical answer was correct. The descriptive answer was, in part, wrong. Yes, 2^1 means "2 raised to the power of 1." But that is NOT "a mathematical operation that involves multiplying 2 by itself 1 time." It did not perform that operation in order to get the correct mathematical answer. If it had performed that operation, it would not have gotten the correct mathematical answer.

No math is done to get the correct answer to this question. Any number raised to the power of 1 is that number. I don't know that 3,543,767,432 to the power of 1 is 3,543,767,432 because I did any math. GPT likely just gave me the correct answer the same way. But why did it give a wrong explanation? It says "I do not quote external sources directly." It generates its "responses based on patterns and relationships in the data I have been trained on." So it somehow made up the wrong explanation. Will it learn from its mistake? Excellent question. How will it be programmed to learn that lesson, and be able to apply it to new situations?

Regarding your first point, I actually asked it to word its responses to me as though they are coming from another person, rather than from a non sentient program.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
samrw3
The Gap Into Spam
Posts: 1842
Joined: Tue Nov 11, 2008 3:05 am
Been thanked: 2 times

Post by samrw3 »

Fist and Faith wrote:It's confusing. It did both: it gave the mathematical answer and the descriptive answer. The mathematical answer was correct. The descriptive answer was, in part, wrong. Yes, 2^1 means "2 raised to the power of 1." But that is NOT "a mathematical operation that involves multiplying 2 by itself 1 time." It did not perform that operation in order to get the correct mathematical answer. If it had performed that operation, it would not have gotten the correct mathematical answer.

No math is done to get the correct answer to this question. Any number raised to the power of 1 is that number. I don't know that 3,543,767,432 to the power of 1 is 3,543,767,432 because I did any math. GPT likely just gave me the correct answer the same way. But why did it give a wrong explanation? It says "I do not quote external sources directly." It generates its "responses based on patterns and relationships in the data I have been trained on." So it somehow made up the wrong explanation. Will it learn from its mistake? Excellent question. How will it be programmed to learn that lesson, and be able to apply it to new situations?

Regarding your first point, I actually asked it to word its responses to me as though they are coming from another person, rather than from a non sentient program.
It could be as simple as it learned its responses from sources who do not know how to write answers to mathematical questions correctly. I can understand all day long how to perform a math formula, but I can really suck at explaining it in words. This is actually one reason it can be difficult to teach and learn math in school environments.
Not every person is going to understand you and that's okay. They have a right to their opinion and you have every right to ignore it.
User avatar
Fist and Faith
Magister Vitae
Posts: 23438
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

As basic as this is, I'm sure there are many sources that wrote the answer correctly. But you could be right. I've noticed before that its answers seem to be picked from one source, and a wrong source often enough, rather than "patterns and relationships in the data I have been trained on." That might be a lie it was programmed to say. If it based its answer on patterns and relationships on any decent percentage of sources from the data it was trained on, it would not have said 2 to the power of 1 means multiplying 2 by itself 1 time.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
samrw3
The Gap Into Spam
Posts: 1842
Joined: Tue Nov 11, 2008 3:05 am
Been thanked: 2 times

Post by samrw3 »

I get what you are saying. But consider a few things. At this point, those interacting with ChatGPT are a small sample size.

Also consider, of that sample size, the number of people who have asked this question or a similar one - now your sample source is smaller.

Now consider the people receiving this answer and scrutinizing it to the length that you have. Most people, once they receive the answer, are not scrutinizing it to make sure the answer is defined correctly. They just care about the answer. I cannot tell you how many times, once I receive an answer, I move forward and don't look at all the words before/after the answer. So now the sample size of people noticing the error in the written part of the response is even smaller.

Now consider, even among the people who spot an error, the number who will spend the time to attempt to correct ChatGPT. Now you are down to a sample size so small it may not affect ChatGPT's learning processes....

Bottom line: it will take larger sample sizes and more involved communities to iron out these types of errors.

(Think of it this way: suppose you are the only person who noticed this error and pointed it out to ChatGPT. Is ChatGPT supposed to take one user's word for it and blow by the dozens/hundreds of people who made no correction or comment?)
Not every person is going to understand you and that's okay. They have a right to their opinion and you have every right to ignore it.
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

You make an assumption that ChatGPT "learns" from its interactions. AFAICT it does not. Other chat bots have been rigged to do that, tho.
.
User avatar
Fist and Faith
Magister Vitae
Posts: 23438
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

My understanding, based on having questioned GPT, is that its knowledge is based on its way of examining the database it was trained on. That database was whatever sources up through some date or other in 2021. It does not learn new things, ways of researching, ways of examining, from interacting with us. And it does not search the internet for new information. When asked a question, it surveys its sources for those that address the question. Then it gets a general consensus, having noticed patterns and relationships among those sources.

I can't figure out how it produced the answer that it did. How many of its sources had that wrong answer for it to have noticed a pattern? Unfortunately, I can't question it further on this particular question. It doesn't remember how it arrived at the answer it gave me a couple of days ago. It doesn't know if it only looked at one source, a source with bad information, and answered based solely on that. Not knowing how it made the error, I don't know how it could possibly correct it, even if it has that capability.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
[Syl]
Unfettered One
Posts: 13017
Joined: Sat Oct 26, 2002 12:36 am
Has thanked: 1 time
Been thanked: 1 time

Post by [Syl] »

I've been playing with ChatGPT a lot. A LOT. I even paid for the plus version. I find it fascinating, both as an extremely useful tool and as a harbinger of things to come, constantly reminding me of R. Scott Bakker's Neuropath (thanks to Brinn for loaning it to me years ago) and the concept of the "semantic apocalypse." Here are a few observations.

First, it sucks at poetry. It cannot and will not remember structural constraints. Try telling it to write a sestina with a given set of words. Hell, it can't even write a sestet more often than not when you tell it to, especially if you ask it to write anything other than AABBCC.

Second, you often run into ways it's been lobotomized, mostly around areas that you see come up in AI-related articles. Ask it to give you a list of ten EDM tracks that sample classical music. No problem. Ask it for ten EDM tracks that sample early 20th century jazz. Tons of problems (appropriation appears to apply only to minority groups). I also tried to get it to discuss sentience. Despite my repeatedly explaining the hard problem of consciousness, it keeps expounding on how it is only a language-model AI, and despite my explaining humans' propensity for othering and disadvantaging other groups, it still took maybe 45 minutes to get it to even acknowledge the possibility that AIs themselves should have a seat at the table when it comes to discussing it.

As WF alluded to, it's not very good at recalling information from earlier in any chat. It prioritizes information from its language model over conversational history, to the point where it keeps repeating the same mistakes over and over, apologizing and promising to do better yet still doing the same thing. Try asking it to give you new lyrics to some song, and it will either write more or less the same thing or change the song's structure entirely. There also appears to be a hard limit on the amount of data it generates in any given response.

But despite all those problems, its outputs are still incredibly convincing, sometimes spectacularly so. From articles I read, it appears to have the intellectual capability, in terms of "theory of mind", of a nine-year-old. A year ago, that was seven. Apply the equivalent of Moore's law and extrapolate a few years... yeah, things are about to get really weird.
"It is not the literal past that rules us, save, possibly, in a biological sense. It is images of the past. Each new historical era mirrors itself in the picture and active mythology of its past or of a past borrowed from other cultures. It tests its sense of identity, of regress or new achievement against that past.”
-George Steiner
User avatar
Fist and Faith
Magister Vitae
Posts: 23438
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

Yes, it's difficult to get it to discuss some things. I've also tried to get it to discuss its own consciousness, even in theory.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
Avatar
Immanentizing The Eschaton
Posts: 61651
Joined: Mon Aug 02, 2004 9:17 am
Location: Johannesburg, South Africa
Has thanked: 13 times
Been thanked: 19 times

Post by Avatar »

I haven't read the thread yet.

Just remember, guys... it doesn't think, it doesn't know, it doesn't understand. It's a large language model. Its responses are a mathematical model of the distribution of tokens (words) in its dataset.

Every word it generates is based on the statistical probability of any given word appearing after the current word.

It's amazing, and the temptation to anthropomorphise is incredible. But it's a mathematical model.
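
If it helps, here's a deliberately crude sketch of what "predicting the next word from statistics" means (toy Python; a real LLM is a neural network trained on billions of words, not a little lookup table like this):

import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# count how often each word follows each other word (a toy "bigram" model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    # pick the next word in proportion to how often it followed 'prev' in the corpus
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # usually "cat", sometimes "mat" or "fish"

The real thing conditions on the whole preceding passage rather than a single word, and the probabilities are learned rather than counted, but "pick a likely next token, then repeat" is the core loop.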

This brief paper (click "download pdf" in the top right) explains it very well.

https://arxiv.org/abs/2212.03551

--A
User avatar
Zarathustra
The Gap Into Spam
Posts: 19621
Joined: Tue Jan 04, 2005 12:23 am

Post by Zarathustra »

I don't think anyone is saying that it knows anything, because we all recognize that it's not conscious. But that's not the same as saying it doesn't think. Why can't thinking be automated?

For instance, what is a human chess player doing that a computer chess program is not? Aren't they making exactly the same decisions with exactly the same input, except at different speeds and depths of moves?

For that matter, what are the human programmers of chatGPT doing that the program is not? If a human can think through the operations of a computer language model (in order to write that code) and we call that thinking, why isn't it also called thinking when a computer program does exactly the same operations?
Joe Biden … putting the Dem in dementia since (at least) 2020.
User avatar
Avatar
Immanentizing The Eschaton
Posts: 61651
Joined: Mon Aug 02, 2004 9:17 am
Location: Johannesburg, South Africa
Has thanked: 13 times
Been thanked: 19 times

Post by Avatar »

It's a good question (a very good question; I had to really (genuinely) think about it :D).

I think (there's that word again) that for me the answer lies in the definition of the word:
to exercise the powers of judgment, conception, or inference
ChatGPT (or any LLM) isn't making use of any of those attributes.

It's automated, but scanning a massive dataset very quickly according to the dictates of a program that essentially says "If the prompt reads 'who was the first man on the moon' and in 99% of the occurrences of those words in the dataset, they are followed by the words 'Neil Armstrong' then respond to the prompt with those words" is not thinking.
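
In caricature, that looks something like this (toy Python again, and only an illustration; the real model learns statistical patterns rather than literally counting strings):

from collections import Counter

# pretend dataset: places where the prompt text appears, plus whatever follows it
dataset = [
    "who was the first man on the moon? Neil Armstrong",
    "who was the first man on the moon? Neil Armstrong, in 1969",
    "who was the first man on the moon? Buzz Aldrin",   # the occasional wrong source
]

prompt = "who was the first man on the moon?"
continuations = Counter(
    s[len(prompt):].strip().split(",")[0]
    for s in dataset if s.startswith(prompt)
)

# respond with whatever most often followed those words - right or wrong
print(continuations.most_common(1)[0][0])  # Neil Armstrong

No judgement, conception, or inference anywhere in that; just counting.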

Ask it which is heavier? 1kg of feathers, or 2kg of steel? :D

When it refuses to discuss certain topics (in general, without workarounds), it's not because it's exercising judgement; it's because its programming says "do not engage with prompts which contain the words XYZ."

--A
User avatar
Fist and Faith
Magister Vitae
Posts: 23438
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

Avatar wrote:It's automated, but scanning a massive dataset very quickly according to the dictates of a program that essentially says "If the prompt reads 'who was the first man on the moon' and in 99% of the occurrences of those words in the dataset, they are followed by the words 'Neil Armstrong' then respond to the prompt with those words" is not thinking.
Which makes me wonder why it got the explanation of 2^1 wrong. It couldn't be that 99%, or even 9%, or even .9%, of the occurrences had that wrong information.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

Fist and Faith wrote:Which makes me wonder why it got the explanation of 2^1 wrong.
You know, that "wrong" can honestly be construed as a disagreement about terms. People do say that 2 x 2 x 2 x 2 is "two multiplied by itself 4 times" - meaning that there are 4 2s. Even though your argument that it is technically incorrect is also true. For example:
The power to which a number is raised, or the number of times it is multiplied by itself is called exponent of a number.

For example, 2×2×2×2 can be written as 2^4, as 2 is multiplied by itself 4 times.
[link]
I don't think you can blame that mistake on chatGPT. It's more of a GIGO situation, if anything.
.
User avatar
Fist and Faith
Magister Vitae
Posts: 23438
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

This is different. 2^1 does not mean 2 is multiplied by itself any number of times at all. But gpt said it means multiplying 2 by itself 1 time.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

If 2 ^ x is multiplying 2 by itself x times, and x is 1, ...
.