Has anyone tried OpenAI's chatGPT yet?

Free, open, general chat on any topic.

Moderators: Orlion, balon!, aliantha

User avatar
Zarathustra
The Gap Into Spam
Posts: 19621
Joined: Tue Jan 04, 2005 12:23 am

Post by Zarathustra »

Wayfriend wrote:AI is in that class of things, along with robotics, telecommunication, computers, electronics, motor vehicles, etc. which change the labor market by reducing industry's dependence on human beings. Given how massively industry invested in telecommunication, computers, electronics, motor vehicles, etc. it is not unreasonable to see AI following the same path.

It won't be "niche". It's already in Alexa, Siri, etc.
What jobs did Alexa and Siri take? I don't know about you, but I never had a hired hand standing around to turn on my lights when I shouted a command. AI makes possible new services that no one would ever have paid another human to perform, not to mention that no human would ever have done such a tedious job. [Well, we used to have elevator operators, so maybe I'm wrong on the last point. :lol: ]

If a job can be automated, it's not a job fit for humans.

Despite nearly a century of ever-increasing automation, our labor participation rate has stayed nearly flat for that entire time. Automation may take some jobs, but somehow we still find ways to employ roughly the same percentage of people.

But these are the sociological/economic/political impacts of AI. I don't know why you keep pushing the conversation into territory that the owners of this site have explicitly and unequivocally forbidden members to discuss. Did we want the Tank tanked? Or did we just want one particular view to be expressed? Or maybe we just wanted certain members to go away? Oh well, if you can discuss these things, so can I. I'm not going anywhere.
Joe Biden … putting the Dem in dementia since (at least) 2020.
User avatar
Fist and Faith
Magister Vitae
Posts: 23439
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

Personally, I don't see wf's comment or your response as political. I assume that, if an AI takes jobs from humans, it will not take from, or spare, only one party. And it isn't members of only one party who don't want to lose their jobs to AI. Seems like an economic issue.

And I don't see anyone giving either of you any grief about the direction you're taking the conversation.

I disagree that, if a job can be automated, it's not a job fit for humans. I'd rather have a cashier than do it myself. And I don't believe prices will stop rising if the number of employees drops because of self-checkouts. And neither of those sentences is political.

Has anyone else asked it to write anything for them? Has anyone else found it to be inaccurate in any way? Is anyone else chatting with it at all?
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
Wosbald
A Brainwashed Religious Flunkie
Posts: 6084
Joined: Sat Feb 07, 2015 1:35 am
Been thanked: 2 times

Post by Wosbald »

+JMJ+

ThreadReaderApp: Thread #20230211035517


Struck by how some of the folks defending ChatGPT don't seem to recognize what parts of the writing process are the useful, important parts and also have very little sense of how those skills are taught.

"AI can incompetently perform the important parts" — not a strong sell!

A lot of it is missing that the process of outlining and organizing is a key part of the analysis and that the analysis — the thinking behind the writing — is actually what is valuable in a written product.

The goal is to think and then communicate thoughts.

ChatGPT cannot think or analyze — the reason it "hallucinates" is that it — and I keep stressing this — doesn't know anything about its topic and is just stringing words together in plausible arrangements.

"The AI lets me write something without thinking about it" (by say, feeding it some facts and a predetermined conclusion, letting it sort out the rest) defeats the purpose of writing analysis where someone is supposed to have thought about something.

Ask yourself: how valuable to you is an essay written by an idiot who knows nothing about a topic and has given it no thought at all?

Now, how about the same but by an expert who knows many things about it and has spent time thinking hard about it?

ChatGPT is the first one.

That first essay is worse than useless: it wastes your time and the best it may do is mislead you.

The second one is valuable — if you are interested in it — because it may inform you. But everything that makes it valuable are the things ChatGPT is incapable of doing.

The process of organizing those facts, working out the smaller conclusions they create and how those conclusions build to a final, useful conclusion — that's the actual work of analytical thinking happening there.

The writing process is a vehicle for it.

But ChatGPT cannot do that thinking — it doesn't have the capability to do so. Maybe some future AI will be able to, but not this one. The "low reliability" of its answers isn't a small problem; it isn't a bad thinking engine, but a babble engine that cannot think at all.

Meanwhile the idea that ChatGPT can replace the purpose of the essay confuses all of this. In published form, the purpose of the essay is to convey thoughts that a thinking person has, clearly and persuasively.

ChatGPT can produce "copy", it can churn out lots of low quality "content", but it cannot analyze or explain, because both of those amount to communicating the thoughts of a mind it does not have.

That some journalists think this is valuable might tell us something about their process.

If I ask ChatGPT to do, say, @KofmanMichael's job and explain the Ukraine War to me, it will babble confidently but because that babble is not and cannot be based on any knowledge or analysis of the topic, it is entirely worthless except in its power to deceive.

Alternatively, we use essays to train these skills: analysis, organization and communication. ChatGPT replaces none of these, save perhaps the last in trivial ways.

When I assign an essay in a class, it isn't because I want the "copy"/"content" — that's actually almost wholly worthless to me. If my goal was to get One Unit of Essay, I could write a far better one than my students do and in far less time than it takes to sort/grade them.

The essay is a learning exercise. By analogy, I am asking my students to forge some nails not because I want nails — their nails will mostly be quite poor — but because I want them to learn how to smith things.

I want them to learn how to analyze, organize and communicate.

If the student uses ChatGPT to automate their analysis, well it can't do that, so the result is bad. Likewise for organization — it doesn't understand any of these concepts and so doesn't know how they go together. It does organization beyond rote formula badly.

That leaves communication — which is something it can sort of do. But here's the thing: students will need, in a variety of contexts and genres in their life, to be able to communicate the ideas they have clearly and convincingly, based on sound reasoning.

They cannot rely on a chatbot to do that for them in all cases, because some of those cases are going to be effectively live, in a conversation or an interview or a meeting or delivered live via notes or a Q&A.

If they've learned with a crutch, like taking a ChatGPT product and then editing it into a less insane shape and fixing all of its facts, they're not going to have learned those communication skills to frame an entire response live.

So a ChatGPT-assisted assignment wouldn't teach the things I want my assignments to teach, because, again, they are for teaching, not because I want One Unit of Essays as some kind of final product.

(Grades, by the by, are signals, also not products).

Now I can imagine more tailored chatbots still being useful in producing some things — highly formulaic documents, for instance. But making a machine that can analyze, understand its material and output an idea from that understanding — ChatGPT doesn't even attempt that.

So many of the defenders of this technology see a carriage rolling downhill and declare that the creators have invented the automobile.

There is no engine in there, and the engine is the hard part. ChatGPT does the easy part … badly … and the hard part not at all.
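The "babble engine" point above is easy to make concrete. Here is a toy sketch in Python (a bigram model; everything in it is invented for illustration, and real LLMs are enormously more sophisticated, though they too are trained to predict plausible next words rather than to understand):

Code: Select all
import random
from collections import defaultdict

# A toy "babble engine": string words together in plausible-looking
# arrangements with no knowledge of the topic whatsoever.
corpus = (
    "the essay is a learning exercise the essay is a vehicle for thinking "
    "the student learns to analyze organize and communicate ideas"
).split()

# Record which words follow which in the source text.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def babble(seed="the", length=12):
    """Emit a plausible word sequence; no meaning is involved."""
    out = [seed]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(babble())  # e.g. "the essay is a vehicle for thinking the student ..."

The output looks like language because it is stitched from language; nothing in the loop knows what an essay is.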


User avatar
Fist and Faith
Magister Vitae
Posts: 23439
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

It's the Chinese room, eh?
User avatar
Wosbald
A Brainwashed Religious Flunkie
Posts: 6084
Joined: Sat Feb 07, 2015 1:35 am
Been thanked: 2 times

Post by Wosbald »

+JMJ+
Fist and Faith wrote:It's the Chinese room, eh?
I had to look this up.

So, for reference:


The Chinese Room Argument


The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a 1980 article by American philosopher John Searle (1932– ). It has become one of the best-known arguments in recent philosophy. Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Hence the "Turing Test" is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument.

[…]
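To make the thought experiment concrete, here is a toy sketch in Python of the room's rulebook (the phrases and rules are invented for illustration; Searle's point is that no amount of such rule-following adds up to understanding):

Code: Select all
# The operator follows purely formal rules: symbols in, symbols out.
# Nothing inside the room understands Chinese.
RULEBOOK = {
    "你好": "你好！",                  # "Hello" -> "Hello!"
    "你会说中文吗": "会，说得很好。",    # "Do you speak Chinese?" -> "Yes, very well."
}

def room(slip):
    """Return the scripted reply for a slip of symbols, or a stock fallback."""
    return RULEBOOK.get(slip, "请再说一遍。")  # "Please say that again."

print(room("你会说中文吗"))  # fluent-looking output, zero comprehension inside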


User avatar
peter
The Gap Into Spam
Posts: 11488
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 4 times

Post by peter »

Z said above that a job that can be automated is a job not fit for a human to do.

Is the job unfit by virtue of being automatable? I mean, is it the very ability to be automated that renders the job unfit? Or do we assume that any job that could be automated must be sufficiently low-grade in its 'mindlessness' that it is really unfit/improper to ask anyone to do it?

The latter will certainly not be the case, because it is already understood that it is not the complexity of an operation that renders it difficult to automate so much as how little repetition it involves.

Example: the development of a robot capable of doing domestic chores will likely come way after the development of systems capable of carrying out medical operations on human patients, the reason being that the latter has far more repeatability than the job of cleaning up a sixteen-year-old's bedroom, which will have a billion different possible permutations and obstacles to be navigated. Operations are carried out within a very narrow band of parameters, and the best surgeons for a given operation are those who do that operation day after day after day. (The surgeon who did the gall bladder operation I had a few years ago told me he had done over five hundred, with only two ever returning with complications.)
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
User avatar
Fist and Faith
Magister Vitae
Posts: 23439
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

I disagree with the statement if either of those is the reason for it, and probably for other reasons as well.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

Interesting post on reddit that makes sense to me and meshes with my own understanding.
Stop treating ChatGPT like it knows anything.

A man owns a parrot, who he keeps in a cage in his house. The parrot, lacking stimulation, notices that the man frequently makes a certain set of sounds. It tries to replicate these sounds, and notices that when it does so, the man pays attention to the parrot. Desiring more stimulation, the parrot repeats these sounds until it is capable of a near-perfect mimicry of the phrase "fucking hell," which it will chirp at the slightest provocation, regardless of the circumstances.

There is a tendency on this subreddit and other places similar to it online to post breathless, gushing commentary on the capabilities of the large language model, ChatGPT. I see people asking the chatbot questions and treating the results as a revelation. We see venture capitalists preaching its revolutionary potential to juice stock prices or get other investors to chip in too. Or even highly impressionable lonely men projecting the illusion of intimacy onto ChatGPT.

It needs to stop. You need to stop. Just stop.

ChatGPT is impressive in its ability to mimic human writing. But that's all it's doing -- mimicry. When a human uses language, there is an intentionality at play, an idea that is being communicated: some thought behind the words being chosen, deployed, and transmitted to the reader, who goes through their own interpretative process and places that information within the context of their own understanding of the world and the issue being discussed.

ChatGPT cannot do the first part. It does not have intentionality. It is not capable of original research. It is not a knowledge creation tool. It does not meaningfully curate the source material when it produces its summaries or facsimiles.

If I asked ChatGPT to write a review of Star Wars Episode IV, A New Hope, it would not critically assess the qualities of that film. It would not understand the wizardry of its practical effects in the context of the 1970s film landscape. It would not appreciate how the script, while being a trope-filled pastiche of 1930s pulp cinema serials, is so finely tuned to deliver its story with so few extraneous asides, and how it is able to evoke a sense of a wider lived-in universe through a combination of set and prop design plus the naturalistic performances of its characters.

Instead it would gather up the thousands of reviews that actually did mention all those things and mush them together, outputting a reasonable approximation of a film review.

Crucially, if all of the source material is bunk, the output will be bunk. Consider the "I asked ChatGPT what future AI might be capable of" post I linked: if the preponderance of the source material ChatGPT is considering is written by wide-eyed enthusiasts with little grasp of the technical process or current state of AI research but an inveterate fondness for Isaac Asimov stories, then the result will reflect that.

What I think is happening here, when people treat ChatGPT like a knowledge creation tool, is that people are projecting their own hopes, dreams, and enthusiasms onto the results of their query. Much like the owner of the parrot, we are amused at the result, imparting meaning onto it that wasn't part of its creation. The lonely deluded rationalist didn't fall in love with an AI; he projected his own yearning for companionship onto a string of text in the same way an anime fan might project their yearning for companionship onto a dating sim or cartoon character.

It's the interpretation process of language run amok, given nothing solid to grasp onto, that treats mimicry as something more than it is.
.
User avatar
Fist and Faith
Magister Vitae
Posts: 23439
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

Sounds right.
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
peter
The Gap Into Spam
Posts: 11488
Joined: Tue Aug 25, 2009 10:08 am
Location: Another time. Another place.
Been thanked: 4 times

Post by peter »

I'm still not getting exactly what the tech is involved with this.

Is it capable of sorting all the data available to it and putting it together in new ways - in other words, innovating on its own behalf - or not?

If, say, it had all of the information contained in the different entries of the Encyclopaedia Britannica, could it, in response to a question, sort it, understand it, and present it in a new way previously unthought of - or not?

If it cannot do this then it has no claim to the appellation of 'Intelligence' of any kind, artificial or otherwise. It's just a smart and extensive filing cabinet.
The truth is a Lion and does not need protection. Once free it will look after itself.

....and the glory of the world becomes less than it was....
'Have we not served you well'
'Of course - you know you have.'
'Then let it end.'

We are the Bloodguard
User avatar
Fist and Faith
Magister Vitae
Posts: 23439
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

It's smart, but it has no claim to the appellation of 'Intelligence'?
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

peter wrote:Is it capable of sorting all the data available to it, and putting it together in new ways - in other words innovating on it's own behalf - or not?
I think "not". It's primary purpose is to construct text that looks like it was written by a human. This is serviced by a (carefully manicured) database which is used to inform this activity. (And that is nonetheless amazing.) But it's using the data to write something - it's not thinking about the data in any way.

This AI is like a 10th grader writing a book report. He reads enough of the textbook to write the report, and then writes the report by paraphrasing those parts of the textbook that seem important. But he doesn't care about anything except passing in a report and getting a good grade. He's not actually going to think about it, never mind make his own inferences from the data.
peter wrote:If it cannot do this then it has no claim to the appellation of 'Intelligence' of any kind, artificial or otherwise. It's just a smart and extensive filing cabinet.
Well, its claim to intelligence is that it can mimic the way humans write. You can't do that with a doorknob -- it takes intelligence.

And it takes intelligence to determine what on-hand data is relevant to its task, how to assemble that data into a coherent outline, and how to set a tone which matches the desired outcome.

It's not HAL. But that doesn't mean it isn't intelligent. What you can say is that it isn't sentient. It isn't a critical thinker.
.
User avatar
Zarathustra
The Gap Into Spam
Posts: 19621
Joined: Tue Jan 04, 2005 12:23 am

Post by Zarathustra »

Thinking can be automated, though understanding probably cannot. So what is thinking that doesn't include understanding? It's the manipulation of ideas/information. This process has a formal structure--syntax. Grammar is syntax. Logic is syntax. If you know the rules, you can manipulate the symbols or words without even understanding what they mean (i.e. the semantics). This is information processing, which is indeed one kind of thinking. Granted, we use the word "thinking" in ways that include more than simple information processing, but this is still one legitimate meaning of the word.
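A toy sketch in Python of that kind of syntax-only "thinking" (the rule base is invented for illustration): the program derives a new string by modus ponens without any notion of what P, Q, or R mean.

Code: Select all
# Facts and rules are uninterpreted symbols. Each rule reads "from X, conclude Y."
facts = {"P"}
rules = [("P", "Q"), ("Q", "R")]

# Apply modus ponens until nothing new can be derived.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)  # a "new" result, produced blindly
            changed = True

print(facts)  # {'P', 'Q', 'R'} -- R was never stated, only derived

The result R is "new" in exactly the sense described above: a true consequence no one typed in, reached without a flicker of understanding.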

The defenders of chatGPT (like myself) aren't assuming that it understands. When we say it can find new knowledge, it's like finding a new logical proof, or a new combination of the symbols that lead to a logically true result that no one has thought of before.

When I say that a job that can be automated isn't fit for humans, what I mean is that if a robot can do it, it is dehumanizing work. Humans aren't biological robots, and our minds are not universal Turing machines (i.e. "computers"). Being repetitive can be one aspect of it, but that criterion is neither necessary nor sufficient. Humans are creatures of cycles and rhythms, so repetition isn't necessarily dehumanizing. However, we're also creatures designed to adapt to change, and we often fulfill our greatest potential in novel situations. What is dehumanizing is menial, manual labor with repetitive motions that cause damage like carpal tunnel syndrome, or that literally destroys your body because it is hazardous. It's also dehumanizing to do something that doesn't utilize those aspects of being human that set us apart from robots/computers: our creativity, our emotions, our understanding, our compassion, our judgment, our talents.

Fist, thanks for the clarifications on whether this is political or not. These are ideas we debated in the Tank, so somehow we all understood that it fit under that umbrella. I'm not sure what makes it apolitical now. Is it just because we don't utter "Dem" or "Rep" when making our points? What if I brought up the liberal bias that is being found in chatGPT's answers? Why is it okay to analyze the situation in liberal terms, but not to mention that chatGPT itself has a liberal bias?

These are the problems you get into when you arbitrarily control speech. What are the exceptions? When will the hammer come down? Why is there a hammer at all? There are no rules or warnings on this board regarding forbidden content. The way the situation was handled was like a tantrum.
Joe Biden … putting the Dem in dementia since (at least) 2020.
User avatar
Fist and Faith
Magister Vitae
Posts: 23439
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

Zarathustra wrote:When we say it can find new knowledge, it's like finding a new logical proof, or a new combination of the symbols that lead to a logically true result that no one has thought of before.
Would this extend to science, so that it could piece together various facts in ways we never thought to, and come up with a new energy source, or way to use an old source that is far more efficient than any way we currently have?
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
Damelon
Lord
Posts: 8540
Joined: Fri Dec 13, 2002 10:40 pm
Location: Illinois
Has thanked: 2 times
Been thanked: 4 times

Post by Damelon »

peter wrote:Ask it, what is the significance of the white hand holding the apple found near the excavation site of the Venus de Milo, and how does it relate to the fall of Troy?… ;)
I asked it, Peter.
As an AI language model, I don't have knowledge of any specific discovery of a white hand holding an apple near the excavation site of the Venus de Milo. However, I can tell you that in Greek mythology, the apple is significant because it was the prize in a beauty contest between the goddesses Aphrodite, Athena, and Hera, which ultimately led to the Trojan War. The Trojan prince Paris was tasked with deciding which goddess was the most beautiful and awarded the golden apple to Aphrodite, who had promised him the love of the most beautiful woman in the world, Helen of Sparta. This led to the abduction of Helen and the subsequent Greek invasion of Troy, which marked the beginning of the ten-year Trojan War. So, the apple is often seen as a symbol of discord, and its association with the Trojan War highlights the destructive consequences of vanity, desire, and betrayal in ancient Greek mythology.
User avatar
wayfriend
.
Posts: 20957
Joined: Wed Apr 21, 2004 12:34 am
Has thanked: 2 times
Been thanked: 4 times

Post by wayfriend »

BTW, be careful to distinguish between what chatGPT can do - a specific AI with a specific and limited purpose - and what AIs could do in general. AIs may be able to find new knowledge; chatGPT cannot. chatGPT communicates, but it does not think.

Ever since Eliza was written in the mid-1960s, people have understood how humans anthropomorphize computer communication. In fact, Eliza was "Created to demonstrate the superficiality of communication between humans and machines." With a few dozen lines of code, it can make the average person think that they are talking to a sentient program.
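For flavor, here is a minimal ELIZA-style sketch in Python (the patterns are invented, but reflect-and-echo pattern matching is the technique Weizenbaum's program used):

Code: Select all
import re

# Swap pronouns so the echo sounds like a reply.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

RESPONSES = [
    (r"i need (.*)",      "Why do you need {0}?"),
    (r"i am (.*)",        "How long have you been {0}?"),
    (r"(.*) mother(.*)",  "Tell me more about your mother."),
    (r"(.*)",             "Please, go on."),
]

def reflect(fragment):
    return " ".join(REFLECT.get(w, w) for w in fragment.split())

def eliza(utterance):
    for pattern, template in RESPONSES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(eliza("I am worried about chatGPT"))
# -> How long have you been worried about chatgpt?

A handful of patterns like these was reportedly enough to convince some of Weizenbaum's own users that the program understood them.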

.
User avatar
Zarathustra
The Gap Into Spam
Posts: 19621
Joined: Tue Jan 04, 2005 12:23 am

Post by Zarathustra »

Fist and Faith wrote:
Zarathustra wrote:When we say it can find new knowledge, it's like finding a new logical proof, or a new combination of the symbols that lead to a logically true result that no one has thought of before.
Would this extend to science, so that it could piece together various facts in ways we never thought to, and come up with a new energy source, or way to use an old source that is far more efficient than any way we currently have?
I don’t know enough about chatGPT to talk specifics. However, if the positron can be ‘discovered’ simply by looking at the math, I don’t see why AI which processes math couldn’t make a similar ‘discovery.’ The fact that reality is explicable—i.e. rational—means that truth can be found through the application of math and logic, which AI should excel at doing. But I think humans probably are still needed to analyze the results and of course to verify them experimentally.
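For instance, the two roots of the relativistic energy relation can be surfaced by purely mechanical symbol manipulation, as a quick sympy sketch in Python shows (interpreting the negative root as an antiparticle was, of course, Dirac's human insight, not the algebra's):

Code: Select all
import sympy as sp

# Solve E^2 = p^2 c^2 + m^2 c^4 for E, as a machine would: blindly.
E = sp.Symbol("E", real=True)
p, m, c = sp.symbols("p m c", positive=True)
roots = sp.solve(sp.Eq(E**2, p**2 * c**2 + m**2 * c**4), E)
print(roots)
# e.g. [-sqrt(c**4*m**2 + c**2*p**2), sqrt(c**4*m**2 + c**2*p**2)]

The negative-energy branch falls out of the math whether or not anything understands it; deciding that it describes the positron took a physicist.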

Wos and WF both bring up relevant points about whether AI is able to think. I would use the word, ‘understand,’ because I believe thinking can be automated whereas understanding requires consciousness. But the point is similar. Without understanding what it is doing, can it create new knowledge?

It depends on what we call ‘knowledge.’ David Deutsch calls things like DNA knowledge, even prior to any sentient creature being aware of DNA, because he defines knowledge as ‘information with causal power.’ Regardless of whether you like that definition, it’s indisputable that DNA is information that has transformed our world. How do processes which aren’t conscious create such complex and transformative structures? If evolution can do it without consciousness, why can’t AI create equally powerful and transformative information with causal power?

But I have reservations about whether evolution actually happens ‘blindly.’ I suspect consciousness is involved much earlier in the evolution of life, so perhaps this analogy to AI is flawed.

I guess we’ll find out! It’s all happening right now. Exciting times, for sure.
Joe Biden … putting the Dem in dementia since (at least) 2020.
User avatar
Fist and Faith
Magister Vitae
Posts: 23439
Joined: Sun Dec 01, 2002 8:14 pm
Has thanked: 6 times
Been thanked: 30 times

Post by Fist and Faith »

Zarathustra wrote:I don’t know enough about chatGPT to talk specifics. However, if the positron can be ‘discovered’ simply by looking at the math, I don’t see why AI which processes math couldn’t make a similar ‘discovery.’
I wonder what the programmed command would have to be in order to make it tell us it has found an implication that it does not already see written anywhere. Then, if it gives us millions of implications every day, because there's so much we haven't noticed yet (and possibly things we noticed but don't care about?), how do we tell it to alert us only to implications that we find significant?

Zarathustra wrote:The fact that reality is explicable—i.e. rational—means that truth can be found through the application of math and logic, which AI should excel at doing.
Assuming we are sufficiently versed in those things to be able to program AI to find greater knowledge than we can find on our own.

Zarathustra wrote:But I think humans probably are still needed to analyze the results and of course to verify them experimentally.
I wonder at what point, assuming it is right every time, we will just take its word for it.

Zarathustra wrote:Wos and WF both bring up relevant points about whether AI is able to think. I would use the word, ‘understand,’ because I believe thinking can be automated whereas understanding requires consciousness. But the point is similar. Without understanding what it is doing, can it create new knowledge?
Exactly. How will it know it found something worth reporting?

Zarathustra wrote:It depends on what we call ‘knowledge.’ David Deutsch calls things like DNA knowledge, even prior to any sentient creature being aware of DNA, because he defines knowledge as ‘information with causal power.’ Regardless of whether you like that definition, it’s indisputable that DNA is information that has transformed our world. How do processes which aren’t conscious create such complex and transformative structures? If evolution can do it without consciousness, why can’t AI create equally powerful and transformative information with causal power?
If I remember the story correctly, somebody said that Sagan claimed finding information would be proof of intelligence. If we received radio signals that contained information, even if we didn't know what the information was, we would know it was created by an intelligence. This guy claimed that DNA contains information, and is therefore proof it was created by an intelligence. Of course, as you say, DNA more than contains information. It has causal power.
Zarathustra wrote:I guess we’ll find out! It’s all happening right now. Exciting times, for sure.
Holy cow, you aren't kidding!
All lies and jest
Still a man hears what he wants to hear
And disregards the rest
-Paul Simon
User avatar
Damelon
Lord
Posts: 8540
Joined: Fri Dec 13, 2002 10:40 pm
Location: Illinois
Has thanked: 2 times
Been thanked: 4 times

Post by Damelon »

Chat GPT is a tool.

I’ve found it useful in taking a copy of a legislative bill or a piece of statute and asking it to summarize the piece. It seems to do a good job at that.
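For anyone curious what that workflow looks like in code, here is a minimal sketch using OpenAI's Python package as it existed in early 2023 (the model name, prompt wording, and file name are assumptions for illustration, not a recommendation):

Code: Select all
import openai  # pip install openai; expects OPENAI_API_KEY in the environment

def summarize(bill_text):
    """Ask the chat model for a plain-English summary of a bill."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize legislation in plain English."},
            {"role": "user", "content": bill_text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage:
# print(summarize(open("house_bill_1234.txt").read()))

As with any of its output, the summary still needs checking by someone who knows the material.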

I asked it a simple question: how to apply for a program that I’m very familiar with. It wasn’t necessarily wrong about the outcome, but it was blowing a lot of smoke over the process of how applications were considered. It answered like it knew, but it didn’t give the right answer.

I suspect that the quality of the output increases with the quality of the inquiry. I know educators are worried about it, but I think it would take a good understanding of a subject to coax anything but an obviously manufactured essay out of it.
User avatar
SerScot
The Gap Into Spam
Posts: 4678
Joined: Tue Oct 23, 2007 9:37 pm

Post by SerScot »

In my humble opinion, we cannot call anything “strong AI” until we have a reasonably specific, testable definition of consciousness/sentience/sapience itself. I don’t see that coming in the near future. ;)
"Futility is the defining characteristic of life. Pain is proof of existence" - Thomas Covenant