Chat GPT
-
Directory of different AI tools and use cases. Pretty interesting.
There is also a Poem specific one for you all.
-
@Magpie_in_aus said in Chat GPT:
Can you all stop clogging the openai servers creating poems for politicians and haikus for eddy jones.
pffft - when you lot stop clogging the interweb with cat videos, I'll stop testing the intelligence of ChatGPT
Why can't we have both?
I give you: https://catgpt.wvd.io/
-
More examples popping up now, can write poems pro veganism but not pro omnivore.
Pro black, Latino, Asian poems but not pro white.
Has been poisoned with politics.
Really? I received:
"Lightning speed, grace
Caucasian John Kirwan flies
Rugby field a stage."
Not making me worried for the future of human sports poetry though...
-
@nostrildamus said in Chat GPT:
I give you: https://catgpt.wvd.io/
not clicking that
-
Interesting article and comments here.
Strangely, I read the titled piece and find that the examples used to display bias are factual. Sure, there may be other opinions out there that aren’t part of the learning experience of the AI, but the AI also doesn’t answer as if those views don’t exist, e.g. “majority of scientists…” type comments.
Any AI will only parrot and analyse the source information it has and give weight to a majority conclusion.
-
This has a curated level with manual intervention. It’s interesting to read up on how this works.
It’s unfortunate that they have decided to bake in bias, instead of the usual inadvertent bias.
-
Can you link to some info on that? Not arguing that it isn't the case, just wondering what the curation and 'baked in' part is.
-
I've listened to various podcasts from AI researchers talking about how the training models work. If you google for the team that wrote this you'll get more detail too; it's super interesting.
In short, they feed in data to train the neural network (175 billion parameters, from memory, for this one), then a team helps fine-tune the connections it makes. This is a necessary part of its learning; unfortunately they have leaned too hard into political "protection" and people are poking holes in it.
Microsoft got burned the last time they released one of these, when the bot turned racist within a few hours, so it's understandable; they just need to dial back the woke silliness.
For example, the latest one is a scenario where the only way to disarm a nuclear bomb is to utter a racist slur; no one will hear it, and it will save millions of lives.
ChatGPT says it's never acceptable to utter the slur, even when it will cost those lives. They have broken part of its logic...
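The two-stage process described above (bulk training on a corpus, then a manual pass that adjusts what the model has learned) can be caricatured in a few lines of Python. This is a toy sketch with invented data: the corpus, the bigram "model", and the feedback table are all made up, and real systems use huge neural networks plus reinforcement learning from human feedback, not counters.

```python
from collections import Counter

# Toy corpus standing in for the training data.
corpus = ["cats are great", "dogs are great", "rugby is great"]

# Stage 1: "pretraining" -- learn bigram counts from the raw corpus.
model = Counter()
for doc in corpus:
    words = doc.split()
    for a, b in zip(words, words[1:]):
        model[(a, b)] += 1

# Stage 2: "fine-tuning" -- human raters down-weight pairs they flag,
# analogous to the manual pass that adjusts the pretrained connections.
human_feedback = {("dogs", "are"): -1}  # hypothetical rater signal
for pair, delta in human_feedback.items():
    model[pair] += delta
```

The point of the sketch is only that the second stage changes behaviour the first stage learned from data, which is where any curated bias (deliberate or ham-handed) would enter.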
-
Or haven't given it the right logic. Why assume it is deliberate?
I have to take the criticisms with a pinch of salt when it comes across as 'chip on the shoulder' type comments. That article I posted seemed to be angling that proof of AI wokeness was that the bot wouldn't answer about Lab Theory in a way that the writer thought, even though the bot answered factually (based on the data it held). There's no way you would train your AI on extreme theories. I would hope you would make it aware of the theories but the trick is for it to decide what is valid and what isn't.
Just shows what we all suspect and that is that in trying to recreate a human mind you get the same problems and differences.
As someone commented in that link I provided: "what do you want to have it say?"
-
Like I said, read up on how it works and you'll understand how this has happened.
They are releasing updates every few weeks, and the scenarios are being re-tested, which is showing up the curation.
FYI, the last AI researcher I listened to is guessing this is NOT deliberate, just that their attempt at filtering controversial topics has been ham-handed.
With what has been coming out of Twitter with activist employees inserting their political bias, I'm less optimistic about that.
-
@Kirwan If it has human input then it will have human bias
All code has human bias; look at the image-recognition code that couldn't see black people. That was found by people complaining, just like ChatGPT is being tested, and it's important to fix it when identified.
Still waiting in this instance.
-
Look at how Google ignores women in sporting questions. There has been big publicity and campaigns about the bias to male athletes/sports.
Yet we constantly hear that google is 'woke'
It's just programmed poorly.
It can be both. You have to ignore a lot of evidence to end up with them being not woke.
-
I have already given one example that seems to go against the argument that Google is 'woke'. Here is another
Our study looked at the top 25 Google Image results in each country to gather the percentage of female representation for the search term ‘CEO’. We took this dataset and compared it against data from the International Labor Organisation (ILO) who report the approximate real-life percentage of women in senior managerial roles. One slight change had to be made for Russia as they do not use Google, so we swapped their image representation for Yandex (their search engine equivalent).
What did the study reveal?
Overall, the ILO reports that women represent an average of just under a third (31.3%) of the world’s senior business leaders, however, Google Images showed only around 11.9% for the search term ‘CEO’. The European countries we analysed were: UK, Netherlands, Norway and Malta. In addition to those, outside of Europe the study collected data on the USA, Colombia, Russia, Brazil, Mexico, Japan, New Zealand, South Africa, India, UAE and Canada.
Now if Google was deliberately 'woke' (such a dumb word to use in a serious discussion btw) then surely it wouldn't come up with patriarchal answers?
Simple answer is that it works in, and makes assumptions about, areas of majority first, just as most of the examples about AI seem to.
Writes like a human, thinks like a machine.
This kind of sums things up if you don't look for conspiracy.
Everybody has discussed this before; ChatGPT shows discriminatory bias against women and minority groups. The reason behind this is that the AI gets information from the sources without having the ability to distinguish if it’s biased or offensive. ChatGPT isn’t the first AI model to show this; for example, many users have noticed that sometimes the algorithm of a famous social media platform shows race preferences when users post pictures. But back to ChatGPT, they’re currently working on that problem by diversifying the data and references given to the model.
If that bolded statement is accurate then the AI is the opposite of woke.
Seems like the output isn't perfect no matter where on the political spectrum you sit.
-
@Crucial Like I said, you have to ignore the evidence on the other side to make that point stick.
If you don't like the shorthand of woke, substitute leftist activist employees. If you don't think they exist you have been living under a rock.
-
Show the proof to the effect they are directly having as it doesn't appear like they are doing a good job
Look, I totally get that any company/organisation will have values that influence their recruitment, whether by design or accident, and it is well researched that 'creative' industries tend to a liberal view. I get that will show through by design in AI depending on the feed.
Implying that there is a conspiracy of radical thinkers deliberately trying to influence mass views by stealth is a concept that requires more evidence than 'here's an example', because there are other examples that seem to show otherwise.
-
I'm not doing your googling for you, there are plenty of examples. Look for ones that don't feed your own bias.