Artificial Intelligence (Previously "Chat GPT")
-
@Rembrandt said in Artificial Intelligence (Previously "Chat GPT"):
Yeah, it's a concern. Such powerful technology; the long-term impacts could be massive.
Same thing happened with search engines, Wikipedia and calculators.
-
@NTA said in Artificial Intelligence (Previously "Chat GPT"):
@Rembrandt said in Artificial Intelligence (Previously "Chat GPT"):
Yeah, it's a concern. Such powerful technology; the long-term impacts could be massive.
I think there was a similar study around taking photos or video on your phone, and how it harmed short-term memory of the event.
When I'd sketch places, I'd remember them even when visiting two decades later. My memory is not quite so good for places I photograph and then revisit.
-
@nostrildamus said in Artificial Intelligence (Previously "Chat GPT"):
@NTA said in Artificial Intelligence (Previously "Chat GPT"):
@Rembrandt said in Artificial Intelligence (Previously "Chat GPT"):
Yeah, it's a concern. Such powerful technology; the long-term impacts could be massive.
I think there was a similar study around taking photos or video on your phone, and how it harmed short-term memory of the event.
When I'd sketch places, I'd remember them even when visiting two decades later. My memory is not quite so good for places I photograph and then revisit.
Same principle as taking notes in lectures freehand with pen and paper: the information lodges in a different part of your brain than if you’re simply transcribing on a laptop (most people type faster than they write). Something to do with having to order your thoughts and choose what to record.
-
Here's an example of lazy AI getting obvious shit wrong.
https://x.com/i/grok/share/PNSENgGcUVUDNjhTh1XxMd0Pp
-
And in another criticism - I got ChatGPT to review a report I uploaded. I asked it questions and had to continually prompt it and fact-check. When told to restrict its answers to the report, it made shit up and offered the excuse that it must have got the answers it provided from other sources.
I wonder how many people are taking what these models are saying at face value.
-
Yes, free. The obvious solution, if I really wanted accuracy, would be to host and firewall my own, restricted to a storage container that doesn't have internet access. But then I'd still have to invest in tweaking the indexes etc.
And if I'm honest, right now I CBF.
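If anyone else does feel like it, the shape of it is basically retrieval-augmented generation over the report: chunk it, index it, and only ever hand the retrieved excerpts to the model. A rough sketch below - TF-IDF retrieval purely to illustrate, and query_local_model() is a stand-in for whatever offline runtime you'd actually host, not a real API. It narrows what the model sees; it doesn't eliminate hallucination.

```python
# Minimal "ask the report" sketch: index the chunks, retrieve the best matches,
# and build a prompt that only contains those excerpts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def chunk(text: str, size: int = 800) -> list[str]:
    """Naive fixed-size chunking of the report text."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the question (TF-IDF cosine similarity)."""
    vectoriser = TfidfVectorizer(stop_words="english")
    matrix = vectoriser.fit_transform(chunks)
    scores = cosine_similarity(vectoriser.transform([question]), matrix)[0]
    return [chunks[i] for i in scores.argsort()[::-1][:k]]


def query_local_model(prompt: str) -> str:
    """Stand-in for whatever firewalled local runtime you host; illustrative only."""
    raise NotImplementedError


def answer(question: str, report_text: str) -> str:
    excerpts = "\n---\n".join(retrieve(question, chunk(report_text)))
    prompt = (
        "Answer ONLY from the excerpts below. If the answer is not in them, "
        "say you can't find it.\n\n"
        f"Excerpts:\n{excerpts}\n\nQuestion: {question}"
    )
    return query_local_model(prompt)
```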
-
@antipodean said in Artificial Intelligence (Previously "Chat GPT"):
And in another criticism - I got ChatGPT to review a report I uploaded. I asked it questions and had to continually prompt it and fact-check. When told to restrict its answers to the report, it made shit up and offered the excuse that it must have got the answers it provided from other sources.
I wonder how many people are taking what these models are saying at face value.
Huge, huge issue for us in education.
AI slop is everywhere.
-
@antipodean you’d still get hallucinations with a local model. It’s baked in.
You need the massive data centres and the top models to reduce it. I’ve noticed that MCP task trackers help enormously; they help the model keep track of its chain of thought.
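For anyone wondering what a "task tracker" actually is here: it's just a set of tools the model can call to write down its plan and re-read it mid-session, so it doesn't lose the thread. A rough sketch below, assuming the official MCP Python SDK's FastMCP interface; the tool names and the in-memory list are purely illustrative.

```python
# Tiny MCP server exposing a task tracker the model can call to record and
# re-read its own plan. Assumes the MCP Python SDK's FastMCP interface.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("task-tracker")
tasks: list[dict] = []  # in-memory plan; illustrative only


@mcp.tool()
def add_task(description: str) -> str:
    """Record a step in the current plan."""
    tasks.append({"id": len(tasks) + 1, "description": description, "done": False})
    return f"added task {len(tasks)}"


@mcp.tool()
def complete_task(task_id: int) -> str:
    """Mark a step as finished."""
    tasks[task_id - 1]["done"] = True
    return f"completed task {task_id}"


@mcp.tool()
def list_tasks() -> str:
    """Return the whole plan so the model can re-ground itself."""
    return "\n".join(
        f"[{'x' if t['done'] else ' '}] {t['id']}. {t['description']}" for t in tasks
    )


if __name__ == "__main__":
    mcp.run()  # stdio transport by default; point the agent/client at it
```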
-
@gt12 said in Artificial Intelligence (Previously "Chat GPT"):
@antipodean said in Artificial Intelligence (Previously "Chat GPT"):
And in another criticism - I got ChatGPT to review a report I uploaded. I asked it questions and had to continually prompt it and fact-check. When told to restrict its answers to the report, it made shit up and offered the excuse that it must have got the answers it provided from other sources.
I wonder how many people are taking what these models are saying at face value.
Huge, huge issue for us in education.
AI slop is everywhere.
You guys need to transition to AI Tutors ASAP. I won’t hold my breath.
-
@gt12 said in Artificial Intelligence (Previously "Chat GPT"):
@antipodean said in Artificial Intelligence (Previously "Chat GPT"):
And in another criticism - I got ChatGPT to review a report I uploaded. I asked it questions and had to continually prompt it and fact-check. When told to restrict its answers to the report, it made shit up and offered the excuse that it must have got the answers it provided from other sources.
I wonder how many people are taking what these models are saying at face value.
Huge, huge issue for us in education.
AI slop is everywhere.
In my recently departed job I've had to develop an AI Action Plan and policy (e.g. what ~~we~~ they allow, what they don't allow, development of assessments utilising AI).
We realised how urgent it was when an assessment in our final capstone subject had 8 out of 11 AI-written responses. By far the biggest issue with it was that the assessment was just the students updating their project plan - it was essentially them just telling the lecturer what they'd done, what they were doing, and what they still had to do.
He rightfully failed them, the GM tried to get us to pass them, and we had to settle on them all undertaking a new assessment as a replacement.
-
@Nepia said in Artificial Intelligence (Previously "Chat GPT"):
8 out of 11 AI-written responses
You are no doubt correct. However, dumb people (i.e. non-Ferners) seem to put trust in 'AI detection software'. It appears to be as random as a TMO with a head clash.
-
Isn’t school supposed to be preparing you for a career? These tools are going to be used everywhere; education needs to reform completely.
I had three AI bots working as a team this afternoon, one searching for a root cause of an issue so I could resolve it.
Another reviewing pull requests from the team, and another speeding me along writing code.
They are increasingly becoming autonomous and, if used properly, undetectable. The kids will work out how to bypass any checks, and, like with most cheating, you’ll just catch the dummies.
-
My main use of AI at work is transcribing Teams meetings, chucking the transcription into Co-Pilot, and getting it to write the minutes and actions from the meeting. Even though the transcription doesn't record what was said all that well, Co-Pilot seems to do a great job of deciphering it into something that I only need to make small tweaks to. It can also explain some of the technical stuff talked about in the meeting, as it can look up what the technology is and what it does etc. Very handy and has saved me a lot of time already. In 5 years' time the corporate world is going to be a completely different place.
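For anyone without Copilot, the same trick works with any chat-capable model: feed it the raw transcript with a tight instruction to pull out the summary, decisions, and action items. A sketch below uses the OpenAI Python client purely as a stand-in; the model name, prompt wording, and file name are illustrative, not a recommendation.

```python
# Sketch: turn a rough (possibly mis-transcribed) meeting transcript into
# minutes and action items. The OpenAI client is used only as a stand-in here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def minutes_from_transcript(transcript: str) -> str:
    prompt = (
        "The following is an automatic meeting transcript and may contain "
        "mis-heard words. Produce: 1) a short summary, 2) decisions made, "
        "3) action items as 'owner - action - due date (if stated)'. "
        "Do not invent anything that is not in the transcript.\n\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("teams_transcript.txt") as f:  # hypothetical exported transcript
        print(minutes_from_transcript(f.read()))
```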
-
@No-Quarter get Teams Premium to save you the hassle.
The summary and action items it derives from meetings are reasonably good, but the problem I've experienced is that the transcription itself isn't great.
-
@Kirwan said in Artificial Intelligence (Previously "Chat GPT"):
Isn’t school supposed to be preparing you for a career? These tools are going to be used everywhere; education needs to reform completely.
I had three AI bots working as a team this afternoon, one searching for a root cause of an issue so I could resolve it.
Another reviewing pull requests from the team, and another speeding me along writing code.
They are increasingly becoming autonomous and, if used properly, undetectable. The kids will work out how to bypass any checks, and, like with most cheating, you’ll just catch the dummies.
Did you see this in my post:
(e.g. what ~~we~~ they allow, what they don't allow, development of assessments utilising AI).
However, that doesn't mean that academic integrity should be thrown out the window.
As @antipodean notes, it's a value add. AI is a tool, and you clearly use it as a tool, but that doesn't mean we need to "reform the education system completely".
Also, AI is actually pretty easy to spot in assessment without "checks" - if the marker is actually looking for it.
-
I mean the 1950s-style teacher-and-classroom model should be completely reformed.
1:1 AI Tutors shepherded by far fewer teachers.
This model is being tried in Texas and they're already seeing great results.
And, no, AI is not easy to spot - lazily used AI is. In a few weeks I could set up a model for a kid whose output would be indistinguishable from any of their other work.
But it would take them minutes to generate the work.
The way people are assessed needs to change.
The top models are ticking off exams pretty easily now.