Artificial Intelligence (Previously "Chat GPT")
-
I haven't used ChatGPT yet (didn't want to give out my phone number - Oz is notorious for spam calls, so I try not to give my phone number to anyone if possible) but have played around with other "AI".
I have a question for the combined knowledge of the fern: does ChatGPT, or any other AI, have a setting that allows you to upload your own docs and then only use those docs to answer your questions?
Reason being, I am writing policy for my current job, and I have five other jobs' worth of the same policy that I annoyingly have to re-write. So I was wondering if I can feed it into AI for a first draft and then go from there.
-
GPT+ lets you upload up to 32K tokens, so it should handle a decent-sized document. Feed it the doc as the first prompt, after saying what you want it to do with it.
FYI, I've had no issues with calls.
Cheers.
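If you want a rough sense of whether a document will fit, here's a minimal sketch using the common rule of thumb of ~4 characters per token for English prose. That ratio is an assumption, not exact - real counts depend on the model's tokenizer:

```python
def estimate_tokens(text):
    # Rule of thumb: roughly 4 characters per token for English prose.
    # An approximation only; real tokenizers vary by model.
    return len(text) // 4

def fits_context(text, limit=32_000):
    # True if the estimated token count fits a 32K-token window.
    return estimate_tokens(text) <= limit

# Stand-in for a policy document of a few thousand words:
doc = "All staff must comply with this policy. " * 500
print(estimate_tokens(doc), fits_context(doc))
```

If the estimate says the doc is too big, you'd split it into sections and feed them in one at a time.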
Clearly I need to read up on ChatGPT+ as I have no idea what tokens are. The AIs I've been using have just been basic question-asking ones, which actually turn out decent broad policy principles but don't go in depth.
-
-
A lot of ex-Googlers are discussing the woke activists being in control at Google. Gemini's embarrassing output is a good barometer of the craziness.
It’s literally a 1984 alternate history machine.
It's a shame, because underneath that huge prompt the tech is very useful. They need to throw away the woke layer.
-
You need to scroll through the Twitter thread.
Basically, Google released an AI tool that can be used to create images. It's infected with the usual progressive politics, so it keeps forcing 'diversity' into everything. Ask for images of Swedish people and you'll get Africans, Chinese, Indians, etc. Ask for specific locations and centuries and it will force the results to be non-white, even when that is clearly incorrect.
Lots of funny examples being posted on Twitter today.
Google has apologised, etc. However, this will only be corrected because of how obvious this particular distortion is. There'll be plenty of insidious shit in the rest of the model.
-
-
@voodoo think of tokens as words, or pieces of words and symbols - the chunks of text the model reads and writes. The idea is that you give the language model a prompt rather than the full context (or an entire sentence), leaving the model free to generate a response in a certain manner.
Example from chatgpt itself:
Let's say we have a language model trained on various texts, including cooking recipes. The user wants to generate a new recipe for chocolate cake, but they want to ensure that the recipe includes specific ingredients like cocoa powder and vanilla extract.
Instead of providing the entire recipe context, the user could provide a token prompt like this:
Chocolate cake recipe: Ingredients: [Cocoa powder], [Vanilla extract], ...
In this prompt, the user has highlighted the key ingredients they want to include in the recipe by enclosing them in square brackets. The model can then generate the rest of the recipe based on this prompt, incorporating the provided ingredients while also adding its own language generation to complete the recipe instructions.
The model might generate something like:
Chocolate Cake Recipe:
Ingredients:
- 2 cups all-purpose flour
- 1 3/4 cups granulated sugar
- 3/4 cup unsweetened cocoa powder
- 1 1/2 teaspoons baking powder
- 1 1/2 teaspoons baking soda
- 1 teaspoon salt
- 2 eggs
- 1 cup milk
- 1/2 cup vegetable oil
- 2 teaspoons vanilla extract
- 1 cup boiling water
Instructions:
1. Preheat oven to 350°F (175°C). Grease and flour two 9-inch round cake pans.
2. In a large bowl, mix together flour, sugar, cocoa powder, baking powder, baking soda, and salt.
3. Add eggs, milk, oil, and vanilla extract to the dry ingredients and beat on medium speed for 2 minutes.
4. Stir in boiling water until the batter is well combined.
5. Pour the batter evenly into the prepared pans.
6. Bake in preheated oven for 30 to 35 minutes, or until a toothpick inserted into the center comes out clean.
7. Allow cakes to cool in pans for 10 minutes, then remove from pans and cool completely on wire racks before frosting.
Enjoy your delicious chocolate cake!
Or another example. You prompt:
[Token Prompt] The detective walked into the dimly lit room and saw ________.
And it takes that and responds with the following:
[Generated Text] The detective walked into the dimly lit room and saw a shadowy figure lurking in the corner.
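To make "tokens as words or symbols" concrete, here's a toy tokenizer sketch. It just splits text into words and punctuation marks; real LLM tokenizers use sub-word schemes like byte-pair encoding, so their counts differ, but the idea is the same:

```python
import re

def toy_tokenize(text):
    # Toy tokenizer: one token per word or punctuation mark.
    # Real LLM tokenizers (byte-pair encoding etc.) split text into
    # sub-word pieces, so actual token counts will differ.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("The detective walked into the dimly lit room.")
print(tokens)       # 8 words plus the final '.'
print(len(tokens))  # 9
```

So a 32K-token limit is very roughly 20,000-25,000 English words, which is why a decent-sized policy document fits.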