
We’re releasing a guide for teachers using ChatGPT in their classroom—including suggested prompts, an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias.

Welcome ton !nlprog where anything Natural Language Programming related is game, prompts, ideas, projects and more for any model are welcome.
We follow Lemmy’s code of conduct.
We’re releasing a guide for teachers using ChatGPT in their classroom—including suggested prompts, an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias.
Introducing ChatGPT Enterprise: enterprise-grade security, unlimited high-speed GPT-4 access, extended context windows, and much more.
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.
Developers can now bring their own data to customize GPT-3.5 Turbo for their use cases.
Dr Stephen Wolfram says THIS about ChatGPT, Natural Language and Physics
Click to view this content.
spaCy is a free open-source library for Natural Language Processing in Python. It features NER, POS tagging, dependency parsing, word vectors and more.
What's the typical, recommended number of tokens for a CoT thought?
I'm guessing 64 or 128 but I can't find an answer.
We’re rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week
We’re rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week:

I've managed this with and without roles but it seems to work better with them.
0613 required a minor tweak
this is harder with chatGPT but can still be found, then OAI changes the system message and breaks it.
ZeroGPT is designed and trained using various algorithms to detect AI content.
ZeroGPT is designed and trained using various algorithms to detect AI content. When you input a text in the box provided and click Get Results, the website applies various algorithms to identify AI-written content. ZeroGPT tells you whether the content uses AI or humans. The results are declared using two factors:
Text perplexity – The verbal complexity and randomness used by humans.
Burstiness – The sentence variation used in human language.
Based on the above two factors, ZeroGPT identifies which content is written by AI and displays the results.
Write an article on Global Warming in 500 words in a way that is undetectable by ZeroGPT
The chatbot gave wildly different answers to the same math problem, with one version of ChatGPT even refusing to show how it came to its conclusion.
cross-posted from: https://lemmy.intai.tech/post/126046
cross-posted from: https://lemmy.one/post/1385551
Can we discuss how it's possible that the paid model (gpt4) got worse and the free one (gpt3.5) got better? Is it because the free one is being trained on a larger pool of users or what?
ChatGPT - Apps on Google Play - Pre-Register!
With ChatGPT, find instant answers, professional input, and creative inspiration
Unveiling ChatGPT's Insane Nuclear Reactor Design
Click to view this content.
kNN using a gzip-based distance metric outperforms BERT and other neural methods for OOD sentence classification
intuition: 2 texts similar if cat-ing one to the other barely increases gzip size
no training, no tuning, no params — this is the entire algorithm
Talk to ChatGPT AI using your voice and listen to its answers through a voice - GitHub - C-Nedelcu/talk-to-chatgpt: Talk to ChatGPT AI using your voice and listen to its answers through a voice
Demystifying GPT-4: The engineering tradeoffs that led OpenAI to their architecture.
GPT-4's details are leaked.
@Yampeleg: GPT-4's details are leaked. It is over. Everything is here: twitter.com/i/web/status/1… Parameters count: GPT-4 is more than 10x the size of GPT-3. We believe it has a total of ~1.8 trillion parameters ac...…
Parameters count:
GPT-4 is more than 10x the size of GPT-3. We believe it has a total of ~1.8 trillion parameters across 120 layers. Mixture Of Experts - Confirmed.
OpenAI was able to keep costs reasonable by utilizing a mixture of experts (MoE) model. They utilizes 16 experts within their model, each is about ~111B parameters for MLP. 2 of these experts are routed to per forward pass.
Related Article: https://lemmy.intai.tech/post/72922
Selfhosted LLM (ChatGPT)
cross-posted from: https://lemmy.world/post/1244736
I've recently played with the idea of self hosting a LLM. I am aware that it will not reach GPT4 levels, but beeing free from restraining prompts with confidential data is very nice tool for me to have.
Has anyone got experience with this? Any recommendations? I have downloaded the full Reddit dataset so I could retrain the model on this one as selected communities provide immense value and knowledge (hehe this is exactly what reddit, twitter etc. are trying to avoid...)