Lmao alright bud go fire all your employees and see how you do. Then you will understand who needs to be loyal to who.
That's fucked up.
Oh no, educated workers who know their worth and don't want to be taken advantage of. Maybe companies should value their employees if they want company loyalty.
And OpenAI is not personal use?
Your description is how pre-LLM chatbots work.
Not really, we just parallelized the computing and used other models to filter our training data and tokenize it. Sure, the loop looks more complex because of the parallelization and because the words used as inputs and selections get tokenized, but it doesn't change what the underlying principles are here.
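A toy sketch of what "tokenizing the words used as inputs" means at the simplest level (made-up vocabulary, word-level splitting; real LLM tokenizers like BPE work on subwords, but the principle is the same: text in, sequence of integer tokens out):

```python
# Toy word-level tokenizer: maps each distinct word to an integer ID.
def build_vocab(corpus):
    vocab = {}
    for word in corpus.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text, vocab):
    # Unknown words fall back to a reserved ID (-1 here, purely for illustration).
    return [vocab.get(word, -1) for word in text.split()]

corpus = "the cat sat on the mat"
vocab = build_vocab(corpus)
print(tokenize("the mat sat", vocab))  # -> [0, 4, 2]
```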
Emergent properties don't require feedback. They just need components of the system to interact to produce properties that the individual components don't have.
Yes, they need proper interaction, or you know, feedback, for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system, it's creating a new system to follow the old one. Which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not gonna somehow make something sentient or aware. For that to happen it needs to be able to influence its internal weighting or training data without external aid. Hint: these models are deterministic, meaning no, there is zero feedback or interaction to create emergent properties in this system.
Emergent properties are literally the only reason LLMs work at all.
No, LLMs work because we massively increased the size and throughput of our probability calculations, allowing increased precision in the predictions, which means they look more intelligible. That's it. Garbage in, garbage out still applies, and making the model larger does not mean this garbage is gonna magically create new control loops in your code. It might increase precision, since you have more options to compare and weight against, but it does not change the underlying system.
No, the queue will now add popular playlists after what you were listening to when you restart the app, if your previous queue was a generated one. Not sure of the exact steps to cause it, but it seems like if you were listening to a daily playlist and close the app, the next day the playlist has updated, and instead of pointing to the new daily it decides to point to one of the popular playlists for your next songs in queue. It doesn't stop the song you paused on, it just adds new shit to the queue after it once it loses track of where to point. Seems like they should just start shuffling your liked songs in that case, but nope, it points to a random pop playlist.
And I'd like to see that contract hold up in court lol
You have no idea what you are talking about. When they train a model they have two data sets: one that fine-tunes it and another that evaluates it. You never have the training data in the evaluation set or vice versa.
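A minimal sketch of that split, using made-up example data and scikit-learn's train_test_split (a manual slice works just as well); the only point is that nothing in the evaluation set appears in the training set:

```python
from sklearn.model_selection import train_test_split

# Made-up labeled examples, purely for illustration.
statements = [f"statement {i}" for i in range(100)]
labels = [i % 2 for i in range(100)]  # pretend 1 = true, 0 = false

# Hold out 20% for evaluation; the model never sees these during training.
train_x, eval_x, train_y, eval_y = train_test_split(
    statements, labels, test_size=0.2, random_state=42
)

assert not set(train_x) & set(eval_x)  # no overlap between the two sets
```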
That's not what I said at all. I said, as the paper stated, that the model is encoding trueness into its internal weights during training, and this was demonstrated to be more effective when data sets with a more equal distribution of true and false data points were used during training. If they used one-sided training data the effect was significantly biased. That's all the paper is describing.
If you give it 10 statements, 5 of which are true and 5 of which are false, and ask it to correctly label each statement, and it does so, and then you negate each statement and it correctly labels the negated truth values, there's more going on than simply "producing words."
It's not that there's more going on, it's that it had such a large training set that these false vs. true statements are likely covered somewhere in its set, and the probability states it should assign true or false to the statement.
And then, look at that, your next paragraph states exactly that: the models trained on true/false datasets performed extremely well at labeling true or false. It's saying the model is encoding or setting weights for the true and false values when that's the majority of its data set. That's basically it, you are reading too much into the paper.
AI has been a thing for decades. It means artificial intelligence; it does not mean a large language model. A specially designed system that operates based on predefined choices or operations is still AI, even if it's not a neural network and looks like classical programming. The computer enemies in games are AI, they mimic an intelligent player artificially. The computer opponent in Pong is also AI.
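To illustrate that last point, here's a made-up, rule-based Pong opponent; none of this is from any real Pong implementation, it just shows how predefined logic with no learning at all still counts as AI:

```python
# A rule-based Pong opponent: no learning, no neural network,
# just a predefined rule that mimics an intelligent player.
def opponent_move(paddle_y: float, ball_y: float, speed: float = 4.0) -> float:
    """Move the paddle toward the ball's vertical position, capped at `speed` per frame."""
    delta = ball_y - paddle_y
    if abs(delta) <= speed:
        return paddle_y + delta      # close enough, snap onto the ball
    return paddle_y + speed if delta > 0 else paddle_y - speed

# One frame of "thinking": paddle at y=100, ball at y=130 -> paddle moves to 104.
print(opponent_move(100.0, 130.0))
```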
Now if we want to talk about how stupid it is to use a predictive algorithm to run your markets, when it really only knows about previous events and can never truly extrapolate new data points and trends into actionable trades, then we could be here for hours. Just know it's not an LLM; there are different categories of AI, and an LLM is its own category.
Do you understand how they work or not? First, I take all the human text online. Next, I rank how likely those words are to come after one another. Last, I write a loop that picks the next most probable word until the end-of-line character is thought to be most probable. There you go, that's essentially the loop of an LLM. There are design elements that make creating the training data quicker, or make the model quicker at picking the next word, but at the core this is all they do.
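A toy version of that loop, with a hand-written bigram table standing in for the trained model (the words and probabilities are invented; real models predict over tokens with far more context, but the generation loop has the same shape):

```python
# Toy "model": probabilities of the next word given the previous one,
# standing in for weights learned from a huge pile of text.
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.9, "a": 0.1},
    "the":     {"cat": 0.6, "dog": 0.4},
    "cat":     {"sat": 0.7, "<end>": 0.3},
    "dog":     {"sat": 0.5, "<end>": 0.5},
    "sat":     {"<end>": 1.0},
}

def generate(max_words: int = 10) -> list[str]:
    words = []
    current = "<start>"
    for _ in range(max_words):
        # Greedy decoding: always take the single most probable next word,
        # stopping once the end marker is the most likely continuation.
        current = max(NEXT_WORD_PROBS[current], key=NEXT_WORD_PROBS[current].get)
        if current == "<end>":
            break
        words.append(current)
    return words

print(generate())  # -> ['the', 'cat', 'sat']
```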
It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.
I.e. the only duck it walks and quacks like is autocomplete, it does not have agency or any other "emergent" features. For something to even have an emergent property, the system needs to have feedback from itself, which an LLM does not.
Go ride supply-side Jesus a little harder, and evaporate your critical thinking skills in favor of authoritarian fairytales. Talk about being an idiot, as if those same religious institutions did not lobby for the privilege to not disclose, but sure, this isn't because of religions being able to lobby for laws and buy politicians, sure bud.
It's worth pointing out again to you that it's a granted exemption from reporting; it does not bar that clergy from reporting, it merely gives them a legal excuse not to report. But go on about how it's not protecting the clergy or church from disclosure.
How about you re-read the law: it gives him an exemption from reporting, it does not bar him from reporting; it's merely an excuse lobbied for by religious institutions. That POS decided not to report, instead using his exemption and blaming his lack of action on the abuser. Religions constantly demonstrate they enable abuse in multiple forms; stop apologizing for institutions eroding basic human rights by decree of myths and fairytales.
Wasn't it a tunnel and a bridge? Thought they got 2 of the 3 with the last route having different gauge rails which still fucks with the logistics.
Math is a language describing the fundamentals of our world and nature. God is a completely fabricated fairytale. You don't need a proof of English to see it exists; you do need evidence to prove that the big bad wolf is real. That's the difference. Fuck, you people need actual education.....
I do wonder what atheists experience when they trip...
I see patterns and have hallucinations when tripping. I've seen Doritos logos cover my wall and noticed the patterns in mountains. No, you are not talking to God, you are having essentially a waking dream, and I don't attribute dreams, which are your subconscious trying to interpret your daily actions, to supernatural beings. That would be stupid.
A shared university toilet can still be part of a house or a low-pressure system. I've yet to see a public restroom that had a lid for the toilet itself, outside of low-pressure toilets in communal housing. If you can link to where they clarified the shared university toilet was high pressure, I will stand corrected.
For reducing visible particles, not the nanoparticles, which have a higher concentration. Regardless, it's all kinda moot, as neither produces levels of bacteria that could realistically get you sick unless you stick your face above the bowl, or by the side openings near the lid, while flushing, and that person has an infection. Just wanted to clarify the science behind it.
Honestly don't worry; as Mythbusters pointed out, neither is a health concern due to the relatively low concentration, and if anything it helps your immune system build up defenses against E. coli and the like. Just know lid up during your flush means the scent lingers less. Then after the flush I would advise closing the lid to keep any lingering scent in the air of the toilet bowl and less likely to be disturbed by any airflow in the room. Just don't like seeing misleading info spread around, as honestly the science behind it is pretty interesting.

iRacing Driver Standards
Is it just me, or has iRacing been feeling more like bumper cars recently? I haven't been playing iRacing for too long, so I'm curious about others' perspectives, but recently it's felt like the driving standard of those competing, even in higher splits, has dramatically dropped. I'm constantly competing with people using others as brakes for corners, or taking out half the pack in like the first or second lap when acting like a hero makes zero sense. Granted, I'm not the fastest out there, but it feels like people's willingness to take a corner three wide without changing speed or braking points, instead of having the common sense to back out of a maneuver, has dramatically increased as of late. Just wanted to see what others' takes are. Maybe I'm just getting too sucked in as of late and letting little shit affect me, but it's killed a lot of fun these past couple weeks. Maybe the lower splits just had more people focused on learning the craft vs people in higher splits thinking they are Max Verstappen, idk.